The days of putting off cybersecurity compliance are over.
The European Union (EU) Cyber Resilience Act (CRA) and the EU AI Act are ushering in a new era of accountability — one where organizations must prove their software is secure and was built securely from the start.
As these regulations take shape, they are fundamentally influencing how teams design, govern, and ship software. The shift impacts everything from open source consumption to AI adoption, supply chain practices, and long-term maintenance strategies.
Let's break down what's changing, why it matters, and how organizations can prepare.
For decades, the world of open source has run on a quiet assumption: Use at your own risk. If something went wrong, the impact was usually treated as a cost of doing business.
Regulators see a different picture:
Cyberattacks are increasing in frequency and impact.
Critical infrastructure, consumer devices, and entire services now depend on software.
Ransomware attacks and supply chain compromises are no longer rare occurrences, but commonplace threats.
Instead of hoping organizations "do the right thing," the CRA makes secure-by-design and secure-by-default practices a legal expectation.
One of the biggest shifts the CRA brings is the move from best effort to provable effort.
Manufacturers must now design secure software, provide timely updates throughout its life, and prove these practices were followed. Secure development, maintenance, and documentation are now the new baseline.
That evidence is where automation and artifacts like software bills of materials (SBOMs) become essential.
Instead of scrambling to reconstruct what you used or how you tested it, you'll need:
SBOMs generated as part of the build.
Logs and attestations for key security checks.
Traceability for vulnerabilities, patches, and risk decisions.
These practices are more than good housekeeping for established teams. They're critical for maintaining compliance and defensibility.
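As an illustration of generating evidence in the build itself, the sketch below assembles a minimal CycloneDX-style SBOM from dependency data captured at build time. The component names, versions, and hash inputs are hypothetical, and a real pipeline would use a dedicated SBOM tool rather than hand-rolled JSON; this only shows the shape of the artifact you'd emit on every build.

```python
import json
import hashlib
from datetime import datetime, timezone

def make_sbom(component_name, version, dependencies):
    """Build a minimal CycloneDX-style SBOM for one build.

    `dependencies` is a list of (name, version, sha256) tuples,
    e.g. resolved from a lockfile at build time.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {
            # Timestamp ties the SBOM to a specific build run.
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "component": {
                "type": "application",
                "name": component_name,
                "version": version,
            },
        },
        "components": [
            {
                "type": "library",
                "name": name,
                "version": ver,
                # Content hash lets auditors verify exactly what shipped.
                "hashes": [{"alg": "SHA-256", "content": digest}],
            }
            for name, ver, digest in dependencies
        ],
    }

# Hypothetical dependency captured from a lockfile.
deps = [("requests", "2.32.0",
         hashlib.sha256(b"requests-2.32.0").hexdigest())]
sbom = make_sbom("billing-service", "1.4.2", deps)
print(json.dumps(sbom, indent=2))
```

Emitting this document as a build artifact, alongside test logs and signed attestations, is what turns "we follow secure practices" into something you can hand to an auditor.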
In most organizations, most software isn't written in-house — it's assembled from open source components.
That reality doesn't change under the CRA, but the expectations around how you manage those components do.
To stay compliant and reduce risk, teams must:
Know exactly which components they rely on across every application and service.
Continuously check components for vulnerabilities, malicious behavior, and quality issues.
Quickly address or replace risky components with clear evidence of your decisions.
This is where software composition analysis (SCA), repository-level protections, and robust SBOM management become critical. They provide the visibility, automation, and traceability needed to meet CRA expectations while keeping development moving fast.
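At its core, an SCA check matches the components you actually use against known advisories. The toy sketch below shows that matching step with hypothetical inventory and advisory data; real tools resolve version *ranges* against live feeds such as OSV or the NVD rather than exact-version sets.

```python
def scan_components(components, advisories):
    """Flag components whose version appears in a known advisory.

    components: dict of name -> version, e.g. taken from an SBOM.
    advisories: dict of name -> set of affected versions (toy data;
    real SCA resolves version ranges against vulnerability feeds).
    Returns a list of (name, version) findings.
    """
    findings = []
    for name, version in components.items():
        if version in advisories.get(name, set()):
            findings.append((name, version))
    return findings

# Hypothetical inventory and advisory data for illustration.
components = {"log4j-core": "2.14.1", "jackson-databind": "2.15.2"}
advisories = {"log4j-core": {"2.14.0", "2.14.1"}}

print(scan_components(components, advisories))
# -> [('log4j-core', '2.14.1')]
```

The point is less the matching logic than where it runs: wired into the pipeline, so every build produces both the finding and the evidence that you checked.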
The CRA addresses general software security, while the EU AI Act focuses specifically on the responsibilities of building and deploying AI systems.
Despite their different scopes, the practical challenges overlap, because AI is ultimately another part of your software supply chain.
The AI Act emphasizes the need for:
Transparency and explainability.
Governance of training data and model behavior.
Controls around bias and high-risk AI use cases.
Visibility into all underlying components — models, libraries, infrastructure, and services.
Tracking model versions, sources, and usage with the same rigor applied to dependencies.
While regulatory details may continue to evolve, AI must be governed with more transparency, traceability, and accountability than most organizations apply today.
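One way to apply dependency-style rigor to models is to record each one the way a lockfile records a library: name, version, provenance, and a content hash of the weights. The sketch below is illustrative — the field names, model name, and URL are hypothetical, not a standard schema.

```python
import hashlib
import json

def pin_model(name, version, source_url, weights):
    """Record an AI model like a locked dependency.

    The SHA-256 of the weights pins exactly which artifact was
    deployed, so later audits can verify nothing was swapped.
    All field names here are illustrative, not a standard schema.
    """
    return {
        "name": name,
        "version": version,
        "source": source_url,
        "sha256": hashlib.sha256(weights).hexdigest(),
    }

# Hypothetical model entry; in practice `weights` would be the
# bytes of the serialized model file.
entry = pin_model(
    "sentiment-classifier",
    "3.1.0",
    "https://models.example.com/sentiment",
    b"<model weights>",
)
print(json.dumps(entry, indent=2))
```

With entries like this in version control, a model upgrade becomes a reviewable diff — the same workflow teams already trust for open source dependency bumps.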
You don't need a perfect end-state architecture to begin preparing, but you need a clear starting point.
Focus on these core actions:
Map Your Exposure. Identify which products, services, and AI systems fall under CRA or AI Act scope, so you can prioritize high-impact areas first.
Make Evidence Automatic. Generate SBOMs, run SCA, and capture build attestations as part of your pipelines to produce audit-ready proof by default.
Govern Your Supply Chain. Block risky components at ingestion, rely on vetted sources, and define clear policies for vulnerabilities, quality, and licensing.
Treat AI Models Like Dependencies. Track model origins, versions, and usage, and apply the same governance you use for open source components.
Start Small and Scale Up. Pilot these practices in one product or business unit, then expand based on what you learn.
These steps help teams move from reactive compliance efforts to proactive, automated, and sustainable security practices.
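The "govern your supply chain" step above often takes the form of a policy gate at ingestion. The sketch below evaluates a component against a simple policy before it is admitted; the thresholds, field names, and license list are illustrative assumptions, and real policy engines evaluate far richer criteria.

```python
def evaluate_component(component, policy):
    """Gate a component at ingestion against a simple policy.

    Returns (allowed, reasons). Field names and thresholds are
    illustrative; real policy engines check many more criteria.
    """
    reasons = []
    # Block components with vulnerabilities above the severity limit.
    if component["max_cvss"] >= policy["max_allowed_cvss"]:
        reasons.append(f"CVSS {component['max_cvss']} exceeds limit")
    # Block components outside the approved license list.
    if component["license"] not in policy["allowed_licenses"]:
        reasons.append(f"license {component['license']} not allowed")
    return (not reasons, reasons)

# Hypothetical policy and components for illustration.
policy = {"max_allowed_cvss": 7.0,
          "allowed_licenses": {"Apache-2.0", "MIT"}}
good = {"name": "requests", "max_cvss": 0.0, "license": "Apache-2.0"}
bad = {"name": "leftpad-ng", "max_cvss": 9.8, "license": "unknown"}

print(evaluate_component(good, policy))  # -> (True, [])
print(evaluate_component(bad, policy))
```

Running a gate like this at the repository boundary, rather than after deployment, is what makes the practice proactive instead of reactive — and every decision it records becomes part of your compliance evidence.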
CRA and AI regulation may seem like a "security problem," but the organizations that handle them well treat them as an opportunity to modernize how they build and run software.
At Sonatype, we've been focused on software supply chain security since long before the CRA or AI Act existed.
Our platform and resources are built to help you:
Understand and govern your SBOMs at scale.
Block malicious and vulnerable components before they ever reach your developers.
Automate policy enforcement across repositories, builds, and releases.
Bring AI capabilities into your software development.
Want to learn how to turn CRA and AI regulation into a strategic advantage instead of a blocker?
Watch our webinar, "CRA and AI Regulation: What's Next for Software Compliance," to see how industry leaders are preparing — and how Sonatype can help you embed secure, compliant, and resilient innovation into your software development life cycle.