Artificial intelligence (AI) is rapidly transforming software development, accelerating innovation, streamlining processes, and opening the door to entirely new capabilities.
But as with any transformative technology, AI introduces new risks. From opaque training data to compliance challenges, organizations now face an urgent question: How do we secure and govern AI in the software supply chain?
Key themes are emerging at the intersection of AI and software security:
The urgent need for visibility into AI models and their provenance;
The growing weight of global and domestic regulations; and
The role of automation in managing complexity at scale.
Together, these forces highlight why organizations must act now to ensure that AI adoption is both secure and compliant, laying the groundwork for a future of innovation built on trust.
Developers have long relied on open source components to accelerate development, but those components also introduce vulnerabilities, licensing complexities, and operational risks.
AI models provide a similar breakthrough in software development, offering immense potential while introducing comparable risks.
Open source AI models inherit the same challenges as traditional open source components, but with added layers of uncertainty:
Data provenance: Unlike a logging library, an AI model is trained on vast datasets that may contain bias, malicious data, or copyrighted material.
Non-determinism: AI outputs aren't consistent — asking the same question twice may yield different answers. This unpredictability complicates risk assessment.
Security vulnerabilities: Models and their supporting frameworks can be tampered with, just like any other dependency in the supply chain.
The bottom line? Organizations need the same level of rigor for AI models as for traditional open source components, if not more.
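As a small illustration of that rigor, the sketch below treats a model artifact the way a pinned dependency would be treated: it verifies the file's SHA-256 digest against a known-good value before the model is loaded. The file path and digest here are hypothetical placeholders, not values from any particular tool or registry.

```python
import hashlib
from pathlib import Path

# Hypothetical values: in practice these would come from a model registry or
# an approval record, not be hard-coded in the application.
MODEL_PATH = Path("models/sentiment-classifier.onnx")
EXPECTED_SHA256 = "replace-with-the-approved-digest"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-gigabyte model artifacts never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(MODEL_PATH)
if actual != EXPECTED_SHA256:
    raise RuntimeError(
        f"Model artifact {MODEL_PATH} does not match its approved digest; refusing to load it."
    )
print(f"{MODEL_PATH} verified: {actual}")
```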
One of the biggest obstacles organizations face is simply knowing what AI models are in use. Security teams often aren't involved in model development or deployment, leaving them in the dark about what models exist, where they came from, and what risks they introduce.
Provenance matters not only for security, but also for licensing obligations. Just like open source software, many models have usage restrictions that teams may overlook. Without visibility, compliance gaps widen quickly.
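Even a crude inventory is better than none. As an intentionally simplistic, illustrative starting point, the sketch below walks a source tree and flags files whose extensions are commonly used for model weights; a real inventory would also need to cover model registries, container images, and externally hosted models.

```python
from pathlib import Path

# File extensions commonly used for model weights; extend as needed.
MODEL_EXTENSIONS = {".onnx", ".pt", ".pth", ".safetensors", ".gguf", ".h5"}

def find_model_files(root: str) -> list[Path]:
    """Return every file under `root` that looks like a serialized model."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in MODEL_EXTENSIONS
    )

for path in find_model_files("."):
    print(path)
```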
Governments worldwide are stepping in to shape how AI is governed. The European Union (EU) is leading the charge with the EU AI Act, the first comprehensive legislative framework for AI. It requires organizations to demonstrate transparency, fairness, and accountability in AI applications, especially when they’re used in critical decisions like hiring, lending, or healthcare.
The EU is not alone: other jurisdictions are drafting their own rules, and for global organizations this patchwork of requirements makes compliance even more complex. Whether you're building in the U.S. or abroad, if your AI models touch the EU market, you'll need to align with EU regulations.
One concept gaining traction as a solution is the AI Bill of Materials (AIBOM). Modeled on the software bill of materials (SBOM), an AIBOM provides a detailed inventory of an AI model's components, dependencies, datasets, and provenance. Its goal is simple but powerful: transparency.
An AIBOM promises to help organizations:
Identify vulnerabilities and dependencies within models.
Track licenses and obligations tied to models and datasets.
Prove compliance with emerging regulations.
Build trust with customers and regulators through transparency.
However, AIBOMs are still in their infancy. Much as with SBOMs in the years after the 2021 U.S. Executive Order on cybersecurity, the industry is still defining standards and use cases. The next frontier will be improving the quality of data within AIBOMs, ensuring they contain accurate, actionable information rather than a simple list of components.
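To make the idea concrete, here is a deliberately minimal, hypothetical sketch of the kind of record an AIBOM might capture for a single model: its identity, the datasets it was trained on, its license, and its upstream dependencies. The field names are illustrative only; emerging formats such as CycloneDX's machine-learning profile define richer, standardized fields.

```python
import json

# Hypothetical, simplified AIBOM-style record for one model.
aibom = {
    "model": {
        "name": "sentiment-classifier",
        "version": "1.4.0",
        "license": "apache-2.0",
        "supplier": "internal-ml-team",
        "sha256": "replace-with-artifact-digest",
    },
    "training_data": [
        {"name": "customer-reviews-2023", "source": "internal", "contains_pii": False},
        {"name": "public-sentiment-corpus", "source": "third-party", "license": "cc-by-4.0"},
    ],
    "dependencies": [
        {"name": "pytorch", "version": "2.3.1"},
        {"name": "transformers", "version": "4.44.0"},
    ],
}

# Write the record so downstream tooling (and auditors) can consume it.
with open("sentiment-classifier.aibom.json", "w") as f:
    json.dump(aibom, f, indent=2)
```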
AI isn't just about models. It's about the entire stack that surrounds them. From vector databases to observability tools to frameworks like PyTorch and LangChain, every new component is another link in the software supply chain.
To manage this complexity, DevSecOps practices are essential. By integrating automated security validation into CI/CD pipelines, organizations can:
Catch vulnerabilities and licensing issues early.
Ensure compliance artifacts (like SBOMs and AIBOMs) are generated consistently.
Maintain auditable, repeatable processes for regulators and customers.
Free developers to focus on innovation instead of manual security checks.
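A minimal sketch of what such a pipeline gate could look like, assuming the build produces an AIBOM-style JSON file like the hypothetical one above: the step fails if the file is missing or if a model or dataset license is not on an approved list. The file name and policy are placeholders for whatever your organization actually enforces.

```python
import json
import sys
from pathlib import Path

# Hypothetical policy: licenses the organization has approved for use.
APPROVED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause", "cc-by-4.0"}
AIBOM_FILE = Path("sentiment-classifier.aibom.json")

def main() -> int:
    if not AIBOM_FILE.exists():
        print(f"FAIL: required AIBOM {AIBOM_FILE} was not generated by the build.")
        return 1

    aibom = json.loads(AIBOM_FILE.read_text())
    problems = []

    model_license = aibom.get("model", {}).get("license")
    if model_license not in APPROVED_LICENSES:
        problems.append(f"model license {model_license!r} is not approved")

    for dataset in aibom.get("training_data", []):
        license_id = dataset.get("license")
        if license_id and license_id not in APPROVED_LICENSES:
            problems.append(
                f"dataset {dataset.get('name')!r} uses unapproved license {license_id!r}"
            )

    if problems:
        print("FAIL:", "; ".join(problems))
        return 1

    print("PASS: AIBOM present and all licenses approved.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

In a real pipeline this script would run as a step after the AIBOM is generated, and its nonzero exit code is what fails the build.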
Automation is particularly critical as models grow larger. Unlike pulling down a small Java library, downloading an AI model can involve gigabytes of data. Without proper caching and optimization, build times can soar, slowing development and increasing costs.
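As a rough illustration of that point, and not a description of any particular tool, the sketch below reuses a previously downloaded model artifact from a local cache directory when its digest matches, and only fetches it again when it doesn't. The URL, paths, and digest are hypothetical.

```python
import hashlib
import shutil
import urllib.request
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "model-artifacts"

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def fetch_model(url: str, expected_sha256: str, dest: Path) -> Path:
    """Copy the model from the local cache if present and intact; download it only once."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / expected_sha256

    if not (cached.exists() and sha256_of(cached) == expected_sha256):
        # Cache miss (or corrupted entry): pull the multi-gigabyte artifact once.
        urllib.request.urlretrieve(url, str(cached))
        if sha256_of(cached) != expected_sha256:
            cached.unlink()
            raise RuntimeError("Downloaded artifact did not match its expected digest.")

    shutil.copy(cached, dest)
    return dest

# Hypothetical usage:
# fetch_model("https://example.com/models/sentiment-classifier.onnx",
#             "expected-digest-goes-here",
#             Path("models/sentiment-classifier.onnx"))
```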
As AI adoption accelerates, organizations can't afford to treat governance as an afterthought. The risks — bias, data leakage, and regulatory noncompliance — are too significant. But with risks also come opportunities.
By adopting AIBOMs, embracing DevSecOps, and leaning on automation, organizations can build not only secure but also trustworthy AI systems. The result is a stronger foundation for innovation, where developers can focus on what they do best, while governance happens seamlessly in the background.
Want to dive deeper into these themes of governance, risk, and the future of software security? Watch the full webinar recording of AI Fireside Chat: Governance, Risk & Managing the Future of Software Security.