
Build smarter with AI and your software supply chain
AI adoption is reshaping how software gets built. From coding assistants to full-fledged agentic AI applications, developers now routinely rely on artificial intelligence in their workflows. But a subtler shift is also underway: the rise of open source AI/ML models as foundational components in modern software development.
In the past year alone, Sonatype identified over 300,000 such models actively used in customer software supply chains. These models are downloaded, fine-tuned, and deployed just like traditional open source packages, and they carry many of the same risks.
Security, compliance, and governance challenges that once accompanied open source software adoption are now resurfacing in the AI era. The good news? We've seen this story before. By borrowing from proven supply chain practices, organizations can bring structure and confidence to their AI strategy.
The rise of open source AI models
For many developers, AI is no longer a separate function. It's embedded in the very fabric of software they build — through APIs, hosted services, and increasingly, downloadable open source models. Platforms like Hugging Face have made it easier than ever to integrate AI/ML models directly into software applications.
These models often arrive as pre-trained assets, ready to be fine-tuned with proprietary data. Once customized, they're deployed within applications, powering features like recommendation systems, document summarization, or even automated code generation. The convenience and power of these models make them attractive, but also introduce new layers of complexity.
Just like open source libraries, AI/ML models become dependencies in the software supply chain. And they deserve the same level of due diligence before they are trusted in production environments.
The AI/ML software supply chain
To manage the risks and rewards of open source AI, organizations must view model adoption through a supply chain lens.
This means recognizing that each model goes through a lifecycle:
- Ingestion: Models are pulled from public sources and evaluated.
- Tuning: Developers fine-tune or retrain models with internal data.
- Deployment: The modified models are deployed in SaaS products, services, or internal tools.
At every stage, there's potential for risk — from data poisoning during training to violations of data privacy regulations or inadvertent use of biased or unauthorized data. Organizations need to establish a process that mirrors what's already in place for software components: validate, control, and monitor.
When managed correctly, the AI/ML software supply chain becomes an enabler, not a liability. And it starts with shifting focus earlier in the development process.
Shifting left in model selection
Many teams have traditionally focused on monitoring models at runtime. While runtime observability is important, the most significant gains in efficiency and risk reduction come earlier, during selection and ingestion.
This is where supply chain thinking truly shines. Just as manufacturing companies rely on fewer, better suppliers to streamline operations and reduce defects, software teams should prioritize vetted, policy-aligned models before they're ever integrated into production code.
By applying policy at the point of ingestion, such as blocking models that don't meet licensing requirements or flagging those with biased training data, organizations can dramatically reduce downstream churn and improve the reliability of their applications.
The key insight? Fewer, better components lead to more predictable outcomes and faster development cycles.
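The idea of applying policy at the point of ingestion can be sketched as a simple gate that runs before a model ever enters an internal repository. This is an illustrative sketch only: the metadata shape, allowlist, and `flagged_for_bias` field are assumptions for the example, not any specific registry's or vendor's API.

```python
# Sketch of a policy gate applied at model ingestion.
# The metadata fields and the license allowlist are illustrative
# assumptions, not a real registry's schema.

ALLOWED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause"}

def vet_model(metadata: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a candidate model."""
    license_id = metadata.get("license", "").lower()
    if license_id not in ALLOWED_LICENSES:
        return False, f"license '{license_id or 'unknown'}' not on allowlist"
    if metadata.get("flagged_for_bias"):
        return False, "model flagged during bias review"
    return True, "passed ingestion policy"

allowed, reason = vet_model({"name": "example-summarizer", "license": "Apache-2.0"})
print(allowed, reason)  # True passed ingestion policy
```

In practice a gate like this would sit in a repository proxy or CI step, so models that fail never reach developers in the first place.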
Applying software supply chain governance to AI
What does AI governance look like in practice? It starts with the ability to inspect, validate, and enforce policy — not only for vulnerabilities or malware, but also for legal, ethical, and compliance criteria.
These might include:
- Data handling and privacy: Does the model process or retain sensitive information?
- Licensing and IP: Is the model's license compatible with your software usage?
- Bias and fairness: Has the model been evaluated for harmful outputs or imbalanced training?
- Provenance and traceability: Can you identify where the model came from and how it was modified?
With robust policy enforcement and automation, these checks can be applied continuously across the model lifecycle. And by incorporating them into a software bill of materials (SBOM), organizations gain full visibility and accountability over their AI components.
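Recording a model in an SBOM might look like the fragment below. This is a minimal sketch following the shape of a CycloneDX-style BOM (CycloneDX 1.5 defines a `machine-learning-model` component type); the model name, version, and URL are made up for illustration.

```python
import json

# Minimal, illustrative SBOM fragment recording an AI model as a component.
# Follows the shape of a CycloneDX-style BOM; the component values here
# are hypothetical.

model_component = {
    "type": "machine-learning-model",
    "name": "example-summarizer",
    "version": "1.2.0",
    "licenses": [{"license": {"id": "Apache-2.0"}}],
    "externalReferences": [
        {"type": "distribution", "url": "https://huggingface.co/example/summarizer"}
    ],
}

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [model_component],
}

print(json.dumps(sbom, indent=2))
```

Treating models as first-class SBOM components means the same audit and traceability tooling used for libraries can answer questions about AI dependencies too.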
Automation is essential
Given the pace of software development and the sheer volume of open source AI models being adopted, manual review simply doesn't scale. Intelligent automation is essential, not just for identifying potential risks, but for taking action.
Effective tools can help teams:
- Identify the best available models based on quality, usage history, and policy alignment.
- Block high-risk models from entering the organization.
- Suggest better alternatives with zero-effort upgrades and no breaking changes.
- Continuously monitor deployed models for emerging issues.
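The first two capabilities above can be sketched as a small ranking routine: filter out high-risk candidates, then order the rest by a quality score. The candidate fields and scoring here are assumptions for the example, not how any particular tool works.

```python
# Illustrative sketch of automated model selection: block high-risk
# candidates, then rank the remainder by quality score.
# The "high_risk" and "quality_score" fields are hypothetical.

def rank_candidates(candidates: list[dict]) -> list[dict]:
    safe = [c for c in candidates if not c.get("high_risk", False)]
    return sorted(safe, key=lambda c: c.get("quality_score", 0.0), reverse=True)

candidates = [
    {"name": "model-a", "quality_score": 0.91, "high_risk": False},
    {"name": "model-b", "quality_score": 0.97, "high_risk": True},   # blocked
    {"name": "model-c", "quality_score": 0.85, "high_risk": False},
]

best = rank_candidates(candidates)[0]
print(best["name"])  # model-a
```

Real implementations would draw the risk and quality signals from vulnerability data, license analysis, and usage history rather than static fields.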
And in highly regulated industries, automation plays another key role: enabling defensible audits. The ability to show not only what was done, but when and why, is becoming increasingly important as regulators sharpen their focus on AI.
Enabling developers, not slowing them down
Ultimately, the goal isn't to burden developers with more tools or complexity. It's to empower them to make better decisions with less friction. By integrating AI governance earlier in the development lifecycle — in the IDE, during CI/CD, or at the proxy level — organizations can help developers avoid risk altogether.
Developers shouldn't have to become experts in model licensing, data privacy law, or bias detection. Instead, intelligent tools can surface relevant insights at the right time, while offering automation to reduce the burden of remediation.
The end result? More secure, compliant, and efficient software, without compromising innovation.
Building for the future
The AI revolution in software development is here to stay. But it doesn't have to introduce unmanageable risk or technical debt. By applying well-established supply chain principles — visibility, traceability, policy enforcement, and automation — organizations can harness the full potential of AI while staying in control.
Whether you're adopting AI models to enhance your own products or building new applications with agentic AI, the foundational practices of software supply chain management still apply. And they've never been more relevant.
To learn more about how to build secure, compliant AI-driven software, watch the full webinar on-demand.

Aaron is a technical writer on Sonatype's Marketing team. He works at a crossroads of technical writing, developer advocacy, software development, and open source. He aims to get developers and non-technical collaborators to work well together via experimentation, feedback, and iteration so they ...