Artificial intelligence (AI) can already write code that compiles, runs, and sometimes even surprises us by passing tests. In many ways, it's crossed the threshold that once separated "assisted coding" from "autonomous creation."
But shipping software is not a syntax problem. It's a policy problem.
The hard part is not getting code to run — it's deciding if it should. Governance, not generation, is what separates a demo from a deployment.
This is the "last mile" problem of AI-powered development: the gap between what AI can produce and what an organization can responsibly release into production. And like every wave of automation before it, creation is outpacing control.
The "Last Mile" of Software Delivery
Every new automation wave runs into a last mile problem, from manufacturing robots to self-driving cars. The machine can do the work, but someone still has to decide where it’s allowed to go. AI development has its own version of this challenge.
AI can now generate most application code: functions, configurations, tests, even infrastructure definitions. But ensuring that output aligns with an organization's security, licensing, and compliance policies remains a manual and error-prone process.
Without a policy layer, AI will flood the pipes faster than we can clear them.
Consider a few common scenarios:
- AI suggests a dependency with a known vulnerability.
- AI-generated code violates an internal compliance policy.
- AI introduces a library under a restrictive license that could create legal exposure.
Each of these issues lives in the last mile — the space between an AI's output and a company's trusted release process.
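To make those scenarios concrete, here is a minimal sketch of what a last-mile dependency gate might look like. The advisory list, license allowlist, and package names are hypothetical placeholders, standing in for real data sources such as a vulnerability feed and a legal-approved license registry.

```python
# Minimal sketch of a dependency policy gate. The advisory set and
# license allowlist below are illustrative placeholders, not real data.

KNOWN_VULNERABLE = {("example-http-lib", "2.19.0"), ("example-logger", "2.14.1")}
ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def check_dependency(name: str, version: str, license_id: str) -> list[str]:
    """Return policy violations for an AI-suggested dependency."""
    violations = []
    if (name, version) in KNOWN_VULNERABLE:
        violations.append(f"{name} {version} has a known vulnerability")
    if license_id not in ALLOWED_LICENSES:
        violations.append(f"{name} is under a restricted license: {license_id}")
    return violations

# An AI assistant proposes a dependency; the gate decides whether it ships.
issues = check_dependency("example-logger", "2.14.1", "AGPL-3.0")
if issues:
    print("Blocked in the last mile:", issues)
```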
Why Policy Is the Bottleneck
Policy is the invisible architecture of software: the quiet part that decides whether things move forward or die in review.
Every engineering team operates under policy, whether it's explicit (security rules, license restrictions) or implicit (team conventions, unwritten standards).
In most organizations, "policy" lives in Slack threads, tribal memory, and the heads of the people you can't afford to lose. It's fragmented, undocumented, and rarely machine-readable. And AI, as capable as it is at generating code, can’t intuit the policies that keep that code safe, compliant, and releasable.
Scaling AI without policy is like giving every developer root access and hoping culture will handle the rest. The result is velocity without control and innovation without assurance.
AI doesn't eliminate the need for policy. It makes policy more essential and far more complex.
Enter MCP: A Control Plane for AI
To bridge this gap, we need a new kind of infrastructure — one that connects what AI creates to the policies that govern production.
That's where Model Context Protocol (MCP) comes in. MCP isn't just another integration standard. It's a control plane for trust. It connects what AI can do with what it's allowed to do.
MCP acts as the control plane for AI development: a policy-aware layer that standardizes how AI tools interact with the systems that define organizational rules, from security scanners to license databases to compliance frameworks.
With MCP:
- Every AI-generated output can be automatically checked against contextual policies before it ships.
- Policies stop being tribal knowledge and become enforceable, contextualized rules that scale across teams.
- AI assistants, agents, and pipelines can operate within defined trust boundaries, without constant human babysitting.
CI/CD pipelines made shipping repeatable. MCP makes trust repeatable. It doesn't just help you build faster. It helps you ship safely.
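As an illustration of what such a trust boundary could look like in practice, here is a rough sketch of a policy check exposed to AI assistants as an MCP tool, assuming the MCP Python SDK's FastMCP interface. The server name, tool name, and license rules are hypothetical examples, not a standard defined by MCP itself.

```python
# Sketch of a policy-aware MCP server, assuming the MCP Python SDK's
# FastMCP interface. The license rules below are hypothetical placeholders
# for an organization's real policy sources.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("policy-gate")

RESTRICTED_LICENSES = {"AGPL-3.0", "SSPL-1.0"}  # example legal policy

@mcp.tool()
def check_license(package: str, license_id: str) -> str:
    """Check a proposed dependency's license against organizational policy."""
    if license_id in RESTRICTED_LICENSES:
        return f"DENY: {package} is under {license_id}, which is restricted."
    return f"ALLOW: {package} ({license_id}) complies with license policy."

if __name__ == "__main__":
    # An AI assistant connected to this server can call check_license
    # before adding a dependency, so the policy check happens pre-ship.
    mcp.run()
```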
Policy-First AI Enablement
The organizations that win with AI won't be the ones that write the most code. They'll be the ones that ship code that survives contact with reality.
The rest will fall into one of two traps:
- Move fast and break things. They will push unverified AI code into production, only to face security, compliance, and legal consequences.
- Over-govern and stall. They will try to mitigate risk manually, creating review bottlenecks that erase any productivity gains from AI.
A policy-first approach powered by MCP avoids both extremes. It gives organizations a way to automate trust, enabling AI to accelerate delivery without sacrificing integrity, compliance, or control.
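One way to picture "automating trust" is a CI step that runs the organization's policy checks over AI-generated changes and fails the build on any violation, rather than waiting on a human reviewer. The check logic below is a hypothetical placeholder; a real gate would call out to security, license, and compliance sources.

```python
# Hypothetical CI gate: run policy checks over changed files and fail the
# build on any violation, so trust is enforced automatically rather than
# through manual review bottlenecks.
import sys

def run_policy_checks(changed_files: list[str]) -> list[str]:
    """Placeholder for real checks (security scan, license audit, compliance)."""
    violations = []
    for path in changed_files:
        if path.endswith("requirements.txt"):
            violations.append(f"{path}: new dependencies require a license review")
    return violations

if __name__ == "__main__":
    problems = run_policy_checks(sys.argv[1:])
    for problem in problems:
        print(f"policy violation: {problem}")
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the merge
```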
This isn't about slowing innovation. It's about making innovation sustainable.
From Hype to Habit
AI can already write the code. The next frontier isn't intelligence — it's judgment.
That means embedding policy, not as an afterthought or a patch, but as a first-class citizen in the AI development life cycle.
Model Context Protocol is how we start encoding judgment into the development pipeline. It's a blueprint for how organizations can align AI with the immutable laws of software delivery — policy, provenance, and proof.
AI has crossed many frontiers. The last mile is the one that matters most.
Brian Fox, CTO and co-founder of Sonatype, is a Governing Board Member for the Open Source Security Foundation (OpenSSF), a Governing Board Member for the Fintech Open Source Foundation (FINOS), a member of the Monetary Authority of Singapore Cyber and Technology Resilience Experts (CTREX) Panel, a ...