AgentOps Is Here: What DevSecOps Leaders Need to Do Now

Written by Brian Fox | February 05, 2026

We've seen this pattern before. The industry gets a new kind of leverage, treats it like a tool upgrade, and then acts surprised when the operating model snaps under the strain. Waterfall didn't "become" Agile because of Jira. DevOps didn't "become" DevSecOps because someone added a scanner to CI. Those shifts worked because teams changed how decisions were made, who was accountable for what, and how alignment held when the pace increased.

That's what's happening again — except this time the pace isn't increasing because humans got better at shipping. It's increasing because we've introduced non-human actors into the software lifecycle. AI agents aren't just tools that assist developers. They are developers, in the sense that they propose and execute change. And once you have actors inside your SDLC — generating code, modifying dependencies, refactoring services, writing tests, opening pull requests — the assumptions underneath DevSecOps start to wobble.

Why DevSecOps Breaks at Agent Speed

DevSecOps was built for a world where humans write code, tools advise, policies gate, and review happens at human speed. Even as we automated more of the pipeline, the center of gravity stayed human: the tools suggested, the humans decided, and the gates slowed things down just enough to keep judgment in the loop. Agentic workflows invert that. Code is generated and revised continuously, decisions are made at machine speed, and changes happen in parallel. At that velocity, "we'll review it" stops being a control mechanism and becomes a hope.

The uncomfortable truth is that you can't scale human judgment to agent throughput. Not by hiring more reviewers, not by adding more checklists, and not by moving the same approvals earlier in the process. If an agent can propose a thousand changes in the time it takes a person to context-switch, then the idea that your control plane is a human approval step is already obsolete. DevSecOps didn't fail; it just wasn't designed for non-human developers.

So we need to name the next operating model clearly, because naming matters. The first group to define the model usually ends up defining the market around it.

AgentOps is the discipline of governing, securing, and aligning autonomous and semi-autonomous software agents throughout the SDLC.

That's not a rebrand of DevSecOps. It's what DevSecOps evolves into when agents become the primary producers of change. The simplest way to see the shift is to compare the mental models. DevSecOps assumes humans are the producers and tools are the amplifiers, with gates to catch the worst outcomes. AgentOps assumes agents are the producers, and the system must constrain and align them continuously — through policy that can be understood and enforced at decision time.

The Real Risk Is Unaligned Action

If that sounds abstract, it helps to focus on what the real risk actually is. When people worry about AI in software delivery, they often fixate on bad code: insecure logic, sloppy patterns, vulnerabilities introduced by hallucination. Those problems are real, but they're not the core failure mode. Most supply chain incidents aren't driven by genius adversaries; they're driven by contextual mistakes that become scalable. Someone didn't understand an environment. Someone didn't realize a constraint existed. Someone made a decision that was locally rational and globally damaging.

Humans carry context in messy ways — tribal knowledge, scar tissue from prior incidents, half-remembered rules, and a sense for where the landmines are buried. Agents don't have any of that unless you give it to them. They don't "just know" why a particular dependency is forbidden, or why a certain repo shouldn't be touched, or why a change that looks correct on paper will trigger a nasty operational consequence. When you combine high speed with missing context, you don't just get mistakes. You get mistakes at scale.

And we already know what scaled mistakes look like in the supply chain. Dependency confusion, typosquatting, accidental vulnerability propagation — these weren't powered by magic. They were powered by systems that allowed small misalignments to compound. Agents will reproduce the same category of failure faster, simply because they can act faster. Speed without context is risk.

Policy As the Control Plane

This is why policy becomes the control plane in AgentOps. In too many organizations, "policy" is treated as paperwork: a checklist, a PDF, a quarterly artifact that exists for an auditor. That posture barely works when humans are the only actors. It collapses when agents are the primary actors, because you can't rely on episodic review to keep continuous action aligned.

In AgentOps, policy has to be machine-readable, contextual, and enforceable at the moment a decision is made. If an agent is about to introduce a new dependency, the system should be able to deterministically answer: is this allowed, under what conditions, with what provenance, and with what evidence? If an agent is about to refactor a service, the system should constrain the blast radius by default. If an agent is about to "fix" a vulnerability by swapping in a package with unclear origins, the system should prevent that decision before it becomes an incident you're debugging at 2 a.m.
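To make that concrete, here is a minimal sketch of what a decision-time check can look like. Every name in it (DependencyProposal, APPROVED_REGISTRIES, and so on) is hypothetical rather than any particular product's schema; the point is only that the verdict is computed deterministically, with reasons attached, before the change lands:

```python
from dataclasses import dataclass

@dataclass
class DependencyProposal:
    """An agent's proposed change: add or swap a dependency.
    All fields here are illustrative."""
    name: str
    version: str
    registry: str              # where the package would resolve from
    provenance_verified: bool  # e.g., a signed build attestation is present

@dataclass
class Verdict:
    allowed: bool
    reasons: list[str]

APPROVED_REGISTRIES = {"https://repo.internal.example/releases"}
DENIED_PACKAGES = {"suspicious-utils"}  # forbidden, with the reason on record

def evaluate(p: DependencyProposal) -> Verdict:
    """Answer deterministically, at decision time: is this allowed, under
    what conditions, with what provenance, and with what evidence?"""
    reasons = []
    if p.registry not in APPROVED_REGISTRIES:
        reasons.append(f"registry {p.registry} is not on the approved list")
    if p.name in DENIED_PACKAGES:
        reasons.append(f"{p.name} is explicitly denied by policy")
    if not p.provenance_verified:
        reasons.append("no verifiable provenance attestation")
    return Verdict(allowed=not reasons, reasons=reasons or ["all conditions met"])
```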

That's the operational shift in plain terms: you move from blocking bad outcomes to preventing bad decisions. You design the environment agents operate within, rather than trying to micromanage the outputs after the fact. At human speed, post-hoc review is clumsy but survivable. At agent speed, it's a strategy for being continuously surprised.

So what should DevSecOps leaders do now, without turning this into a panic drill?

Start by being honest about where you've already automated decisions. CI/CD is automation. Auto-merged dependency updates are automation. Security remediation bots are automation. Agents are simply the next step: not just task automation, but decision automation. Map where those decisions exist today, because those are your early fault lines.

Next, identify where your organization relies on unwritten rules. Anywhere "common sense" substitutes for explicit policy is a place an agent will eventually do something locally correct and globally wrong. If a team's security posture depends on the fact that "everyone knows not to do that," you should assume an agent will eventually do exactly that — unless the constraint is encoded and enforceable.
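One way to see the shift: take a rule that lives today in tribal knowledge, such as "internal packages never come from the public registry" (the exact gap dependency confusion exploits), and encode it as a constraint the agent's tooling enforces. A hypothetical sketch, with names and URLs assumed:

```python
# A tribal rule made explicit: packages in the internal namespace must
# never resolve from a public registry. Humans "just know" this; an agent
# only knows it if it's encoded. Names and URLs below are illustrative.
INTERNAL_PREFIX = "acme-"
INTERNAL_REGISTRY = "https://repo.internal.example/releases"

def check_resolution(package: str, registry: str) -> None:
    """Fail before the install happens, not after the incident."""
    if package.startswith(INTERNAL_PREFIX) and registry != INTERNAL_REGISTRY:
        raise PermissionError(
            f"{package} is internal and must resolve from {INTERNAL_REGISTRY}, "
            f"not {registry} (the classic dependency-confusion setup)"
        )
```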

Then invest in policy as code, not policy as documentation. If policy can't be executed, it can't scale. And if it can't be enforced at decision time, it isn't policy — it's commentary. The goal isn't to create more gates. The goal is to create deterministic constraints that make the safe path the default path, even when the actor is non-human.
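Mechanically, policy as code means the rules are versioned, testable data plus a deterministic evaluator, with deny as the fallback so unanticipated actions fail safe. A minimal sketch, with the rule shapes and names assumed for illustration rather than drawn from any particular tool:

```python
import fnmatch

# Policy as executable data, not prose: version it, review it, test it
# like any other code in the repo.
POLICY = {
    "version": "2026-02-01",
    "default": "deny",  # the safe path is the default path
    "rules": [
        {"action": "add_dependency", "match": "org.apache.*", "effect": "allow"},
        {"action": "add_dependency", "match": "*-snapshot",   "effect": "deny"},
        {"action": "modify_service", "match": "payments/*",   "effect": "deny"},
    ],
}

def decide(action: str, target: str, policy: dict = POLICY) -> str:
    """First matching rule wins; anything unmatched falls to the default."""
    for rule in policy["rules"]:
        if rule["action"] == action and fnmatch.fnmatch(target, rule["match"]):
            return rule["effect"]
    return policy["default"]

assert decide("add_dependency", "org.apache.commons-lang3") == "allow"
assert decide("add_dependency", "internal-tool") == "deny"  # no match: default deny
```

First match wins and everything unmatched falls to deny, which is what "the safe path is the default path" means in practice.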

Finally, treat software supply chain data as foundational infrastructure. Agents will make decisions based on what they can see. If your inventory is incomplete, your provenance is weak, or your dependency intelligence is shallow, your agents will operate in a fog — and so will you. The organizations that handle this well won't be the ones with the most impressive demos; they'll be the ones with the most disciplined data and the clearest operational rules.
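The operational consequence is easy to state in code: when inventory or provenance can't answer, the only sane default is to fail closed. A sketch, with the record shape entirely assumed and no real SBOM tooling implied:

```python
# If the inventory can't answer, say so explicitly rather than let an
# agent guess. The record shape below is assumed for illustration.
INVENTORY = {
    "requests": {"source": "github.com/psf/requests", "attested": True, "cves": []},
}

def can_agent_use(package: str) -> bool:
    record = INVENTORY.get(package)
    if record is None:
        # Unknown to the inventory means the agent is in fog: fail closed.
        return False
    return record["attested"] and not record["cves"]
```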

The temptation right now is to wait for "best practices" to emerge, for a vendor category to mature, for a consensus vocabulary to form. That's understandable — and it's also how you end up living inside someone else's definitions. In a few years, AgentOps will sound obvious in the same way DevOps sounds obvious now. The question is whether you helped define what it means, or whether you inherited a model shaped by someone else's incentives.

A final reality check: autonomy without alignment is chaos at scale. AgentOps isn't about trusting AI more. It's about being precise about what you trust it to do, under what constraints, and with what evidence. If agents are going to be actors in our SDLC, then the work isn't to slow them down until they behave like humans. The work is to build systems where their speed is an advantage and their actions remain aligned with human intent.