The Laws of Software Haven't Changed. We're Just Choosing to Forget Them

We're in the middle of something that feels like a renaissance — a golden age of software creation that's less about syntax and more about prompting. At Black Hat 2025 last week, every conversation revolved around AI. GPT-5 is rolling out, the AI arms race between the U.S. and China is intensifying, regulators are struggling to write legislation fast enough, and AI is now helping people ship applications before they've even had their coffee. Code is being written, tested, and deployed by machines. Agents are stringing together APIs like Lego bricks.

It's fast. It's fun. It feels limitless. And yet, there's a creeping feeling I can't shake.

A little voice saying: We've been here before.

Not with AI, exactly. But with that same sense of giddy invincibility, and the same belief that new tools mean new rules: that just because we can go faster, we're somehow exempt from the responsibilities that come with moving quickly. This is a retelling of the same old story:

  • When cloud arrived and governance got left behind.

  • When microservices exploded and simplicity vanished with them.

  • When containers became the answer to everything, and we ran insecure workloads just to escape "it works on my machine."

  • When we embraced open source, often without reading the licenses.

In each case, we paid for moving faster than our maturity could absorb. Spoiler: We're not exempt this time either.

The Stack Still Has Gravity

For all the excitement around what's new, there are still a few things in software that haven't changed and probably never will. Code still runs on physical infrastructure. It still consumes finite resources. It still interacts with fallible users. And it still has to operate within legal and regulatory boundaries, no matter who, or what, wrote it.

Whether your application was built by a senior engineering team or stitched together by an LLM in a weekend, it still needs to scale, fail gracefully, and comply with laws like the EU Cyber Resilience Act or whatever regulatory patchwork you're deploying into.

And when something breaks — and it will — someone's still on the hook to fix it.

The stack didn't get lighter. It just got faster. And if we're not paying attention, it's going to hit harder when it breaks.

They Don't Even Know What They Don't Know

This isn't a knock on new builders. The fact that someone can now create working software with little more than curiosity, a prompt window, and a few smart tools is a massive achievement. That's worth celebrating.

But creation is not the same as engineering.

Many of these new developers (and let's be honest, plenty of experienced ones, too) are moving faster than their own understanding. They're stitching together systems from components they didn't write, dependencies they didn't vet, and patterns they didn't grow up with. Not out of negligence, but because the tools don't slow them down long enough to ask why something works, or what it depends on to keep working tomorrow.

And the ecosystem rewards this. We praise fast shipping over sound architecture. We conflate output with impact. We are creating an AI-centric development culture that treats governance and resilience as things we'll get to later, if we get to them at all.

The result is software built without architecture. Code assembled by AI, dependencies pulled from unknown sources, and zero plans for patching, observability, or maintenance. These systems get deployed. They get used. And then they get handed off to teams who have to live with the long-term consequences.

It's not just that people don't know the risks — it's that we've made it easy to ignore them.

The Crash Back to Earth and the Attackers Waiting Below

The thing about forgetting fundamentals is that reality doesn't forget with you.

Eventually, one of these systems built in a sprint becomes critical. It starts handling real data, serving real customers, integrating into other systems. And at some point, something goes wrong — a component breaks, a dependency drifts, a vulnerability surfaces. And when that happens, you don't want to be the one explaining how it ended up in production in the first place.

Meanwhile, attackers are watching this AI boom with great interest and adapting faster than most builders. They know developers copy and paste without vetting. They know that trust is being shifted from maintainers and teams to language models and prompt tools. And they're planting traps all over the ecosystem.

We're already seeing it:

  • Malicious packages seeded with names designed to match auto-complete.

  • Old vulnerabilities resurfacing in new frameworks.

  • LLM-suggested code with unvalidated input paths and insecure defaults.

They don't need a zero-day. They just need us to forget what we used to remember.
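To make that last bullet concrete, here is a minimal sketch of the difference between the kind of file-handling helper an assistant might plausibly suggest and a hardened version. The directory and function names are hypothetical; the point is the missing path validation, not any particular framework (Python 3.9+ for is_relative_to).

    from pathlib import Path

    BASE_DIR = Path("/srv/app/uploads")  # hypothetical upload directory

    # The kind of helper an assistant might suggest: the user-supplied name is
    # joined straight onto the base path, so "../../etc/passwd" walks right out of it.
    def read_upload_unsafe(filename: str) -> bytes:
        return (BASE_DIR / filename).read_bytes()

    # A hardened version: resolve the final path and confirm it is still inside
    # the base directory before touching the file.
    def read_upload(filename: str) -> bytes:
        target = (BASE_DIR / filename).resolve()
        if not target.is_relative_to(BASE_DIR.resolve()):
            raise ValueError(f"rejected path outside upload dir: {filename}")
        return target.read_bytes()

Nothing exotic. The fix is one resolve-and-check, but only if someone remembers to ask for it.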

We Need to Be Wide-Eyed, Not Wide-Eyed-and-Shut

AI isn't the problem here. And this isn't a call for fear or restraint. The tools are powerful. The potential is enormous. We should be exploring every inch of what they can do.

But speed without structure is just chaos, and we've seen how that plays out.

What we need now is maturity. We need to carry the lessons of software engineering into this next phase. We need to encode best practices into tools, so developers don't have to guess. We need policy that operates at runtime, in context, and in real-time — not as an afterthought in a doc somewhere.

We need to treat AI not as a shortcut, but as a collaborator. One that still needs guidance. One that still benefits from the judgment and experience we've spent decades earning.

Because AI may write the code, but humans still own the consequences.

We've Been Here Before. Let's Bring the Map.

If this moment feels familiar, it's because it is.

We've seen transformative shifts before. Each time, we told ourselves the old rules no longer applied. And each time, we learned they still did. The fundamentals didn't go away. We just forgot to bring them with us.

This time, though, it's not just speed. It's autonomy. Software is being generated, modified, and deployed by systems that don't remember what we've learned and won't ask for permission before acting. That changes the stakes.

What we need now isn't nostalgia for slower times. What we need is adaptation — a rethinking of how we govern, audit, and guide software development when humans are no longer the only ones in the loop. Governance can't live in PDFs or spreadsheets anymore. Auditing can't be something we do once a quarter. And policies can't rely on tribal knowledge or Slack threads to enforce intent.

If we want to thrive in this AI-accelerated ecosystem, we need to embed our expectations, our boundaries, and our risk models directly into the tools agents use to build. We need contextual, automated, machine-readable governance that operates as fast as the systems it protects.
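What might that look like in practice? Here is one minimal sketch, in Python, of a policy check that could run inside the tooling itself every time an agent (or a human) proposes a new dependency. The policy fields, package names, and registry URLs are illustrative assumptions, not a prescription for any particular product.

    # Hypothetical machine-readable policy, checked into the repo so that build
    # agents and humans are held to the same rules at the moment a change is made.
    POLICY = {
        "allowed_registries": {"https://registry.npmjs.org", "https://pypi.org/simple"},
        "blocked_packages": {"requessts", "djangoo"},  # illustrative typo-squats
        "require_pinned_versions": True,
    }

    def check_dependency(name: str, version: str, registry: str) -> list[str]:
        """Return the policy violations for a single proposed dependency."""
        violations = []
        if registry not in POLICY["allowed_registries"]:
            violations.append(f"{name}: registry {registry} is not on the allow list")
        if name in POLICY["blocked_packages"]:
            violations.append(f"{name}: package name is on the block list")
        if POLICY["require_pinned_versions"] and not version:
            violations.append(f"{name}: version must be pinned, not left floating")
        return violations

    # A build agent would call this before the dependency ever reaches the manifest.
    print(check_dependency("requessts", "", "https://example-mirror.dev"))

The point isn't this particular check. It's that the rule is enforced in real time, at the point of change, rather than discovered in a quarterly audit.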

Because if we don't bring these foundations with us, if we leave them behind out of convenience or inertia, the consequences won't be theoretical. They'll be real, fast, and wide-reaching.

The laws of software didn't change. But if our governance doesn't evolve to meet this moment, we'll be the ones left behind, and we'll only have ourselves to blame.


Written by Brian Fox

Brian Fox, CTO and co-founder of Sonatype, is a Governing Board Member for the Open Source Security Foundation (OpenSSF), a Governing Board Member for the Fintech Open Source Foundation (FINOS), a member of the Monetary Authority of Singapore Cyber and Technology Resilience Experts (CTREX) Panel, a ...