The Developer's Hippocratic Oath in the Age of AI

Written by Mitchell Johnson | September 04, 2025

The best software developers I've had the privilege to work with live by the principle that they have ultimate responsibility for the code they introduce. They take ownership of what they write, review, and ship. They ask questions when they don't understand what problem they are solving — sometimes uncomfortable questions that slow down meetings but save weeks later.

They propose alternatives with the end user, the business goals, and the sustainability of the codebase in mind. They build with care and expect the same from their tools and teammates.

The ones who live by this mindset become the real difference makers — the mythical 10x developers. Not because they write more code, but because they write the right code, the right way, the first time. They avoid the rework, the churn, the technical debt. They earn trust with business and engineering partners alike, elevate their teams, and quietly move the business forward.

This mindset is what I've come to call the Developer's Hippocratic Oath.

But generative AI has made it harder than ever to honor that oath, even as it becomes more important than ever to uphold it.

The Speed vs. Understanding Tension

Tools like GitHub Copilot, ChatGPT, Claude, and Cursor are fueling a significant shift in how we build software. They can generate thousands of lines of code in seconds, accelerating development and reshaping our daily workflows. The speed and promise are undeniable.

So is the pressure to skip understanding, bypass judgment, and move forward as long as the feature "works" in testing. Right or wrong, is it realistic to expect developers to thoroughly review thousands of lines in a pull request that was generated in seconds from a single prompt?

Software is critical infrastructure for the modern enterprise. It powers how we operate, compete, connect, and serve customers. Because of that, developers hold considerable responsibility in shaping the world around us — not just in what we build, but in how we build it.

We build the systems that power businesses, connect people, and in many cases protect lives. That's both a privilege and a responsibility.

First, do no harm to the code.

Owning the Code We Ship

In medicine, a doctor doesn't act without a proper diagnosis. The same principle should apply in engineering.

Every line we introduce — whether written, copied, suggested by AI, or handed to us — becomes ours once it's in the codebase. We're accountable for what it does and what it breaks.

I've worked with developers who blame bad requirements or tight timelines when things go wrong. The great ones don't. They own the outcome. When they don't understand something, they ask questions until they do. When the code isn't right, they fix it and learn from it.

That's what it means to "do no harm" in software development.

Better Tools, Not Blind Trust

This isn't a call to slow down or avoid AI tools. It's a call to build smarter. Use AI to automate, explore, scaffold, and accelerate — but not to replace your judgment. Use it to enhance your capabilities.

Think of AI as a "bionic super suit" amplifying your skills rather than replacing them. It makes you faster and more capable, but you still decide what to do with that enhanced capacity.

I believe we'll eventually reach a world where software systems are more self-maintaining and resilient. We are already experimenting with agentic development tools that promise to eliminate developer toil entirely. But like ubiquitous self-driving cars or flying taxis, that future is still a long way off. Today's systems still rely on human context, human decisions, and human judgment.

How quickly we get there depends largely on how much we demand from our tools.

Today's AI assistants are trained on aged public code repositories, with no awareness of our specific architecture, policies, or priorities. They confidently suggest solutions that may work in isolation, or not at all, but don't fit our context. They don't know what matters in our particular environment; we do. And it doesn't take a security expert to know that bad actors love it when the systems they're targeting are built on year-old dependencies by developers who've abandoned code review under the mountain of code AI throws at them.
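That risk is easy to make concrete. As a minimal sketch (not a substitute for a real software composition analysis tool), the script below uses PyPI's public JSON API to flag pinned Python dependencies whose releases are more than a year old. The package pins passed to audit() are illustrative assumptions; in practice they would be parsed from a real manifest.

```python
# Minimal sketch: flag pinned dependencies whose pinned release is more
# than a year old, via PyPI's public JSON API. Pins below are hypothetical.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=365)

def release_date(package: str, version: str) -> datetime | None:
    """Fetch the upload time of a specific release from PyPI."""
    url = f"https://pypi.org/pypi/{package}/{version}/json"
    with urllib.request.urlopen(url) as resp:
        files = json.load(resp).get("urls", [])
    if not files:
        return None
    # PyPI timestamps end in "Z"; normalize for datetime.fromisoformat.
    stamp = files[0]["upload_time_iso_8601"].replace("Z", "+00:00")
    return datetime.fromisoformat(stamp)

def audit(pins: dict[str, str]) -> None:
    """Print a warning for every pin older than MAX_AGE."""
    now = datetime.now(timezone.utc)
    for package, version in pins.items():
        uploaded = release_date(package, version)
        if uploaded and now - uploaded > MAX_AGE:
            age = (now - uploaded).days
            print(f"{package}=={version}: released {age} days ago")

# Hypothetical pins; a real pipeline would read these from requirements.txt.
audit({"requests": "2.25.1", "flask": "1.1.2"})
```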

If we want to work at AI speed without sacrificing quality, we need more context-aware AI tools that understand our enterprise policies, security models, and application architecture. Tools that explain their suggestions, not just autocomplete. Tools that help us make informed decisions faster.
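To make "context-aware" concrete, here is a small sketch of the kind of policy gate such a tool could consult before surfacing a dependency suggestion. The policy fields, the Suggestion shape, and the check_suggestion helper are all hypothetical, for illustration only; they are not any vendor's actual API.

```python
# Hypothetical policy gate a context-aware assistant might consult before
# suggesting a dependency. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EnterprisePolicy:
    banned_packages: set[str] = field(default_factory=set)
    allowed_licenses: set[str] = field(default_factory=set)
    max_dependency_age_days: int = 365

@dataclass
class Suggestion:
    package: str
    license: str
    age_days: int

def check_suggestion(policy: EnterprisePolicy, s: Suggestion) -> list[str]:
    """Return human-readable reasons a suggestion violates the policy."""
    reasons = []
    if s.package in policy.banned_packages:
        reasons.append(f"{s.package} is on the banned list")
    if s.license not in policy.allowed_licenses:
        reasons.append(f"license {s.license} is not approved")
    if s.age_days > policy.max_dependency_age_days:
        reasons.append(f"pinned release is {s.age_days} days old")
    return reasons

policy = EnterprisePolicy(
    banned_packages={"leftpad"},
    allowed_licenses={"MIT", "Apache-2.0"},
)
print(check_suggestion(policy, Suggestion("leftpad", "GPL-3.0", 900)))
```

The point of a gate like this isn't to block developers; it's to give the assistant the same context a senior engineer carries in their head, so its suggestions arrive with reasons rather than just autocompletions.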

In a world where software powers nearly everything we do, this approach isn't just good practice — it's what modern businesses need and what our users deserve.