Large language models are reshaping how we write software. With a few prompts, developers can generate boilerplate, integrate dependencies, write tests, and scaffold entire systems in a fraction of the time it used to take.
But for all that acceleration, there's a blind spot forming right at the foundation: how these systems choose dependencies.
LLMs don't understand context. They don't know what's secure, what's actively maintained, or what your organization has flagged as off-limits. They simply surface what looks common — which is often outdated, vulnerable, or legally risky.
And when developers treat those outputs as trustworthy by default, bad choices make it into production at machine speed.
It Looks Right, So It Must Be Fine
Most of these models were trained on years of public code — GitHub, Stack Overflow, old docs, even abandoned projects. The result is that their recommendations often reflect what's been used a lot, not what's safe to use now.
We often find AI recommending deprecated packages, such as 'request' in new Node applications, or outdated Python modules that haven't been updated in years. Worse still, AI can suggest unvetted packages, introducing layers of unchecked transitive dependencies that compromise security and stability.
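One lightweight defense is to interrogate the registry before accepting the suggestion. The sketch below is an illustration, not Sonatype tooling: it asks the public npm registry whether a package's latest release carries a deprecation notice and how long it has been since anything shipped. The two-year staleness threshold is an arbitrary assumption.

```python
# Minimal sketch: query the public npm registry for deprecation and staleness
# before accepting an AI-suggested package. Thresholds are illustrative.
import json
import urllib.request
from datetime import datetime, timezone

def check_npm_package(name: str, max_age_days: int = 730) -> list[str]:
    """Return warnings for an npm package suggested by an assistant."""
    with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
        meta = json.load(resp)

    warnings = []
    latest = meta["dist-tags"]["latest"]
    version_info = meta["versions"][latest]

    # npm attaches a human-readable notice to deprecated releases.
    if "deprecated" in version_info:
        warnings.append(f"{name}@{latest} is deprecated: {version_info['deprecated']}")

    # Flag packages that have not published anything in a long time.
    last_publish = datetime.fromisoformat(meta["time"][latest].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - last_publish).days
    if age_days > max_age_days:
        warnings.append(f"{name} last published {age_days} days ago")

    return warnings

if __name__ == "__main__":
    for warning in check_npm_package("request"):
        print("WARNING:", warning)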
It works, until it doesn't. And most teams don't realize what they've inherited until it breaks — or until legal or security comes asking.
Attackers Know This Too
They've seen this before — when developers started trusting auto-complete too much, or copy-pasting code without checking. LLMs are the next iteration of that behavior, scaled up.
Attackers are now exploiting that blind spot, releasing malicious packages that mimic common LLM hallucinations, manipulating prompt patterns to boost the likelihood of being recommended, and planting traps within the ecosystem that target speed-over-safety pipelines.
The AI doesn't know better. And if your devs aren't questioning the suggestion, the attacker wins by default.
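Questioning the suggestion can be as simple as refusing to install a name no one has verified. Here is a rough pre-install check against hallucination squatting, assuming PyPI's public JSON API: it flags names that don't exist at all, or that appeared only days ago. The 30-day threshold is illustrative, not a standard.

```python
# Minimal sketch: vet an assistant-suggested PyPI name before installing it.
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

def vet_pypi_name(name: str, min_age_days: int = 30) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: does not exist on PyPI (possible hallucinated name)")
            return False
        raise

    # Earliest upload across all releases approximates the project's age.
    uploads = [
        f["upload_time_iso_8601"]
        for files in meta["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: exists but has no uploaded files")
        return False

    first_upload = datetime.fromisoformat(min(uploads).replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - first_upload).days
    if age_days < min_age_days:
        print(f"{name}: first published only {age_days} days ago, treat as suspect")
        return False
    return True

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        vet_pypi_name(pkg)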
More Code, Fewer Questions
Even before AI, most teams struggled to track what was in their software. Transitive dependencies piled up. Licensing drifted. CVEs stacked faster than triage queues.
Now imagine that problem running on autopilot.
AI-assisted tools make it easy to generate a functional app. But they also make it dangerously easy to pull in dozens of dependencies without reading a single line of code — or knowing who owns what.
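Making the inherited tree visible is the first step back. As a rough illustration, the sketch below parses the JSON output of npm ls --all (assuming npm 7 or later and a project in the current directory) and compares the handful of direct dependencies in package.json with everything actually installed.

```python
# Minimal sketch: count how many packages a project really pulls in,
# not just the direct dependencies listed in package.json.
import json
import subprocess

def walk(deps: dict, seen: set[str]) -> None:
    """Recursively collect every name@version in the installed tree."""
    for name, info in (deps or {}).items():
        seen.add(f"{name}@{info.get('version', '?')}")
        walk(info.get("dependencies", {}), seen)

if __name__ == "__main__":
    out = subprocess.run(
        ["npm", "ls", "--all", "--json"],
        capture_output=True, text=True, check=False,  # npm ls may exit non-zero on warnings
    ).stdout
    tree = json.loads(out)
    seen: set[str] = set()
    walk(tree.get("dependencies", {}), seen)
    direct = len(tree.get("dependencies", {}))
    print(f"{direct} direct dependencies expand to {len(seen)} installed packages")
```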
It's not that we've lost control. It's that we're increasingly choosing not to notice.
What Needs to Change
We don't need to stop using AI. But we need to stop assuming the suggestions are good enough.
That starts with:
- Guardrails at the point of generation, not post-merge scanning.
- Context-aware dependency evaluation, including license and maintainer health.
- Enforceable policy that applies to both developers and agents working alongside them (see the sketch after this list).
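As a sketch of what enforceable policy can look like at its simplest, the following CI-style check compares a project's declared npm dependencies against an organization-maintained denylist. The file names and denylist format are illustrative assumptions; a real policy engine would also weigh licenses, CVEs, and maintainer health.

```python
# Minimal sketch: fail the build if any declared dependency is on the org denylist.
# denylist.txt and package.json paths are illustrative assumptions.
import json
import pathlib
import sys

def load_denylist(path: str = "denylist.txt") -> set[str]:
    lines = pathlib.Path(path).read_text().splitlines()
    return {line.strip() for line in lines if line.strip() and not line.startswith("#")}

def declared_npm_dependencies(path: str = "package.json") -> set[str]:
    manifest = json.loads(pathlib.Path(path).read_text())
    deps: dict = {}
    deps.update(manifest.get("dependencies", {}))
    deps.update(manifest.get("devDependencies", {}))
    return set(deps)

if __name__ == "__main__":
    banned = load_denylist() & declared_npm_dependencies()
    if banned:
        print("Policy violation, blocked dependencies:", ", ".join(sorted(banned)))
        sys.exit(1)  # non-zero exit fails the CI job
    print("Dependency policy check passed.")
```

The same check applies whether a human typed the dependency or an agent added it, which is the point: policy that lives in the pipeline doesn't care who suggested the package.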
LLMs are great at code completion. But they don't know how your organization defines risk, and they won't stop to ask.
If You Don't Know What You're Shipping, You're Not Moving Fast — You're Flying Blind
There's a difference between velocity and recklessness.
And we've been through this before — with open source, with cloud, with containers. Each time we moved faster than our maturity could support, and each time we paid the price.
Let's not repeat that mistake with AI. If you don't know what's in your dependency tree or why it's there, you're not just shipping software faster. You're scaling risk faster, too.
Brian Fox, CTO and co-founder of Sonatype, is a Governing Board Member for the Open Source Security Foundation (OpenSSF), a Governing Board Member for the Fintech Open Source Foundation (FINOS), a member of the Monetary Authority of Singapore Cyber and Technology Resilience Experts (CTREX) Panel, a ...