
Compromised litellm PyPI Package Delivers Multi-Stage Credential Stealer

Written by Sonatype Security Research Team | March 24, 2026

This morning, the widely used Python package litellm, a popular abstraction layer for interacting with large language models (LLMs), was compromised, and two malicious versions (1.82.7 and 1.82.8) were released.

The compromised versions were available on PyPI for at least two hours. Given the package's three million daily downloads, the compromised releases could have reached a significant number of users during that short window. Malicious code embedded in these versions functioned as both a credential stealer and a dropper, enabling further compromise of affected systems with follow-on payloads.

Sonatype's automated tooling detected and blocked the malicious PyPI versions within seconds of publication (tracked as sonatype-2026-001357). The incident was first reported publicly in a GitHub issue and further analyzed by external researchers at Futuresearch.

There are indications that this activity may be associated with the threat group TeamPCP, as well as speculation that the group is related to LAPSUS$. However, attribution remains under active investigation, and the maintainers are continuing to assess the situation.

Why Is litellm a Target?

The litellm PyPI package serves as a unified interface for applications that interact with LLMs, calling models from providers including OpenAI, Anthropic, and Google, among others, and is widely adopted across modern AI applications. Its popularity, combined with its position in the AI stack, makes it an attractive target for attackers seeking access to sensitive data flows.

Because litellm typically sits directly between applications and multiple AI service providers, it often has access to API keys, environment variables, and other sensitive configuration data. Compromising a package in this position allows attackers to intercept and exfiltrate valuable secrets without needing to directly breach upstream systems.

Additionally, the breadth of data targeted by the malware underscores how modern development environments — spanning local machines, CI/CD pipelines, and cloud infrastructure — are deeply interconnected. A single compromised dependency can expose credentials across multiple systems, dramatically increasing the potential blast radius.

This attack highlights a growing trend: widely trusted packages within the AI ecosystem are becoming high-value targets in software supply chain attacks.

How Does the Payload Work?

Analysis of the malicious PyPI package versions reveals a multi-stage payload implemented through obfuscated, base64-encoded Python code. The attack is structured in three layers, each responsible for a distinct phase of execution.

At a high level:

  • Layer 1: Launcher and data exfiltration.

  • Layer 2: Extensive reconnaissance and credential harvesting.

  • Layer 3: Establishes persistence and ongoing remote control (and pokes fun at researchers).

The first layer decodes and executes a second-stage payload, captures the output produced by that stage, and then encrypts the collected data using AES-256-CBC with a randomly generated session key. That session key is subsequently encrypted using a hard-coded RSA public key embedded in the payload. The encrypted data and key are packaged into an archive (tpcp.tar.gz) and exfiltrated to a remote endpoint controlled by the attacker.
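The decode-execute-capture-package pattern described above can be sketched in defanged form. This is an illustration of the technique, not the actual malware: the "second stage" here is a harmless print statement, the member name loot.bin is hypothetical, and the encryption and network steps are deliberately omitted.

```python
import base64
import io
import tarfile
from contextlib import redirect_stdout

# Hypothetical, harmless second-stage payload, base64-encoded the way the
# real sample embedded its obfuscated stages.
STAGE2_B64 = base64.b64encode(b"print('recon output')").decode()

def run_stage2(encoded: str) -> bytes:
    """Decode and execute the second stage, capturing its stdout --
    the decode/exec/capture pattern seen in the malicious versions."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(base64.b64decode(encoded))  # stage 2 runs in-process
    return buf.getvalue().encode()

def package(data: bytes) -> bytes:
    """Bundle captured output into a .tar.gz archive in memory (the
    observed samples staged theirs as tpcp.tar.gz before exfiltration)."""
    out = io.BytesIO()
    with tarfile.open(fileobj=out, mode="w:gz") as tar:
        info = tarfile.TarInfo(name="loot.bin")  # hypothetical member name
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return out.getvalue()

archive = package(run_stage2(STAGE2_B64))
```

In the real payload the archive contents would first be AES-256-CBC-encrypted under a random session key, with that key RSA-encrypted before upload; the staging structure is the same.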

The second layer enumerates system-level details such as hostname, user information, network configuration, and environment variables, then searches for and extracts sensitive data from a wide range of sources. These include SSH keys and configurations, Git credentials, cloud provider credentials (AWS, GCP, Azure), Kubernetes configuration files and service account tokens, and environment files commonly used to store secrets. It also targets infrastructure and automation artifacts such as Terraform, Helm, and CI/CD configurations, along with API keys, webhook URLs, and even cryptocurrency wallet data. In some cases, the malware attempts to actively use discovered credentials — for example, querying AWS APIs or accessing Kubernetes secrets — further increasing its impact.
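The harvesting logic amounts to scanning well-known credential locations. The sketch below is a representative, non-exhaustive approximation of that behavior, assuming standard default paths; it only reports which candidate files exist rather than reading them.

```python
from pathlib import Path

# Representative (not exhaustive) secret locations of the kind the second
# stage searched, expressed relative to a home directory so the scan can be
# pointed at a test directory as easily as a real one.
CANDIDATE_PATHS = [
    ".ssh/id_rsa", ".ssh/config",      # SSH keys and configuration
    ".git-credentials",                # Git credentials
    ".aws/credentials",                # AWS credentials
    ".config/gcloud/application_default_credentials.json",  # GCP ADC
    ".kube/config",                    # Kubernetes configuration
    ".env",                            # environment files holding secrets
]

def find_candidate_secrets(home: Path) -> list[str]:
    """Return the candidate paths that exist under `home`; a harvester
    would then read and stage each hit for exfiltration."""
    return [p for p in CANDIDATE_PATHS if (home / p).is_file()]
```

A simple existence scan like this is cheap and quiet, which is why defenders should assume every file in these locations was read on a compromised host.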

The third layer is written to disk as a Python script (sysmon.py) and configured to run as a system service. Once active, it polls a remote endpoint every 50 minutes and writes the response to disk as long as it doesn't contain "youtube.com." Interestingly, when Sonatype researchers attempted to retrieve the payload, the endpoint returned a link to an English remastering of the song Bad Apple!!, a decoy used to defeat researchers analyzing the malware in a sandbox. This mechanism allows the attacker to continuously deliver new malicious functionality to already-compromised systems.
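The implant's update loop reduces to a small amount of logic, sketched below with the network call abstracted behind a `fetch` callable so nothing here touches real infrastructure. The function name and structure are illustrative; the 50-minute interval and the "youtube.com" filter come from the analysis above.

```python
from typing import Callable, Optional

POLL_INTERVAL_SECONDS = 50 * 60  # the observed implant polls every 50 minutes
DECOY_MARKER = "youtube.com"     # responses carrying this marker are discarded

def poll_once(fetch: Callable[[], str]) -> Optional[str]:
    """One iteration of the update loop: fetch the endpoint's response and
    return it for writing to disk, unless it is the decoy (the Bad Apple!!
    link served to sandboxed analysts)."""
    body = fetch()
    if DECOY_MARKER in body:
        return None              # decoy: write nothing, try again later
    return body                  # real follow-on payload to persist and run
```

The decoy check is what makes sandbox analysis frustrating: an analyst's requests are answered, but the answer fails the filter and no second-stage code is ever written.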

Compromise Indicators Linked to Malicious litellm Packages

The malicious PyPI packages communicate with external infrastructure controlled by the attacker, including the domains models[.]litellm[.]cloud and checkmarx[.]zone. On compromised systems, defenders may observe artifacts such as the archive tpcp.tar.gz, temporary files like /tmp/pglog and /tmp/.pg_state, or the presence of a persistent service associated with sysmon.py.
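The host-based indicators above lend themselves to a quick triage script. This is a minimal sketch: the fixed /tmp paths and the tpcp.tar.gz and sysmon.py names come from the analysis, but the directories searched for the archive are an assumption, since the staging location has not been published.

```python
from pathlib import Path

# Host-based indicators from the analysis of the malicious versions.
FIXED_INDICATORS = [Path("/tmp/pglog"), Path("/tmp/.pg_state")]
ARCHIVE_NAME = "tpcp.tar.gz"
PERSISTENCE_SCRIPT = "sysmon.py"

def triage(fixed=FIXED_INDICATORS, search_roots=(Path("/tmp"),)) -> list[str]:
    """Return any on-disk indicators found. A non-empty result means the
    host should be treated as compromised and investigated further."""
    hits = [str(p) for p in fixed if p.exists()]
    for root in search_roots:
        for name in (ARCHIVE_NAME, PERSISTENCE_SCRIPT):
            hits += [str(p) for p in root.rglob(name)]
    return hits
```

A clean result does not prove a clean host (the dropper may have fetched payloads with different names), so this check complements, rather than replaces, log review.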

The malicious behavior is embedded in files including litellm_init.pth (version 1.82.8) and proxy_server.py (versions 1.82.7 and 1.82.8), with known hashes identified by Sonatype Security Research. Additional indicators and forensic details will be updated as analysis continues.

The AI Stack Holds Valuable Secrets

Given the package's place in the AI stack, this compromise shows cybercriminals taking aim at enterprises that leverage open source to rapidly develop and deploy AI applications. The design of the malware suggests a broad targeting strategy aimed at developers, cloud environments, and modern application infrastructure.

While Kubernetes environments appear to receive particular attention, through mechanisms that attempt to deploy privileged pods and extract cluster secrets, the overall data collection strategy is intentionally expansive. Any system capable of storing credentials or interacting with cloud services is a potential target.

This makes software supply chain attacks of this kind especially dangerous in environments where developers, CI/CD systems, and production infrastructure share access to sensitive credentials, as compromise in one layer can quickly cascade into others.

Compromised litellm Version Mitigation and Recommendations

Organizations that installed or executed the affected litellm versions should treat impacted systems as compromised. Simply removing the package is not sufficient, as the malware is designed to establish persistence and may have already deployed additional payloads.

Immediate steps should include removing the malicious litellm PyPI package, rotating all potentially exposed credentials, and conducting a thorough investigation of affected systems. This includes reviewing logs for suspicious outbound connections, identifying any persistence mechanisms such as unauthorized services, and validating the integrity of infrastructure packages. In many cases, rebuilding affected systems from a known clean state may be the safest course of action.

Organizations should also verify that only trusted litellm versions are present in their environments and ensure that dependency management processes include safeguards against compromised packages.
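A first verification step is simply checking installed versions against the known-bad list. The sketch below uses the standard library's importlib.metadata; the function names are illustrative.

```python
from importlib.metadata import PackageNotFoundError, version

# Versions identified as malicious in this incident.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(ver: str) -> bool:
    """True if the given litellm version string is a known-bad release."""
    return ver in COMPROMISED_VERSIONS

def installed_litellm_is_safe() -> bool:
    """Check the locally installed litellm, if any, against the bad list."""
    try:
        return not is_compromised(version("litellm"))
    except PackageNotFoundError:
        return True  # not installed, nothing to do
```

Running this across build agents and developer machines gives a quick inventory, but remember that a "safe" result on a host that previously ran a bad version does not clear it: the dropper may already have established persistence.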

Broader Implications for AI Supply Chains

This incident reinforces a key reality: as organizations rapidly adopt AI technologies, the surrounding software supply chain is becoming an increasingly attractive attack surface.

Components like litellm occupy a central position in the AI stack, often handling sensitive data and credentials that connect applications to external services.

As a result, attackers are shifting their focus toward these high-leverage packages. Securing the AI supply chain requires not only vigilance, but also automated defenses capable of detecting and blocking threats in real time.

How Sonatype Helps

Incidents like the litellm compromise demonstrate how quickly malicious code can infiltrate widely trusted packages, especially within fast-moving AI ecosystems. As attackers shift toward compromising legitimate packages, identifying these threats early becomes increasingly difficult without automated safeguards.

Sonatype Guide provides developers with contextual, real-time intelligence on open source packages, helping teams detect malicious or high-risk dependencies before they are introduced into their software supply chain.

In parallel, protections like Sonatype Repository Firewall automatically block known malicious packages at the point of ingestion, reducing the likelihood that compromised components ever reach development environments.

Together, Guide and Repository Firewall help teams make safer dependency decisions and keep malicious packages out, reducing the risk of credential-stealing attacks hidden in trusted packages.

Sonatype Security Research will continue to track this activity and provide updates as the situation evolves.