Supply Chain Security Gets Real: litellm PyPI Package Compromised
Malicious versions of the popular litellm Python package stole credentials from developers' machines in a sophisticated supply chain attack. Here's what happened and what you need to do.
On March 24, 2026, developers who ran a routine pip install litellm unknowingly exposed their entire development environments to attackers. Two malicious versions of the widely used AI library—with over 40,000 GitHub stars and 95 million monthly downloads—were published to PyPI containing sophisticated credential-stealing payloads. The attack marks a significant escalation in supply chain threats targeting AI infrastructure.
What Happened
Threat actors compromised versions 1.82.7 and 1.82.8 of litellm, a popular Python package that serves as an abstraction layer for interacting with multiple large language model APIs. According to the project's official security update, the malicious packages were published directly to PyPI between 10:39 UTC and 16:00 UTC on March 24, bypassing the project's normal GitHub-based release process.
The attack is attributed to TeamPCP, the same threat actor behind recent compromises of the Trivy vulnerability scanner and Checkmarx KICS code scanner. The LiteLLM team believes the compromise originated from a Trivy dependency used in their CI/CD security scanning workflow, creating an ironic scenario where a security tool became the attack vector.
The Technical Sophistication
Version 1.82.8 employed a particularly insidious technique: a malicious .pth file named litellm_init.pth. This file automatically executes every time the Python interpreter starts—no import litellm statement required. As detailed in the GitHub issue tracking the incident, the 34,628-byte payload was even listed in the package's own RECORD file, hiding in plain sight.
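The mechanism is easy to reproduce in benign form. Any line in a .pth file that begins with import is executed by Python's site machinery when the interpreter starts. The sketch below simulates that processing explicitly with site.addsitedir; the filename and environment variable are illustrative, not the malware's:

```python
import os
import site
import tempfile

# Write a .pth file into a temporary directory. Lines starting with
# "import" are exec'd when the directory is processed as a site dir --
# the same hook the malicious litellm_init.pth abused at startup.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    f.write('import os; os.environ["PTH_DEMO"] = "ran"\n')

# At startup, Python does this automatically for real site-packages
# directories; here we trigger it by hand to show the effect.
site.addsitedir(d)

print(os.environ.get("PTH_DEMO"))
```

No import of any package was needed: merely having the file present in a scanned directory was enough to run the line.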
The payload executed a three-stage attack:
Stage 1: Credential Harvesting
The malware systematically collected sensitive data from infected systems: API keys, cloud and SSH credentials, and secrets stored in environment variables and configuration files.
Stage 2: Encryption and Exfiltration
Collected data was encrypted using AES-256 with a randomly generated session key. The session key itself was encrypted with a hardcoded 4096-bit RSA public key, ensuring only the attackers could decrypt the stolen information. The encrypted archive was then exfiltrated via HTTP POST to https://models.litellm.cloud/—a domain not affiliated with the legitimate litellm project.
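This hybrid-encryption design is a standard pattern, not something novel to this malware. A minimal sketch using the third-party cryptography package shows the shape of it; the key sizes match the report, while the sample plaintext and variable names are illustrative (the real payload shipped only the attackers' hardcoded public key):

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Attacker-side keypair. In the actual attack, only the 4096-bit
# public key was embedded in the payload; the private key never
# leaves the attackers' infrastructure.
priv = rsa.generate_private_key(public_exponent=65537, key_size=4096)
pub = priv.public_key()

# A random AES-256 session key encrypts the stolen archive...
session_key = os.urandom(32)
nonce = os.urandom(12)
stolen = b"AWS_SECRET_ACCESS_KEY=example-not-real"  # illustrative plaintext
blob = AESGCM(session_key).encrypt(nonce, stolen, None)

# ...and RSA-OAEP wraps the session key, so only the holder of the
# private key can ever recover the plaintext.
wrapped = pub.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
```

The consequence for defenders: captured exfiltration traffic is useless for assessing what was stolen, which is why the guidance below assumes full compromise and rotation of every secret.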
Stage 3: Persistence and Lateral Movement
According to security researchers at Endor Labs, the malware also attempted to install a persistent systemd backdoor and spread laterally across Kubernetes clusters by deploying privileged pods to every accessible node.
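The lateral-movement technique maps to a well-known Kubernetes pattern: a privileged pod with the host filesystem mounted effectively grants root on the node. A spec shaped like the following (names here are illustrative, not observed indicators) is what defenders should hunt for in audit logs and block with admission controls:

```yaml
# Illustrative only: the general shape of a privileged pod that
# grants node-level access when scheduled onto a node.
apiVersion: v1
kind: Pod
metadata:
  name: example-privileged-pod   # hypothetical name
spec:
  containers:
  - name: shell
    image: busybox
    securityContext:
      privileged: true           # full device and capability access on the node
    volumeMounts:
    - name: host-root
      mountPath: /host           # host filesystem exposed inside the container
  volumes:
  - name: host-root
    hostPath:
      path: /
```

Policies that deny privileged: true and hostPath mounts by default (e.g., via Pod Security Admission) close off this class of movement.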
The Blast Radius
The impact extends beyond direct litellm users. According to the official security update, anyone in the following categories may be affected:
- Anyone who ran pip install litellm without version pinning while the malicious versions were live

Notably, users of the official LiteLLM Proxy Docker image were not affected, as that deployment path pins dependencies in requirements.txt and doesn't rely on PyPI packages.
The Response
The LiteLLM team acted swiftly once the compromise was discovered. Both malicious versions were removed from PyPI, maintainer credentials were rotated, and the team engaged Google's Mandiant security team for forensic analysis. The project has paused new releases pending a comprehensive supply-chain review.
In their security update, the team provided clear guidance for affected users:
1. Rotate all secrets present on systems where v1.82.7 or v1.82.8 was installed
2. Search for indicators of compromise, specifically the litellm_init.pth file in site-packages directories
3. Audit version history across local environments, CI/CD pipelines, and deployment logs
4. Pin to version 1.82.6 or earlier until a verified safe release is announced
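Steps 2 and 4 are straightforward to automate. A minimal stdlib-only sketch, with the IoC filename and bad-version list taken from the advisory (directory discovery here is a best-effort assumption; virtualenvs and custom layouts may need extra paths):

```python
import site
import sysconfig
from importlib import metadata
from pathlib import Path

BAD_VERSIONS = {"1.82.7", "1.82.8"}   # compromised releases per the advisory
IOC_NAME = "litellm_init.pth"         # malicious startup hook

def check() -> list[str]:
    """Return a list of findings; empty means no known indicators."""
    findings = []
    # Gather the site-packages directories this interpreter uses.
    dirs = set(site.getsitepackages())
    dirs.add(site.getusersitepackages())
    dirs.add(sysconfig.get_paths()["purelib"])
    for d in dirs:
        p = Path(d) / IOC_NAME
        if p.exists():
            findings.append(f"IoC present: {p}")
    # Check the installed litellm version, if any.
    try:
        v = metadata.version("litellm")
        if v in BAD_VERSIONS:
            findings.append(f"compromised litellm version installed: {v}")
    except metadata.PackageNotFoundError:
        pass
    return findings

print(check() or "no known indicators found")
```

Run this in every environment (local, CI, containers) that may have installed litellm during the compromise window; a clean result does not replace secret rotation if a bad version was ever present.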
The Bigger Picture
This incident is part of a broader campaign by TeamPCP targeting the software supply chain. Security firm Sysdig documented how the same threat actor compromised Trivy and Checkmarx's GitHub Actions workflows, using stolen CI credentials to inject malicious code into trusted build pipelines.
The choice of litellm as a target is strategic. As an abstraction layer for LLM APIs, litellm sits at a critical junction in many AI development workflows. A compromise here provides access not just to general development credentials, but specifically to LLM API keys and AI infrastructure secrets—high-value targets as AI adoption accelerates.
What Developers Should Do Now
Even if you weren't directly affected, this incident offers critical lessons:
Audit your dependencies immediately. Run pip show litellm to check your installed version. Search your codebase and build configurations for unpinned litellm dependencies.
Pin your dependencies. Specify exact versions in your requirements.txt and Dockerfile. Unpinned dependencies that auto-upgrade to the latest release leave you exposed for the entire window a compromised version is live.
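In practice that means an exact pin, ideally hash-locked so pip refuses any artifact that doesn't match what you vetted:

```text
# requirements.txt -- exact pin (1.82.6 is the last known-good version
# per the advisory). For hash locking, generate pins with a tool such as
# pip-compile --generate-hashes and install with pip's --require-hashes mode.
litellm==1.82.6
```

Hash locking goes a step further than version pinning: even a re-published artifact under the same version number would fail to install.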
Monitor for indicators. Check for the presence of litellm_init.pth in your Python site-packages directories. Review logs for outbound connections to models.litellm.cloud.
Assume compromise and rotate. If you installed litellm on March 24, 2026, treat all credentials on those systems as compromised. This includes API keys, cloud credentials, SSH keys, and any secrets stored in environment variables or configuration files.
Layer your defenses. Use tools like dependency scanning, software bill of materials (SBOM) tracking, and network monitoring to detect anomalous behavior. No single defense is sufficient.
The Takeaway
Supply chain attacks are no longer theoretical risks—they're active threats targeting the tools developers use daily. The sophistication of the litellm compromise, from the .pth file technique to multi-stage encryption, demonstrates that attackers understand Python packaging internals and developer workflows.
As AI libraries become more central to development practices, they represent an increasingly attractive attack surface. The combination of widespread adoption, access to valuable credentials, and often-unpinned dependencies makes packages like litellm high-value targets.
The security community's rapid response—from initial disclosure to package removal to forensic analysis—shows the ecosystem can respond effectively. But prevention remains better than response. Pin your dependencies, audit your supply chain, and treat package installations with the same security scrutiny you'd apply to any code running in your environment.