Whitepaper

Unpickling PyTorch: Keeping Malicious AI Out of the Enterprise

Discover how to protect your organization from hidden threats in AI models — download the whitepaper to secure your AI pipelines today.


Open source AI is exploding in popularity. Frameworks like PyTorch and platforms like Hugging Face have transformed how machine learning models are developed, shared, and deployed. But this rapid growth cuts both ways: the same openness and speed that enable fast innovation also create security blind spots.

This whitepaper explores the risks associated with PyTorch's use of pickle files and provides actionable strategies to safeguard your AI systems.

Key highlights include:

  • Understanding pickle file vulnerabilities: How malicious actors exploit PyTorch's pickle-based serialization process (illustrated in the first sketch after this list).
  • Real-world examples: Case studies of malware embedded in AI models.
  • Evasion techniques: How attackers bypass common security tools like picklescan.
  • Best practices for AI security: Proactive measures to secure your AI supply chain, including safer serialization formats and policy enforcement (see the safer-loading sketch below).
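
To ground the first highlight, here is a minimal, self-contained sketch of why pickle-based model files are risky. It is illustrative and not taken from the whitepaper: Python's `pickle` calls an object's `__reduce__` hook during deserialization, so loading an untrusted file can run attacker-chosen code. A harmless `print` stands in for a real payload, and the class name is hypothetical.

```python
import pickle


class IllustrativePayload:
    """Hypothetical object showing the pickle mechanism attackers abuse."""

    def __reduce__(self):
        # pickle invokes whatever callable is returned here at load time.
        # A real attack would return os.system, subprocess.call, etc.;
        # a harmless print() keeps this demo safe.
        return (print, ("this ran during unpickling",))


# Serializing the object embeds the instruction to call print(...).
blob = pickle.dumps(IllustrativePayload())

# Merely loading the bytes executes the embedded callable; no model
# weights or further interaction is required.
pickle.loads(blob)
```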

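For the final highlight, a brief sketch of two common mitigations in this space: restricting `torch.load` to plain weight data, and using the pickle-free safetensors format. The toy state dict and file names below are placeholders for illustration, not examples from the whitepaper.

```python
import torch
from safetensors.torch import load_file, save_file  # pip install safetensors

# A toy state dict standing in for real model weights.
weights = {"linear.weight": torch.zeros(2, 2)}

# weights_only=True restricts torch.load to tensors and primitive
# containers instead of arbitrary pickled objects (the default in
# recent PyTorch releases; pass it explicitly on older ones).
torch.save(weights, "model.pt")
state_dict = torch.load("model.pt", weights_only=True)

# safetensors avoids pickle entirely, so loading a .safetensors file
# cannot trigger code execution at deserialization time.
save_file(weights, "model.safetensors")
state_dict = load_file("model.safetensors")
```
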
Download your copy of the whitepaper today

Download Now