Malicious attack method on hosted ML models now targets PyPI
May 26, 2025, 9:49 a.m.
Description
A new malicious campaign targeting the Python Package Index (PyPI) has been discovered that abuses the Pickle serialization format used by machine learning models. Three malicious packages posing as an Alibaba AI Labs SDK were detected, each carrying an infostealer payload hidden inside a PyTorch model. The packages exfiltrate information about the infected machine along with the contents of its .gitconfig file. The attack illustrates the evolving threat landscape in AI and machine learning, particularly in the software supply chain. The campaign likely targeted developers in China and underscores the need for improved security measures and tooling to detect malicious functionality embedded in ML models.
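The technique is possible because torch.save serializes model objects with Python's Pickle protocol, whose opcodes can import and invoke arbitrary callables at load time. The following is a minimal sketch, not tied to the specifics of this campaign, of how one might inspect the pickle stream inside a checkpoint using the standard-library pickletools module; the file name and the allow-list guidance in the comments are illustrative assumptions.

```python
# Minimal sketch: inspect the pickle stream inside a zip-based PyTorch checkpoint
# (.pt/.pth as produced by torch.save) for opcodes that can resolve and invoke
# arbitrary callables. The file name "suspect_model.pt" is hypothetical, and
# legitimate checkpoints also use GLOBAL/REDUCE for torch internals, so findings
# should be compared against an allow-list (e.g. torch.*, collections.OrderedDict).
import pickletools
import zipfile

SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_checkpoint(path: str) -> list[str]:
    """List pickle opcodes in the checkpoint that can import or call objects."""
    findings = []
    with zipfile.ZipFile(path) as zf:
        for member in zf.namelist():
            if not member.endswith(".pkl"):
                continue  # tensor data lives in other members; only the .pkl is unpickled
            for opcode, arg, pos in pickletools.genops(zf.read(member)):
                if opcode.name in SUSPICIOUS_OPS:
                    detail = f" {arg}" if arg is not None else ""
                    findings.append(f"{member}@{pos}: {opcode.name}{detail}")
    return findings

if __name__ == "__main__":
    for hit in scan_checkpoint("suspect_model.pt"):
        print(hit)
```

On the loading side, recent PyTorch releases support torch.load(path, weights_only=True), which restricts unpickling to a limited set of tensor-related types and is a practical mitigation when a model file cannot be fully trusted.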
Tags
Date
- Created: May 26, 2025, 9:17 a.m.
- Published: May 26, 2025, 9:17 a.m.
- Modified: May 26, 2025, 9:49 a.m.
Indicators
- 1f83b32270c72146c0e39b1fc23d0d8d62f7a8d83265dfa1e709ebf681bac9ce
- 22fd17b184cd6f05a2fbe3ed7b27fa42f66b7a2eaf2b272a77467b08f96b6031
Additional Information
- Technology
- China