CVE-2025-25183
Feb. 7, 2025, 8:15 p.m.
2.6
Low
Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) returns a predictable constant value, which makes it more feasible for someone to exploit hash collisions. The impact of a collision would be the reuse of cache entries generated from different content. Given knowledge of the prompts in use and predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2, and all users are advised to upgrade. There are no known workarounds for this vulnerability.
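To make the mechanism concrete, the snippet below demonstrates the Python 3.12 behavior and sketches how a prefix cache keyed on the built-in hash() becomes collision-prone. The PrefixCache class and its method names are illustrative assumptions, not vLLM's actual implementation:

```python
# On Python 3.12+, hash(None) is a fixed constant rather than a value
# derived from the object's address, so it is identical in every run.
print(hash(None))

# Illustrative sketch (not vLLM's code) of a prefix cache keyed on the
# built-in hash(). Two different prefixes whose key tuples hash equally
# would silently share one entry, returning content generated for
# another prompt.
class PrefixCache:
    def __init__(self):
        self._entries = {}

    def key(self, parent_key, block_tokens):
        # Collision-prone on Python 3.12+: the tuple may contain None
        # (a first block has no parent), and hash() is predictable.
        return hash((parent_key, tuple(block_tokens)))

    def get(self, k):
        return self._entries.get(k)

    def put(self, k, value):
        self._entries[k] = value
```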
Product(s) Impacted
Product | Versions
---|---
vLLM | < 0.7.2
Weaknesses
CWE-354
Improper Validation of Integrity Check Value
The product does not validate or incorrectly validates the integrity check values or "checksums" of a message. This may prevent it from detecting if the data has been modified or corrupted in transmission.
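In this CVE, the integrity check value is effectively the prefix-cache key. A common hardening pattern is to derive keys from a collision-resistant digest mixed with an unpredictable per-process seed, so crafted prompts cannot force collisions. The sketch below illustrates that pattern with hashlib; the function and variable names are assumptions, not vLLM's fixed implementation:

```python
import hashlib
import os

# Illustrative hardening sketch, not vLLM's actual fix. A per-process
# random seed keeps keys unpredictable across processes, and SHA-256
# makes collisions computationally infeasible to craft.
_SEED = os.urandom(16)

def cache_key(parent_key: bytes | None, block_tokens: tuple[int, ...]) -> bytes:
    h = hashlib.sha256(_SEED)
    if parent_key is not None:
        h.update(parent_key)
    h.update(b"|")
    h.update(",".join(map(str, block_tokens)).encode())
    return h.digest()
```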
CVSS Score
CVSS Data
- Attack Vector: NETWORK
- Attack Complexity: HIGH
- Privileges Required: LOW
- User Interaction: REQUIRED
- Scope: UNCHANGED
- Confidentiality Impact: NONE
- Integrity Impact: LOW
- Availability Impact: NONE
Vector String
CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N
Date
- Published: Feb. 7, 2025, 8:15 p.m.
- Last Modified: Feb. 7, 2025, 8:15 p.m.
Status : Awaiting Analysis
CVE has been recently published to the CVE List and has been received by the NVD.
Source
security-advisories@github.com
Disclaimer: Some vulnerabilities do not have an associated CPE. To enhance the data, we use AI to infer CPEs based on CVE details. This is an automated process and might not always be accurate.