CVE-2025-48956

Aug. 21, 2025, 3:15 p.m.

7.5
High

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.1.0 up to but not including 0.10.1.1, a Denial of Service (DoS) can be triggered by sending a single HTTP GET request with an extremely large header to an HTTP endpoint. This exhausts server memory and can crash the server or leave it unresponsive. The attack requires no authentication, so any remote user can exploit it. The vulnerability is fixed in 0.10.1.1.
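
The advisory does not prescribe a specific check, but as a rough illustration of the failure mode, the sketch below probes whether a deployment (for example, one upgraded to 0.10.1.1 or placed behind a size-limiting reverse proxy) rejects an oversized request header instead of buffering it. The host, port, endpoint path, and header name are assumptions for illustration, not part of the advisory.

```python
# Hedged sketch: probe whether an HTTP endpoint rejects an oversized header
# instead of buffering it. Host, port, path, and header name are hypothetical
# placeholders, not values taken from the advisory.
import http.client

HOST = "localhost"              # assumption: API server running locally
PORT = 8000                     # assumption: commonly used default port
HEADER_SIZE = 1 * 1024 * 1024   # 1 MiB header value, far beyond normal limits


def probe_large_header(host: str, port: int, size: int) -> int:
    """Send one GET with a very large header and return the HTTP status code.

    A hardened server (or fronting proxy) is expected to answer quickly with a
    4xx status such as 431 Request Header Fields Too Large, rather than growing
    its memory while reading the header.
    """
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("GET", "/health", headers={"X-Probe": "A" * size})
        return conn.getresponse().status
    finally:
        conn.close()


if __name__ == "__main__":
    try:
        status = probe_large_header(HOST, PORT, HEADER_SIZE)
        print(f"server answered {status}; a 4xx response indicates the header was rejected")
    except OSError as exc:
        print(f"connection failed ({exc}); an early reset can also indicate rejection")
```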

Product(s) Impacted

Vendor    Product    Versions
Github    Vllm       0.1.0-0.10.1

Weaknesses

Common security weaknesses mapped to this vulnerability.

CWE-400
Uncontrolled Resource Consumption
The product does not properly control the allocation and maintenance of a limited resource, thereby enabling an actor to influence the amount of resources consumed, eventually leading to the exhaustion of available resources.
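
In this CVE's context, the weakness amounts to buffering attacker-controlled header bytes with no upper bound. The minimal, generic sketch below is not taken from vLLM or its HTTP framework; it only contrasts unbounded accumulation with a bounded read that rejects oversized headers.

```python
# Generic sketch of CWE-400 in header parsing; illustrative only and not a
# reflection of vLLM's actual HTTP stack.
import socket

MAX_HEADER_BYTES = 16 * 1024  # assumed cap; real servers choose similar limits


def read_headers_unbounded(conn: socket.socket) -> bytes:
    """Vulnerable pattern: keep buffering until the blank line arrives.

    A client that sends gigabytes of header data before the terminating
    CRLFCRLF forces the server to grow `buf` without limit.
    """
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = conn.recv(4096)
        if not chunk:
            break
        buf += chunk
    return buf


def read_headers_bounded(conn: socket.socket) -> bytes:
    """Hardened pattern: enforce an upper bound on accumulated header bytes."""
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = conn.recv(4096)
        if not chunk:
            break
        buf += chunk
        if len(buf) > MAX_HEADER_BYTES:
            # Reject instead of growing memory, e.g. answer with HTTP 431.
            raise ValueError("request headers exceed limit")
    return buf
```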

*CPE(s)

Affected systems and software identified for this CVE.

Type: a (application)
Vendor: github
Product: vllm
Version: 0.1.0-0.10.1
Update / Edition / Language / Software Edition / Target Software / Target Hardware / Other Information: not specified

CVSS Score

7.5 / 10

CVSS Data - 3.1

  • Attack Vector: NETWORK
  • Attack Complexity: LOW
  • Privileges Required: NONE
  • User Interaction: NONE
  • Scope: UNCHANGED
  • Confidentiality Impact: NONE
  • Integrity Impact: NONE
  • Availability Impact: HIGH
  • CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

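The 7.5 base score can be reproduced from this vector with the CVSS v3.1 base-score formula (Scope Unchanged). A short sketch of the arithmetic:

```python
# Reproduce the CVSS v3.1 base score for AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H.
import math


def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest value, to one decimal place, >= x."""
    n = int(round(x * 100000))
    return n / 100000.0 if n % 10000 == 0 else (math.floor(n / 10000) + 1) / 10.0


# Metric weights from the CVSS v3.1 specification
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
conf, integ, avail = 0.0, 0.0, 0.56       # None / None / High

iss = 1 - (1 - conf) * (1 - integ) * (1 - avail)   # 0.56
impact = 6.42 * iss                                # Scope Unchanged: 3.5952
exploitability = 8.22 * av * ac * pr * ui          # ~3.887

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5
```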

Timeline

Published: Aug. 21, 2025, 3:15 p.m.
Last Modified: Aug. 21, 2025, 3:15 p.m.

Status: Received

The CVE has recently been published to the CVE List and has been received by the NVD.


Source

security-advisories@github.com

*Disclaimer: Some vulnerabilities do not have an associated CPE. To enhance the data, we use AI to infer CPEs based on CVE details. This is an automated process and might not always be accurate.