CVE-2024-34359
May 14, 2024, 4:12 p.m.
9.6
Critical
Description
llama-cpp-python provides the Python bindings for llama.cpp. `llama-cpp-python` relies on the `Llama` class in `llama.py` to load `.gguf` llama.cpp machine learning models. The `Llama` class's `__init__` constructor takes several parameters that configure how the model is loaded and run. In addition to NUMA and LoRA settings, tokenizer loading, and hardware settings, `__init__` also loads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct the model's `self.chat_handler`. However, `Jinja2ChatFormatter` parses the chat template from the metadata with an unsandboxed `jinja2.Environment`, which is later rendered in `__call__` to construct the prompt for each interaction. This allows Jinja2 server-side template injection (SSTI), which a carefully constructed payload can escalate to remote code execution.
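The core issue is rendering attacker-controlled template text with an unsandboxed `jinja2.Environment`. The sketch below is illustrative only and is not llama-cpp-python's actual code: `MALICIOUS_TEMPLATE` stands in for a hostile chat template such as could be embedded in `.gguf` metadata, built from a well-known Jinja2 SSTI gadget, and the two hypothetical helper functions contrast the vulnerable pattern with rendering in Jinja2's `ImmutableSandboxedEnvironment`, the usual mitigation for untrusted templates.

```python
# Minimal sketch of the SSTI class of bug described above.
# The function names are illustrative, not llama-cpp-python's real API.
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Attacker-controlled "chat template" as it might appear in .gguf metadata.
# The gadget reaches the `os` module through Jinja2's default `cycler` global.
MALICIOUS_TEMPLATE = (
    "{{ cycler.__init__.__globals__.os.popen('echo SSTI executed').read() }}"
)


def render_unsandboxed(template_source: str) -> str:
    """Vulnerable pattern: untrusted template in a plain jinja2.Environment."""
    env = Environment()  # no sandbox: templates can walk dunder attributes
    return env.from_string(template_source).render(messages=[])


def render_sandboxed(template_source: str) -> str:
    """Mitigated pattern: the sandbox rejects unsafe attribute access."""
    env = ImmutableSandboxedEnvironment()
    return env.from_string(template_source).render(messages=[])


if __name__ == "__main__":
    # Unsandboxed rendering executes the shell command embedded in the template.
    print(render_unsandboxed(MALICIOUS_TEMPLATE))

    # Sandboxed rendering raises SecurityError when the template touches __init__.
    try:
        render_sandboxed(MALICIOUS_TEMPLATE)
    except SecurityError as exc:
        print(f"blocked by sandbox: {exc}")
```

Running the sketch, the unsandboxed render executes the command carried by the template, while the sandboxed render refuses the unsafe attribute lookup, which reflects the general remediation of parsing untrusted chat templates only inside a sandboxed Jinja2 environment.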
Product(s) Impacted
Product | Versions |
---|---|
llama-cpp-python | |
CVSS Score
CVSS Data - 3.1
- Attack Vector: NETWORK
- Attack Complexity: LOW
- Privileges Required: NONE
- User Interaction: REQUIRED
- Scope: CHANGED
- Confidentiality Impact: HIGH
- Integrity Impact: HIGH
- Availability Impact: HIGH
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H
Timeline
Published: May 14, 2024, 3:38 p.m.
Last Modified: May 14, 2024, 4:12 p.m.
Status: Awaiting Analysis
CVE has been marked for analysis. Normally, once in this state, the CVE will be analyzed by NVD staff within 24 hours.
Source
security-advisories@github.com
*Disclaimer: Some vulnerabilities do not have an associated CPE. To enhance the data, we use AI to infer CPEs based on CVE details. This is an automated process and might not always be accurate.