CVE Alert

CVE-2026-34159

CRITICAL 9.8

llama.cpp: Unauthenticated RCE via GRAPH_COMPUTE buffer=0 bypass in llama.cpp RPC backend

CVSS Score: 9.8
EPSS Score: 0.0%
EPSS Percentile: 0th

llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this yields a full ASLR bypass and remote code execution. No authentication is required, only TCP access to the RPC server port. This issue has been patched in version b8492.

CWE: CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer)
Vendor: ggml-org
Product: llama.cpp
Published: Apr 1, 2026
Last Updated: Apr 2, 2026

CVSS v3 Breakdown

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Attack Vector: Network
Attack Complexity: Low
Privileges Required: None
User Interaction: None
Scope: Unchanged
Confidentiality: High
Integrity: High
Availability: High

Affected Versions

ggml-org / llama.cpp
< b8492

References

NVD · CVE.org · EPSS Data
https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-j8rj-fmpv-wcxw
https://github.com/ggml-org/llama.cpp/pull/20908
https://github.com/ggml-org/llama.cpp/commit/39bf0d3c6a95803e0f41aaba069ffbee26721042