CVE Alert

CVE-2026-27940

HIGH 7.8

llama.cpp has a Heap Buffer Overflow via Integer Overflow in `mem_size` Calculation: Bypass of CVE-2025-53630 Fix

CVSS Score
7.8
EPSS Score
0.0%
EPSS Percentile
0th

llama.cpp is a C/C++ inference engine for several LLM models. Prior to release b8146, gguf_init_from_file_impl() in gguf.cpp is vulnerable to an integer overflow in its `mem_size` calculation, leading to an undersized heap allocation. A subsequent fread() then writes 528+ bytes of attacker-controlled data past the buffer boundary. This is a bypass of the fix for a similar bug in the same file, CVE-2025-53630, which overlooked some code paths. The vulnerability is fixed in b8146.

CWE CWE-122 CWE-190
Vendor ggml-org
Product llama.cpp
Published Mar 12, 2026
Last Updated Mar 14, 2026

CVSS v3 Breakdown

CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack Vector
Local
Attack Complexity
Low
Privileges Required
None
User Interaction
Required
Scope
Unchanged
Confidentiality
High
Integrity
High
Availability
High

Affected Versions

ggml-org / llama.cpp
< b8146

References

NVD · CVE.org · EPSS Data
GitHub advisory: https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-3p4r-fq3f-q74v