CVE-2026-44223
vLLM: extract_hidden_states speculative decoding crashes server on any request with penalty parameters
CVSS Score
6.5
EPSS Score
0.0%
EPSS Percentile
0th
vLLM is an inference and serving engine for large language models (LLMs). From 0.18.0 to before 0.20.0, the extract_hidden_states speculative decoding proposer in vLLM returns a tensor with an incorrect shape after the first decode step, causing a RuntimeError that crashes the EngineCore process. The crash is triggered when any request in the batch uses sampling penalty parameters (repetition_penalty, frequency_penalty, or presence_penalty). A single request with a penalty parameter (e.g., "repetition_penalty": 1.1) is sufficient to crash the server. This vulnerability is fixed in 0.20.0.
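As a rough illustration of the trigger condition (not vLLM source code), any one of the three penalty parameters in a request's sampling options is enough to exercise the vulnerable code path. The helper and the example payload below are hypothetical; the model name and field layout follow the OpenAI-compatible completions API shape but are placeholders:

```python
# Illustration only: the advisory names these three sampling parameters
# as the trigger for the crash.
PENALTY_PARAMS = {"repetition_penalty", "frequency_penalty", "presence_penalty"}

def uses_penalty_params(request_body: dict) -> bool:
    """Return True if the request would exercise the vulnerable penalty path."""
    return bool(PENALTY_PARAMS & request_body.keys())

# Hypothetical OpenAI-compatible /v1/completions payload; "my-model" is a
# placeholder, and 1.1 is the example value quoted in the advisory.
payload = {
    "model": "my-model",
    "prompt": "Hello",
    "max_tokens": 16,
    "repetition_penalty": 1.1,
}

print(uses_penalty_params(payload))  # True: a single such request crashes the server
```

With extract_hidden_states speculative decoding enabled, a lone request like this in the batch is sufficient; requests that omit all three parameters do not hit the path.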
| Field | Value |
| --- | --- |
| CWE | CWE-131, CWE-704 |
| Vendor | vllm-project |
| Product | vllm |
| Published | May 12, 2026 |
CVSS v3 Breakdown
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Attack Vector
Network
Attack Complexity
Low
Privileges Required
Low
User Interaction
None
Scope
Unchanged
Confidentiality
None
Integrity
None
Availability
High
Affected Versions
vllm-project / vllm
>= 0.18.0, < 0.20.0
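A minimal sketch of checking an installed version against the affected range above. This naive parser assumes plain `major.minor.patch` version strings and ignores pre-release suffixes, so treat it as an illustration rather than a robust version comparator:

```python
def is_affected(version: str) -> bool:
    """True if `version` falls in the advisory's affected range:
    >= 0.18.0, < 0.20.0. Assumes a plain major.minor.patch string."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return (0, 18, 0) <= parts < (0, 20, 0)

print(is_affected("0.19.2"))  # True: within the affected range
print(is_affected("0.20.0"))  # False: 0.20.0 is the fixed release
```

Upgrading to 0.20.0 or later resolves the issue.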