CVE-2026-44222
vLLM: Remote DoS via Special-Token Placeholders
vLLM is an inference and serving engine for large language models (LLMs). From version 0.6.1 up to (but not including) 0.20.0, a token-injection vulnerability exists in vLLM's multimodal processing. Unauthenticated, text-only prompts that spell out special tokens are interpreted as control tokens. Image and video placeholder sequences supplied without matching media data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError that terminates the worker or degrades availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in 0.20.0.
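The failure mode described above can be illustrated with a minimal, hypothetical sketch (this is not vLLM's actual code; the function name and grid layout are assumptions for illustration). Each placeholder token is expected to consume one row of a `(time, height, width)` grid; when the attacker supplies placeholder tokens as plain text but no media, the grid is empty and the lookup raises an unhandled IndexError:

```python
def compute_positions(num_placeholders, image_grid_thw):
    """Toy position computation: one grid row is consumed per placeholder.

    image_grid_thw is a list of (t, h, w) triples, one per attached image.
    In the attack scenario the list is empty while num_placeholders > 0.
    """
    positions = []
    for i in range(num_placeholders):
        # With no image data attached, image_grid_thw is empty and
        # this lookup raises an unhandled IndexError.
        t, h, w = image_grid_thw[i]
        positions.append(t * h * w)
    return positions

# Normal multimodal request: one placeholder, one matching grid entry.
print(compute_positions(1, [[1, 16, 16]]))  # -> [256]

# Attacker-style request: placeholder spelled out in text, no image data.
try:
    compute_positions(1, [])
except IndexError:
    print("unhandled IndexError would crash the worker")
```

In the patched releases, placeholder counts are validated against the supplied media before position computation, so a mismatched text-only prompt is rejected instead of crashing the worker.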
| CWE | CWE-129 |
| Vendor | vllm-project |
| Product | vllm |
| Published | May 12, 2026 |
| Last Updated | May 13, 2026 |
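Since the affected range is 0.6.1 up to (but not including) 0.20.0, a quick way to triage a deployment is to compare the installed version against 0.20.0. A minimal sketch using plain tuple comparison (an assumption for illustration; it does not handle pre-release or dev-version suffixes):

```python
def is_patched(version: str) -> bool:
    """Return True if this vLLM version includes the fix (>= 0.20.0)."""
    # Compare only the numeric major.minor.patch components.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= (0, 20, 0)

print(is_patched("0.19.1"))  # -> False (affected)
print(is_patched("0.20.0"))  # -> True (fixed)
```

For production use, a full version parser such as `packaging.version` is the safer choice.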
CVSS v3 Breakdown
| Vector | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H |
| Attack Vector | Network |
| Attack Complexity | Low |
| Privileges Required | Low |
| User Interaction | None |
| Scope | Unchanged |
| Confidentiality | None |
| Integrity | None |
| Availability | High |
| Base Score | 6.5 (Medium) |