CVE-2026-27893
vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out
| CVSS Score | 8.8 |
| EPSS Score | 0.0% |
| EPSS Percentile | 9th |
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
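The class of bug is easiest to see in miniature. The sketch below is illustrative only (the function and parameter names are hypothetical, not vLLM's actual internals): a sub-component loader that hardcodes `trust_remote_code=True` silently overrides the user's explicit opt-out, whereas the patched pattern propagates the user's setting.

```python
# Hypothetical sketch of the bug class (names are illustrative,
# not vLLM's real code): hardcoding trust_remote_code=True in a
# sub-component load bypasses the user's security opt-out.

def load_subcomponent(repo_id: str, trust_remote_code: bool = False) -> dict:
    """Stand-in for an HF-style loader; returns the effective settings."""
    return {"repo": repo_id, "trust_remote_code": trust_remote_code}

def vulnerable_load(repo_id: str, user_trust_remote_code: bool) -> dict:
    # BUG: hardcoded True ignores the user's --trust-remote-code choice.
    return load_subcomponent(repo_id, trust_remote_code=True)

def patched_load(repo_id: str, user_trust_remote_code: bool) -> dict:
    # FIX: propagate the user's setting to every sub-component load.
    return load_subcomponent(repo_id, trust_remote_code=user_trust_remote_code)

# Even with remote code trust disabled, the vulnerable path enables it:
assert vulnerable_load("org/model", user_trust_remote_code=False)["trust_remote_code"] is True
assert patched_load("org/model", user_trust_remote_code=False)["trust_remote_code"] is False
```

Because a repository with `trust_remote_code=True` can ship arbitrary Python that executes at load time, the hardcoded value turns a malicious model repo into remote code execution even for users who opted out.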
| CWE | CWE-693 |
| Vendor | vllm-project |
| Product | vllm |
| Published | Mar 26, 2026 |
| Last Updated | Mar 27, 2026 |
CVSS v3 Breakdown
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
| Attack Vector | Network |
| Attack Complexity | Low |
| Privileges Required | None |
| User Interaction | Required |
| Scope | Unchanged |
| Confidentiality | High |
| Integrity | High |
| Availability | High |
Affected Versions
| vllm-project / vllm | >= 0.10.1, < 0.18.0 |