vllm.v1.attention.backends.utils ¶
KV_SHARING_FAST_PREFILL_METADATA_FIELDS module-attribute ¶
KV_SHARING_FAST_PREFILL_METADATA_FIELDS = [
("logits_indices_padded", Optional[Tensor], None),
("num_logits_indices", int, 0),
]
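These (name, type, default) triples are in the format accepted by subclass_attention_metadata (documented further below). A minimal sketch, assuming subclass_attention_metadata accepts any dataclass as metadata_cls; _ToyMetadata is a hypothetical stand-in for a real backend's metadata class:

```python
from dataclasses import dataclass

from vllm.v1.attention.backends.utils import (
    KV_SHARING_FAST_PREFILL_METADATA_FIELDS,
    subclass_attention_metadata,
)


@dataclass
class _ToyMetadata:  # hypothetical stand-in for a backend metadata dataclass
    num_actual_tokens: int


# Derive a metadata class that additionally carries the fast-prefill fields
# (logits_indices_padded, num_logits_indices).
FastPrefillToyMetadata = subclass_attention_metadata(
    name_prefix="FastPrefill",
    metadata_cls=_ToyMetadata,
    fields=KV_SHARING_FAST_PREFILL_METADATA_FIELDS,
)
```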
_KV_CACHE_LAYOUT_OVERRIDE module-attribute ¶
_KV_CACHE_LAYOUT_OVERRIDE: Union[
KVCacheLayoutType, None
] = None
AttentionCGSupport ¶
Bases: Enum
Constants for the cudagraph support of the attention backend. Here we do not consider cascade attention, as it is currently never supported with cudagraphs.
Source code in vllm/v1/attention/backends/utils.py
ALWAYS class-attribute instance-attribute ¶
Cudagraph always supported; supports mixed-prefill-decode
UNIFORM_BATCH class-attribute instance-attribute ¶
Cudagraph supported for batches that only contain query lengths that are all the same; this can be used for spec-decode, i.e. "decodes" have length 1 + num_speculative_tokens
UNIFORM_SINGLE_TOKEN_DECODE class-attribute instance-attribute ¶
Cudagraph supported for batches that only contain query_len==1 decodes
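A minimal sketch of working with the enum members listed above; how a concrete builder advertises its support level is not shown on this page, so only the comparison itself is illustrated:

```python
from vllm.v1.attention.backends.utils import AttentionCGSupport

support = AttentionCGSupport.UNIFORM_BATCH
# Only ALWAYS guarantees cudagraph support for mixed prefill-decode batches;
# UNIFORM_BATCH is enough for spec-decode style uniform query lengths.
can_capture_mixed_batches = support == AttentionCGSupport.ALWAYS
```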
AttentionMetadataBuilder ¶
Source code in vllm/v1/attention/backends/utils.py
reorder_batch_threshold class-attribute instance-attribute ¶
__init__ abstractmethod ¶
__init__(
kv_cache_spec: AttentionSpec,
layer_names: list[str],
vllm_config: VllmConfig,
device: device,
)
Source code in vllm/v1/attention/backends/utils.py
_init_reorder_batch_threshold ¶
_init_reorder_batch_threshold(
reorder_batch_threshold: int = 1,
supports_spec_as_decode: bool = False,
) -> None
Source code in vllm/v1/attention/backends/utils.py
build abstractmethod ¶
build(
common_prefix_len: int,
common_attn_metadata: CommonAttentionMetadata,
fast_build: bool = False,
) -> M
Central method that builds attention metadata. Some builders (MLA) require reorder_batch to be called prior to build.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
common_prefix_len | int | The length of the common prefix of the batch. | required |
common_attn_metadata | CommonAttentionMetadata | The common attention metadata. | required |
fast_build | bool | The metadata will prioritize speed of building over speed at execution. Can be used for spec-decode where the result of a build call may only be used for a few layers/iterations. | False |
Source code in vllm/v1/attention/backends/utils.py
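A hedged sketch of how a caller might invoke a concrete builder's build; the builder and common metadata must come from a live model runner, so the call is wrapped in a helper function rather than executed directly:

```python
from vllm.v1.attention.backends.utils import (
    AttentionMetadataBuilder,
    CommonAttentionMetadata,
)


def build_layer_metadata(
    builder: AttentionMetadataBuilder,
    common_md: CommonAttentionMetadata,
    for_spec_decode: bool = False,
):
    # fast_build trades execution-time efficiency of the metadata for a
    # cheaper build, which pays off when it is only used for a few layers.
    return builder.build(
        common_prefix_len=0,  # assume no shared prefix in this sketch
        common_attn_metadata=common_md,
        fast_build=for_spec_decode,
    )
```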
build_for_cudagraph_capture ¶
build_for_cudagraph_capture(
common_attn_metadata: CommonAttentionMetadata,
) -> M
Build attention metadata for CUDA graph capture. Uses build by default. Subclasses that override this method should call self.build or super().build_for_cudagraph_capture.
Source code in vllm/v1/attention/backends/utils.py
build_for_drafting ¶
build_for_drafting(
common_attn_metadata: CommonAttentionMetadata,
draft_index: int,
) -> M
Build attention metadata for draft model. Uses build by default.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
common_attn_metadata | CommonAttentionMetadata | The common attention metadata. | required |
draft_index | int | The index of the current draft operation. When speculating a chain of tokens, this index refers to the draft attempt for the i-th token. For tree-based attention, this index instead refers to the draft attempt for the i-th level in the tree of tokens. | required |
Source code in vllm/v1/attention/backends/utils.py
reorder_batch ¶
reorder_batch(
input_batch: InputBatch,
scheduler_output: SchedulerOutput,
) -> bool
Update the order of requests in the batch based on the attention backend's needs. For example, some attention backends (namely MLA) may want to separate requests based on whether the attention computation will be compute-bound or memory-bound.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_batch | InputBatch | input batch | required |
scheduler_output | SchedulerOutput | scheduler output. | required |
Returns:
Type | Description |
---|---|
bool | True if the batch was modified, False otherwise. |
Source code in vllm/v1/attention/backends/utils.py
CommonAttentionMetadata dataclass ¶
Per-batch attention metadata, shared across layers and backends. AttentionMetadataBuilder instances use it to construct per-layer metadata.
For many of the tensors we keep both GPU and CPU versions.
Source code in vllm/v1/attention/backends/utils.py
logits_indices_padded class-attribute instance-attribute ¶
num_computed_tokens_cpu instance-attribute ¶
num_computed_tokens_cpu: Tensor
(batch_size,), the number of computed tokens for each request
query_start_loc_cpu instance-attribute ¶
query_start_loc_cpu: Tensor
(batch_size + 1,), the start location of each request in query Tensor
seq_lens_cpu instance-attribute ¶
seq_lens_cpu: Tensor
(batch_size,), the length of each request including both computed tokens and newly scheduled tokens
__init__ ¶
__init__(
query_start_loc: Tensor,
query_start_loc_cpu: Tensor,
seq_lens: Tensor,
seq_lens_cpu: Tensor,
num_computed_tokens_cpu: Tensor,
num_reqs: int,
num_actual_tokens: int,
max_query_len: int,
max_seq_len: int,
block_table_tensor: Tensor,
slot_mapping: Tensor,
causal: bool = True,
logits_indices_padded: Optional[Tensor] = None,
num_logits_indices: Optional[int] = None,
encoder_seq_lens: Optional[ndarray] = None,
) -> None
PerLayerParameters dataclass ¶
Currently, the FlashInfer backend only supports models in which all layers share the same values for the following hyperparameters. Should not be used for the trtllm-gen backend, since it supports different values for these hyperparameters.
Source code in vllm/v1/attention/backends/utils.py
_make_metadata_with_slice ¶
_make_metadata_with_slice(
ubatch_slice: UBatchSlice,
attn_metadata: CommonAttentionMetadata,
) -> CommonAttentionMetadata
This function creates a new CommonAttentionMetadata that corresponds to the requests included in ubatch_slice
Source code in vllm/v1/attention/backends/utils.py
compute_causal_conv1d_metadata ¶
compute_causal_conv1d_metadata(query_start_loc_p: Tensor)
Source code in vllm/v1/attention/backends/utils.py
create_fast_prefill_custom_backend ¶
create_fast_prefill_custom_backend(
prefix: str, underlying_attn_backend: AttentionBackend
) -> type[AttentionBackend]
Source code in vllm/v1/attention/backends/utils.py
get_kv_cache_layout cached ¶
Source code in vllm/v1/attention/backends/utils.py
get_per_layer_parameters ¶
get_per_layer_parameters(
vllm_config: VllmConfig,
layer_names: list[str],
cls_: type[AttentionImpl],
) -> dict[str, PerLayerParameters]
Scan layers in layer_names and determine some hyperparameters to use during plan.
Source code in vllm/v1/attention/backends/utils.py
infer_global_hyperparameters ¶
infer_global_hyperparameters(
per_layer_params: dict[str, PerLayerParameters],
) -> PerLayerParameters
Currently, FlashInfer backends other than trtllm-gen only support models in which all layers share the same values for the following hyperparameters:
- window_left
- logits_soft_cap
- sm_scale
This function therefore asserts that all layers share the same values for these hyperparameters and returns the global values.
Source code in vllm/v1/attention/backends/utils.py
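A small sketch, assuming PerLayerParameters is a dataclass exposing exactly the three hyperparameters listed above; the layer names and values are illustrative only:

```python
from vllm.v1.attention.backends.utils import (
    PerLayerParameters,
    infer_global_hyperparameters,
)

# Two layers that agree on all hyperparameters (field names assumed).
per_layer = {
    "model.layers.0.self_attn.attn": PerLayerParameters(
        window_left=-1, logits_soft_cap=None, sm_scale=0.125
    ),
    "model.layers.1.self_attn.attn": PerLayerParameters(
        window_left=-1, logits_soft_cap=None, sm_scale=0.125
    ),
}

# Returns the shared values; differing values would trip the assertion.
global_params = infer_global_hyperparameters(per_layer)
```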
is_valid_kv_cache_layout ¶
make_kv_sharing_fast_prefill_common_attn_metadata ¶
make_kv_sharing_fast_prefill_common_attn_metadata(
common_attn_metadata: CommonAttentionMetadata,
) -> CommonAttentionMetadata
Source code in vllm/v1/attention/backends/utils.py
make_local_attention_virtual_batches ¶
make_local_attention_virtual_batches(
attn_chunk_size: int,
common_attn_metadata: CommonAttentionMetadata,
block_size: int = 0,
) -> CommonAttentionMetadata
Source code in vllm/v1/attention/backends/utils.py
reorder_batch_to_split_decodes_and_prefills ¶
reorder_batch_to_split_decodes_and_prefills(
input_batch: InputBatch,
scheduler_output: SchedulerOutput,
decode_threshold: int = 1,
) -> bool
Reorders the batch to split into prefill and decode requests; places all requests with <= decode_threshold tokens at the front of the batch.
Returns:
Type | Description |
---|---|
bool | True if the batch was modified, False otherwise. |
Source code in vllm/v1/attention/backends/utils.py
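A hedged sketch of how a builder's reorder_batch (documented above) might delegate to this helper; the subclass below is illustrative only and omits the abstract methods a real builder must implement:

```python
from vllm.v1.attention.backends.utils import (
    AttentionMetadataBuilder,
    reorder_batch_to_split_decodes_and_prefills,
)


class _ToyBuilder(AttentionMetadataBuilder):  # illustrative subclass only
    def reorder_batch(self, input_batch, scheduler_output) -> bool:
        # Keep requests with <= 2 query tokens (e.g. 1 + one speculative
        # token) at the front of the batch.
        return reorder_batch_to_split_decodes_and_prefills(
            input_batch, scheduler_output, decode_threshold=2
        )
```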
reshape_attn_output_for_spec_decode ¶
Reshapes the attention output tensor, so that the batch_size and seq_len dimensions are combined.
Source code in vllm/v1/attention/backends/utils.py
reshape_query_for_spec_decode ¶
Reshapes the query tensor for the specified batch size, so that it has shape (batch_size, seq_len, num_heads, head_dim).
Source code in vllm/v1/attention/backends/utils.py
set_kv_cache_layout ¶
set_kv_cache_layout(cache_layout: KVCacheLayoutType)
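A one-line sketch; "HND" is assumed to be one of the KVCacheLayoutType values (the other common one being "NHD"):

```python
from vllm.v1.attention.backends.utils import set_kv_cache_layout

# Override the process-global KV cache layout used by the attention backends.
set_kv_cache_layout("HND")
```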
slice_query_start_locs ¶
Creates a new query_start_loc that corresponds to the requests in request_slice.
Note: This function creates a new tensor to hold the new query_start_locs. This will break cudagraph compatibility.
Source code in vllm/v1/attention/backends/utils.py
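A small sketch, assuming the helper takes (query_start_loc, request_slice) and rebases the result to start at zero; the expected output is an assumption, not a guarantee:

```python
import torch

from vllm.v1.attention.backends.utils import slice_query_start_locs

# query_start_loc for four requests with query lengths 2, 3, 1, 4.
qsl = torch.tensor([0, 2, 5, 6, 10], dtype=torch.int32)

# Keep the middle two requests; under the assumptions above this yields
# tensor([0, 3, 4]).
sub = slice_query_start_locs(qsl, slice(1, 3))
```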
split_attn_metadata ¶
split_attn_metadata(
ubatch_slices: list[UBatchSlice],
common_attn_metadata: CommonAttentionMetadata,
) -> list[CommonAttentionMetadata]
Creates a new CommonAttentionMetadata instance that corresponds to the requests for each UBatchSlice in ubatch_slices.
Note: This function does not modify common_attn_metadata
Source code in vllm/v1/attention/backends/utils.py
split_decodes_and_prefills ¶
split_decodes_and_prefills(
common_attn_metadata: CommonAttentionMetadata,
decode_threshold: int = 1,
require_uniform: bool = False,
) -> tuple[int, int, int, int]
Assuming a reordered batch, finds the boundary between prefill and decode requests.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
common_attn_metadata | CommonAttentionMetadata | CommonAttentionMetadata object containing the batch metadata. | required |
decode_threshold | int | The maximum query length to be considered a decode. | 1 |
require_uniform | bool | If True, requires that all decode requests have the same query length. When set, some queries may be considered prefills even if they are <= decode_threshold, in order to ensure uniformity. | False |
Returns:
Name | Type | Description |
---|---|---|
num_decodes | int | The number of decode requests. |
num_prefills | int | The number of prefill requests. |
num_decode_tokens | int | The number of tokens in the decode requests. |
num_prefill_tokens | int | The number of tokens in the prefill requests. |
Source code in vllm/v1/attention/backends/utils.py
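A sketch using the CommonAttentionMetadata constructor documented above; CPU tensors are reused for the device fields purely for illustration (a real runner passes GPU tensors), and the expected counts are assumptions about this toy batch:

```python
import torch

from vllm.v1.attention.backends.utils import (
    CommonAttentionMetadata,
    split_decodes_and_prefills,
)

# Toy batch: two decodes (query_len == 1) followed by one prefill (query_len == 4).
query_start_loc = torch.tensor([0, 1, 2, 6], dtype=torch.int32)
seq_lens = torch.tensor([10, 7, 4], dtype=torch.int32)

meta = CommonAttentionMetadata(
    query_start_loc=query_start_loc,
    query_start_loc_cpu=query_start_loc,
    seq_lens=seq_lens,
    seq_lens_cpu=seq_lens,
    num_computed_tokens_cpu=torch.tensor([9, 6, 0], dtype=torch.int32),
    num_reqs=3,
    num_actual_tokens=6,
    max_query_len=4,
    max_seq_len=10,
    block_table_tensor=torch.zeros((3, 1), dtype=torch.int32),
    slot_mapping=torch.zeros(6, dtype=torch.int64),
)

num_decodes, num_prefills, num_decode_tokens, num_prefill_tokens = (
    split_decodes_and_prefills(meta, decode_threshold=1)
)
# Expected for this toy batch: 2 decodes (2 tokens) and 1 prefill (4 tokens).
```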
subclass_attention_backend ¶
subclass_attention_backend(
name_prefix: str,
attention_backend_cls: type[AttentionBackend],
builder_cls: type[AttentionMetadataBuilder[M]],
) -> type[AttentionBackend]
Return a new subclass where get_builder_cls returns builder_cls.
Source code in vllm/v1/attention/backends/utils.py
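A hedged sketch that wraps an existing backend with a custom builder; the FlashAttentionBackend/FlashAttentionMetadataBuilder imports are assumptions about a concrete backend module, and any backend/builder pair would do:

```python
from vllm.v1.attention.backends.flash_attn import (
    FlashAttentionBackend,
    FlashAttentionMetadataBuilder,
)
from vllm.v1.attention.backends.utils import subclass_attention_backend


class MyMetadataBuilder(FlashAttentionMetadataBuilder):
    pass  # override build()/reorder_batch() here as needed


# get_builder_cls() on the returned class now reports MyMetadataBuilder.
MyBackend = subclass_attention_backend(
    name_prefix="My",
    attention_backend_cls=FlashAttentionBackend,
    builder_cls=MyMetadataBuilder,
)
```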
subclass_attention_metadata ¶
subclass_attention_metadata(
name_prefix: str,
metadata_cls: Any,
fields: list[tuple[str, Any, Any]],
) -> Any
Return a new subclass of metadata_cls with additional fields.