vllm.distributed.device_communicators.all2all ¶
AgRsAll2AllManager ¶
Bases: All2AllManagerBase
An implementation of all2all communication based on all-gather (dispatch) and reduce-scatter (combine).
Source code in vllm/distributed/device_communicators/all2all.py
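The dispatch/combine strategy named in the class description can be illustrated with plain torch.distributed collectives. The sketch below is not the class's actual implementation: the helper names ag_dispatch and rs_combine are made up, an already-initialized default process group and an equal token count per DP rank are assumed, and only the communication pattern behind the combine and dispatch methods documented below is shown.

```python
import torch
import torch.distributed as dist


def ag_dispatch(hidden_states: torch.Tensor) -> torch.Tensor:
    """All-gather: every DP rank receives the full concatenated batch."""
    world_size = dist.get_world_size()
    gathered = torch.empty(
        (world_size * hidden_states.shape[0], hidden_states.shape[1]),
        dtype=hidden_states.dtype,
        device=hidden_states.device,
    )
    dist.all_gather_into_tensor(gathered, hidden_states)
    return gathered


def rs_combine(expert_output: torch.Tensor) -> torch.Tensor:
    """Reduce-scatter: sum partial expert outputs, keep this rank's slice."""
    world_size = dist.get_world_size()
    local = torch.empty(
        (expert_output.shape[0] // world_size, expert_output.shape[1]),
        dtype=expert_output.dtype,
        device=expert_output.device,
    )
    dist.reduce_scatter_tensor(local, expert_output)
    return local
```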
__init__ ¶
combine ¶
Reduce-scatter hidden_states across all dp ranks.
Source code in vllm/distributed/device_communicators/all2all.py
destroy ¶
dispatch ¶
dispatch(
hidden_states: Tensor,
router_logits: Tensor,
is_sequence_parallel: bool = False,
) -> tuple[Tensor, Tensor]
Gather hidden_states and router_logits from all dp ranks.
Source code in vllm/distributed/device_communicators/all2all.py
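For intuition about the shapes involved in the dispatch/combine round trip, here is a single-process simulation; the sizes are made up, no process group is needed, and router_logits would be gathered the same way as hidden_states. It mirrors the semantics only, not the actual collectives.

```python
import torch

dp_size, tokens_per_rank, hidden_size = 4, 8, 16

# Each DP rank owns its own slice of the batch.
per_rank_hidden = [torch.randn(tokens_per_rank, hidden_size) for _ in range(dp_size)]

# dispatch ~ all-gather: every rank ends up with dp_size * tokens_per_rank rows.
gathered = torch.cat(per_rank_hidden, dim=0)
assert gathered.shape == (dp_size * tokens_per_rank, hidden_size)

# combine ~ reduce-scatter: partial expert outputs are summed elementwise and
# each rank keeps only the rows corresponding to its own tokens.
partial_outputs = [torch.randn_like(gathered) for _ in range(dp_size)]
reduced = torch.stack(partial_outputs).sum(dim=0)
local_out = reduced.chunk(dp_size, dim=0)[0]  # rank 0's slice
assert local_out.shape == (tokens_per_rank, hidden_size)
```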
DeepEPAll2AllManagerBase ¶
Bases: All2AllManagerBase
All2All communication based on DeepEP High-Throughput kernels.
Source code in vllm/distributed/device_communicators/all2all.py
__init__ ¶
Source code in vllm/distributed/device_communicators/all2all.py
combine ¶
destroy ¶
DeepEPHTAll2AllManager ¶
Bases: DeepEPAll2AllManagerBase
All2All communication based on DeepEP High-Throughput kernels.
Source code in vllm/distributed/device_communicators/all2all.py
__init__ ¶
_make_all2all_kwargs ¶
Source code in vllm/distributed/device_communicators/all2all.py
get_handle ¶
Source code in vllm/distributed/device_communicators/all2all.py
DeepEPLLAll2AllManager ¶
Bases: DeepEPAll2AllManagerBase
All2All communication based on DeepEP Low-Latency kernels.
Source code in vllm/distributed/device_communicators/all2all.py
__init__ ¶
_make_all2all_kwargs ¶
_make_all2all_kwargs(
max_num_tokens_per_dp_rank: int,
token_hidden_size: int,
num_ep_ranks: int,
num_global_experts: int,
num_local_experts: int,
) -> dict[Any, Any]
max_num_tokens_per_dp_rank: the maximum number of tokens a DP rank can dispatch; all ranks must hold the same value.
token_hidden_size: the hidden dimension of each token.
num_ep_ranks: the number of EP group ranks.
num_global_experts: number of experts in the model.
num_local_experts: number of experts in an EP rank.
Source code in vllm/distributed/device_communicators/all2all.py
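These arguments follow standard expert-parallel bookkeeping. The values below are made up purely for illustration and assume experts are sharded evenly across the EP group.

```python
# Hypothetical values, not taken from any vLLM config.
max_num_tokens_per_dp_rank = 256   # identical on every DP rank
token_hidden_size = 4096           # hidden dimension of each token
num_ep_ranks = 8                   # size of the expert-parallel group
num_global_experts = 64            # experts in the whole model

# With even sharding, each EP rank hosts num_global_experts / num_ep_ranks experts.
num_local_experts = num_global_experts // num_ep_ranks  # 8
assert num_local_experts * num_ep_ranks == num_global_experts
```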
get_handle ¶
The kwargs for DeepEPLLAll2AllManager are dictated by _make_all2all_kwargs.
Source code in vllm/distributed/device_communicators/all2all.py
FlashInferAllToAllManager ¶
Bases: All2AllManagerBase
All2All communication based on flashinfer kernels.
Source code in vllm/distributed/device_communicators/all2all.py
__init__ ¶
Source code in vllm/distributed/device_communicators/all2all.py
cleanup ¶
Clean up workspace
Source code in vllm/distributed/device_communicators/all2all.py
ensure_alltoall_workspace_initialized ¶
Ensure workspace is initialized
Source code in vllm/distributed/device_communicators/all2all.py
get_handle ¶
initialize ¶
Initialize workspace
Source code in vllm/distributed/device_communicators/all2all.py
NaiveAll2AllManager ¶
Bases: All2AllManagerBase
A naive implementation of all2all communication. It uses all-reduce under the hood, which is not efficient at all; its main purpose is testing and debugging.
Source code in vllm/distributed/device_communicators/all2all.py
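The "all-reduce under the hood" trick can be sketched as follows: each rank writes its tokens into its own slice of a zero-initialized global buffer, and a single all-reduce then materializes every rank's tokens everywhere. This is an illustrative reconstruction of the idea behind the naive_multicast helper documented below, not the actual source; it assumes an initialized process group and that the cumulative-token tensor holds per-rank running totals.

```python
import torch
import torch.distributed as dist


def naive_multicast_sketch(x: torch.Tensor, cu_tokens: torch.Tensor) -> torch.Tensor:
    """Broadcast every rank's rows to all ranks with a single all-reduce.

    cu_tokens is assumed to hold cumulative token counts per rank, so
    cu_tokens[-1] is the global number of tokens.
    """
    rank = dist.get_rank()
    buffer = x.new_zeros(int(cu_tokens[-1]), x.shape[1])

    start = 0 if rank == 0 else int(cu_tokens[rank - 1])
    end = int(cu_tokens[rank])
    buffer[start:end].copy_(x)   # place local tokens in this rank's slice

    dist.all_reduce(buffer)      # sum of disjoint slices == concatenation
    return buffer
```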
__init__ ¶
combine ¶
Source code in vllm/distributed/device_communicators/all2all.py
destroy ¶
dispatch ¶
dispatch(
hidden_states: Tensor,
router_logits: Tensor,
is_sequence_parallel: bool = False,
) -> tuple[Tensor, Tensor]
Source code in vllm/distributed/device_communicators/all2all.py
naive_multicast ¶
naive_multicast(
x: Tensor,
cu_tokens_across_sp_cpu: Tensor,
is_sequence_parallel: bool,
) -> Tensor
Source code in vllm/distributed/device_communicators/all2all.py
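As a concrete, made-up example of the cumulative layout that cu_tokens_across_sp_cpu appears to describe (this interpretation is an assumption based on the parameter name, not taken from the source):

```python
import torch

# Hypothetical per-rank token counts for a 4-rank group.
tokens_per_rank = torch.tensor([3, 5, 2, 4])
cu_tokens_across_sp_cpu = torch.cumsum(tokens_per_rank, dim=0)  # tensor([ 3,  8, 10, 14])

rank = 2
start = 0 if rank == 0 else int(cu_tokens_across_sp_cpu[rank - 1])  # 8
end = int(cu_tokens_across_sp_cpu[rank])                            # 10
# Rank 2's two tokens would occupy rows 8:10 of a 14-row global buffer.
```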
PPLXAll2AllManager ¶
Bases: All2AllManagerBase
All2All communication based on PPLX kernels.