vllm.executor.uniproc_executor ¶
ExecutorWithExternalLauncher ¶
Bases: UniProcExecutor
An executor that relies on external launchers to start the engines. It is designed specifically for torchrun-compatible launchers and targets offline inference with tensor parallelism.
See https://github.com/vllm-project/vllm/issues/11400 for the motivation, and examples/offline_inference/torchrun_example.py for a usage example.
The key idea: although the inference is tensor-parallel, each executor creates only one worker. Users launch multiple engines with a torchrun-compatible launcher, and all of these engines work together to process the same prompts. As long as scheduling is deterministic, every engine generates the same outputs, so the engines do not need to synchronize state with one another.
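A minimal launch sketch in the spirit of examples/offline_inference/torchrun_example.py (the model name, prompts, and sampling parameters below are illustrative placeholders):

# Run with: torchrun --nproc-per-node=2 script.py
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# "external_launcher" selects ExecutorWithExternalLauncher: vLLM creates
# one worker inside each torchrun process instead of spawning its own.
llm = LLM(
    model="facebook/opt-125m",  # illustrative model choice
    tensor_parallel_size=2,
    distributed_executor_backend="external_launcher",
)

# With deterministic scheduling, every rank computes identical outputs,
# so no cross-engine synchronization is required.
outputs = llm.generate(prompts, sampling_params)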
Source code in vllm/executor/uniproc_executor.py
_distributed_args ¶
_init_executor ¶
Initialize the worker and load the model.
determine_num_available_blocks ¶
Determine the number of available KV blocks. Add an additional all_reduce to take the minimum across all ranks. Note that even with the same gpu_memory_utilization and swap_space, the available memory on each rank may still differ, because NCCL can consume different amounts of memory on different ranks. It is therefore necessary to verify that all ranks agree on the same KV cache configuration.
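A sketch of that agreement step, assuming a torch.distributed process group is already initialized (the function name here is hypothetical; the real logic lives in determine_num_available_blocks):

import torch
import torch.distributed as dist

def agree_on_num_blocks(num_gpu_blocks: int, num_cpu_blocks: int) -> tuple[int, int]:
    # Pack both counts into one tensor and take the elementwise minimum
    # across ranks, so every rank adopts the smallest configuration that
    # is feasible everywhere. A CPU tensor assumes a gloo-capable group;
    # with NCCL the tensor would need to live on the GPU.
    counts = torch.tensor([num_gpu_blocks, num_cpu_blocks], dtype=torch.int64)
    dist.all_reduce(counts, op=dist.ReduceOp.MIN)
    return int(counts[0]), int(counts[1])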
UniProcExecutor ¶
Bases: ExecutorBase
Source code in vllm/executor/uniproc_executor.py
_distributed_args ¶
Return (distributed_init_method, rank, local_rank).
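A paraphrased sketch of the single-process case, not the verbatim source, assuming the helpers get_distributed_init_method, get_ip, and get_open_port from vllm.utils:

from vllm.utils import get_distributed_init_method, get_ip, get_open_port

def _distributed_args(self):
    # One process, one worker: rank and local rank are both 0, and the
    # init method points at a fresh TCP endpoint on the local machine.
    distributed_init_method = get_distributed_init_method(get_ip(), get_open_port())
    return distributed_init_method, 0, 0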
_init_executor ¶
Initialize the worker and load the model.
check_health ¶
collective_rpc ¶
collective_rpc(
    method: Union[str, Callable],
    timeout: Optional[float] = None,
    args: Tuple = (),
    kwargs: Optional[Dict] = None,
    non_block: bool = False,
) -> List[Any]
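A hypothetical usage sketch: in UniProcExecutor the RPC is not truly collective; it runs the method on the single local worker and returns the result as a one-element list.

# Run a worker method by name (determine_num_available_blocks is a
# standard worker method; the executor variable is assumed to exist).
results = executor.collective_rpc("determine_num_available_blocks")
num_gpu_blocks, num_cpu_blocks = results[0]

# A Callable can be passed instead of a method name; it is assumed here
# to receive the worker instance as its first argument.
results = executor.collective_rpc(lambda worker: worker.rank)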
reinitialize_distributed ¶
reinitialize_distributed(
    reconfig_request: ReconfigureDistributedRequest,
) -> None