Enhance `new_group` doc to mention using NCCL concurrently. (#48872)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48872
Using NCCL communicators concurrently is not safe, and this is documented in
the NCCL docs. However, this limitation is not documented in PyTorch, so we
should add documentation for ProcessGroupNCCL to make users aware of it.
ghstack-source-id: 118148014
Test Plan: waitforbuildbot
Reviewed By: rohan-varma
Differential Revision: D25351778
fbshipit-source-id: f7f448dc834c47cc1244f821362f5437dd17ce77
diff --git a/torch/distributed/distributed_c10d.py b/torch/distributed/distributed_c10d.py
index 1081c6e..83260ec 100644
--- a/torch/distributed/distributed_c10d.py
+++ b/torch/distributed/distributed_c10d.py
@@ -2349,6 +2349,17 @@
if they are not going to be members of the group. Additionally, groups
should be created in the same order in all processes.
+ .. warning::
+ Using multiple process groups with the ``NCCL`` backend concurrently
+ is not safe and the user should perform explicit synchronization in
+ their application to ensure only one process group is used at a time.
+ This means collectives from one process group should have completed
+ execution on the device (not just enqueued since CUDA execution is
+ async) before collectives from another process group are enqueued.
+ See `Using multiple NCCL communicators concurrently
+ <https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#using-multiple-nccl-communicators-concurrently>`_
+ for more details.
+
Arguments:
ranks (list[int]): List of ranks of group members. If ``None``, will be
set to all ranks. Default is ``None``.
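
For context, here is a minimal sketch of the synchronization pattern the new warning describes: let one process group's collective finish executing on the device before enqueuing a collective on another process group. This is an illustration, not part of the patch; it assumes two NCCL-capable GPUs, `env://` rendezvous via `MASTER_ADDR`/`MASTER_PORT`, and hypothetical group handles `pg_a`/`pg_b` created with `new_group`.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def run(rank, world_size):
    # Rendezvous assumptions for this sketch: env:// with a local master.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Two process groups over the same ranks (hypothetical names pg_a / pg_b).
    pg_a = dist.new_group(ranks=list(range(world_size)))
    pg_b = dist.new_group(ranks=list(range(world_size)))

    x = torch.ones(4, device="cuda")
    y = torch.ones(4, device="cuda")

    # Collective on the first process group. NCCL collectives are enqueued
    # asynchronously on a CUDA stream, so returning here does not mean the
    # kernel has finished on the device.
    dist.all_reduce(x, group=pg_a)

    # Explicit synchronization: wait for the device to finish pg_a's work
    # before enqueuing anything on pg_b, per the warning above.
    torch.cuda.synchronize()

    # Only now is it safe to issue a collective on the second process group.
    dist.all_reduce(y, group=pg_b)
    torch.cuda.synchronize()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(run, args=(world_size,), nprocs=world_size)
```

`torch.cuda.synchronize()` is a blunt instrument here; synchronizing only the stream that ran pg_a's collective would also satisfy the requirement, as long as that collective has actually completed on the device before pg_b's collective is enqueued.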