Fix FSDP CI

We sometimes see spurious FSDP CI failures, such as https://github.com/pytorch/pytorch/runs/6298275361?check_suite_focus=true, that are unrelated to the diff at hand. The suspicion is that some other tests set `BACKEND`, a generic env var for distributed tests; if those tests ran earlier in the same CI container, the variable is never unset, and the FSDP tests end up using gloo as their backend.

But gloo is not currently supported by FSDP, and the env var override was mostly added for easy testing during early FSDP development, so remove it entirely.
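
For illustration, a minimal sketch of how the leaked env var flips the backend (the `BACKEND` lookup below is the code being removed in this diff; the prior-test assignment is a hypothetical reproduction of the leak):

    import os
    import torch

    # A previous distributed test in the same CI container sets this
    # and never unsets it:
    os.environ["BACKEND"] = "gloo"

    # The old common_fsdp.py logic then picks it up:
    backend = os.environ.get("BACKEND", None)
    if backend is None:
        backend = "nccl" if torch.cuda.is_available() else "gloo"

    # On a GPU runner this yields "gloo" instead of the intended "nccl",
    # and the FSDP tests fail since FSDP does not support gloo.
    print(backend)  # "gloo" even when torch.cuda.is_available() is True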
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76878
Approved by: https://github.com/awgu
diff --git a/torch/testing/_internal/common_fsdp.py b/torch/testing/_internal/common_fsdp.py
index 44e7866..dad54cc 100644
--- a/torch/testing/_internal/common_fsdp.py
+++ b/torch/testing/_internal/common_fsdp.py
@@ -1,6 +1,5 @@
 # Owner(s): ["oncall: distributed"]
 
-import os
 import sys
 from contextlib import suppress
 from copy import deepcopy
@@ -381,10 +380,7 @@
 
         # Specify gloo backend to make `init_process_group()` succeed;
         # actual tests will be skipped if there are not enough GPUs.
-
-        backend = os.environ.get("BACKEND", None)
-        if backend is None:
-            backend = "nccl" if torch.cuda.is_available() else "gloo"
+        backend = "nccl" if torch.cuda.is_available() else "gloo"
 
         try:
             dist.init_process_group(