ci: move nvidia repo disable to common.sh

Didn't realize that we also use apt package repos during tests, so move
this to a more common place that gets sourced by everything
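For reference, the sed invocation being moved comments out any apt source
line that mentions nvidia. A minimal sketch of its effect, using a made-up
repo line (the real targets are the `*.list` files under /etc/apt/):

```shell
# The pattern '.*nvidia.*' matches the whole line; '&' re-inserts the match,
# so matching lines are rewritten with a leading "# " (commented out).
echo "deb https://developer.download.nvidia.com/compute cuda main" \
  | sed 's/.*nvidia.*/# &/'
```

Lines that do not mention nvidia pass through unchanged, so non-nvidia
repos keep working with apt-get update.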

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74994
Approved by: https://github.com/malfet
diff --git a/.jenkins/pytorch/build.sh b/.jenkins/pytorch/build.sh
index 6a3729e..01faa94 100755
--- a/.jenkins/pytorch/build.sh
+++ b/.jenkins/pytorch/build.sh
@@ -25,11 +25,6 @@
   # only on one config for now, can expand later
   export USE_DEPLOY=ON
 
-  # TODO: Remove this once nvidia package repos are back online
-  # Comment out nvidia repositories to prevent them from getting apt-get updated, see https://github.com/pytorch/pytorch/issues/74968
-  # shellcheck disable=SC2046
-  sudo sed -i 's/.*nvidia.*/# &/' $(find /etc/apt/ -type f -name "*.list")
-
   # Deploy feature builds cpython. It requires these packages.
   # TODO move this to dockerfile?
   sudo apt-get -qq update
diff --git a/.jenkins/pytorch/common.sh b/.jenkins/pytorch/common.sh
index be5245b..06ac005 100644
--- a/.jenkins/pytorch/common.sh
+++ b/.jenkins/pytorch/common.sh
@@ -8,6 +8,13 @@
 # Save the SCRIPT_DIR absolute path in case later we chdir (as occurs in the gpu perf test)
 SCRIPT_DIR="$( cd "$(dirname "${BASH_SOURCE[0]}")" ; pwd -P )"
 
+if [[ "${BUILD_ENVIRONMENT}" == *linux* ]]; then
+  # TODO: Remove this once nvidia package repos are back online
+  # Comment out nvidia repositories to prevent them from getting apt-get updated, see https://github.com/pytorch/pytorch/issues/74968
+  # shellcheck disable=SC2046
+  sudo sed -i 's/.*nvidia.*/# &/' $(find /etc/apt/ -type f -name "*.list")
+fi
+
 # Required environment variables:
 #   $BUILD_ENVIRONMENT (should be set by your Docker image)