Fix cuda out of memory test (#13864)

Summary:
torch.randn(big_number_here, dtype=torch.int8) is wrong because randn
samples from a normal distribution, which isn't implemented for integer
dtypes like torch.int8. I've changed the test to use torch.empty instead,
which only allocates memory and works for any dtype.
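A minimal CPU-side sketch of the failure mode (assuming a recent PyTorch build; the exact error message varies by version, but the exception type is RuntimeError):

```python
import torch

# randn samples from a normal distribution, which is only defined for
# floating-point dtypes, so an integer dtype raises a RuntimeError.
try:
    torch.randn(4, dtype=torch.int8)
    raised = False
except RuntimeError:
    raised = True
assert raised

# torch.empty just allocates uninitialized storage, so any dtype works,
# making it suitable for triggering an allocation-size OOM in the test.
t = torch.empty(4, dtype=torch.int8)
assert t.dtype == torch.int8
```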
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13864

Differential Revision: D13032130

Pulled By: zou3519

fbshipit-source-id: d157b651b47b8bd736f3895cc242f07de4c1ea12
diff --git a/test/test_cuda.py b/test/test_cuda.py
index 3ab9997..30343f5 100644
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -833,7 +833,7 @@
         tensor = torch.zeros(1024, device='cuda')
 
         with self.assertRaisesRegex(RuntimeError, "Tried to allocate 80.00 GiB"):
-            torch.randn(1024 * 1024 * 1024 * 80, dtype=torch.int8, device='cuda')
+            torch.empty(1024 * 1024 * 1024 * 80, dtype=torch.int8, device='cuda')
 
         # ensure out of memory error doesn't disturb subsequent kernel
         tensor.fill_(1)