[quant][gpu][core][bug fix] Added memset to CacheKey for quantized cudnn conv2d op (#76436)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76436

In `quantized/cudnn/Conv.cpp`, a memset was added for CacheKey. The memset is needed because the compiler inserts implicit padding into CacheKey, and the padding bytes can hold uninitialized values that are used for hashing (see how at::native::ParamsHash is defined). Without the memset, two CacheKey objects can have the same user-defined parameters but different padded values, resulting in different hash outputs.

Test Plan:
```
python test/test_quantization.py -k test_qconv2d_cudnn
```

Reviewed By: jerryzh168

Differential Revision: D35965241

Pulled By: dzdang

fbshipit-source-id: bdeab6c3d6d6066b71b2fb313ac851fe30ae5510
(cherry picked from commit 4ac2b7a858ac62f78b49da3cf43c76d9a7371d29)
diff --git a/aten/src/ATen/native/quantized/cudnn/Conv.cpp b/aten/src/ATen/native/quantized/cudnn/Conv.cpp
index fcf4367..69c61f4 100644
--- a/aten/src/ATen/native/quantized/cudnn/Conv.cpp
+++ b/aten/src/ATen/native/quantized/cudnn/Conv.cpp
@@ -119,6 +119,11 @@
 
   cudnnHandle_t handle = at::native::getCudnnHandle();
   CacheKey key;
+  // memset is needed here because there is implicit padding added for CacheKey, and this can result in uninitialized padded values that are
+  // used for hashing (see how at::native::ParamsHash is defined). Without memset, we can potentially come across a situation where two
+  // CacheKey objects have the same user defined parameters, but
+  // different padded values, resulting in different hash outputs.
+  memset(&key, 0, sizeof(key));
   bool deterministic{true};
   bool allow_tf32{false};
   auto padding_vec = padding_.vec();