Get accumulate dtype for Intel GPU (#134465)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #134465
There are two function variants that return the accumulate dtype for a given dtype:
- Func1: `c10::ScalarType toAccumulateType(c10::ScalarType type, c10::DeviceType device)`
- Func2: `c10::ScalarType toAccumulateType(c10::ScalarType type, bool is_cuda)`
Func1 is general enough to support different devices, while Func2 only distinguishes CUDA from CPU. This PR adds the Intel GPU (XPU) path to Func1, and users are expected to invoke Func1 to ensure compatibility across devices.
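For illustration (not part of the PR itself), a minimal sketch of calling the device-generic variant; the specific `Half` → `Float` mapping shown is an assumption that XPU follows CUDA's accumulate-type rules:

```cpp
#include <ATen/AccumulateType.h>

int main() {
  // Device-generic variant (Func1): with this PR, DeviceType::XPU is a
  // valid device argument. Assuming XPU mirrors CUDA's mapping, Half
  // accumulates in Float.
  c10::ScalarType acc =
      at::toAccumulateType(c10::ScalarType::Half, c10::DeviceType::XPU);
  return acc == c10::ScalarType::Float ? 0 : 1;
}
```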
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134465
Approved by: https://github.com/Skylion007, https://github.com/atalman
diff --git a/aten/src/ATen/AccumulateType.cpp b/aten/src/ATen/AccumulateType.cpp
index c4623cc..5de4757 100644
--- a/aten/src/ATen/AccumulateType.cpp
+++ b/aten/src/ATen/AccumulateType.cpp
@@ -9,6 +9,8 @@
switch (device) { \
case DeviceType::CUDA: \
return CppTypeToScalarType<at::acc_type_device<scalar_t, c10::DeviceType::CUDA>>::value; \
+ case DeviceType::XPU: \
+ return CppTypeToScalarType<at::acc_type_device<scalar_t, c10::DeviceType::XPU>>::value; \
case DeviceType::MPS: \
return CppTypeToScalarType<at::acc_type_device<scalar_t, c10::DeviceType::MPS>>::value; \
default: \
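As a side note (not in the original PR description), the compile-time mapping that the macro above dispatches to can be checked directly; the `float` result below is an assumption that XPU mirrors CUDA's accumulate type for `at::Half`:

```cpp
#include <ATen/AccumulateType.h>
#include <type_traits>

// acc_type_device is the compile-time mapping used by the switch above.
// Assumption: XPU follows CUDA, so at::Half accumulates as float.
using acc_t = at::acc_type_device<at::Half, c10::DeviceType::XPU>;
static_assert(std::is_same_v<acc_t, float>,
              "Half is expected to accumulate in float on XPU");
```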