Mark mv as CompositeExplicitAutograd (#67373)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67373
The implementation of mv decomposes it into addmv, so it should be
registered as a CompositeExplicitAutograd op.
Test Plan: This shouldn't change any behavior, so CI coverage suffices.
Reviewed By: bdhirsh
Differential Revision: D31973265
Pulled By: alanwaketan
fbshipit-source-id: 3b6850f08e6f671b908a177f148cc6194baa8a13
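As a sketch (not part of the PR), the decomposition claim above can be checked at the Python level: mv's result matches an addmv call with beta=0, which is why a single backend-agnostic kernel suffices.

```python
# Sketch: mv(A, x) is equivalent to addmv(out, A, x, beta=0),
# i.e. 0 * out + 1 * (A @ x). This mirrors the decomposition the
# commit message describes; it is an illustration, not the ATen code.
import torch

A = torch.randn(3, 4)
x = torch.randn(4)

out = torch.zeros(3)  # ignored when beta=0
assert torch.allclose(torch.mv(A, x), torch.addmv(out, A, x, beta=0))
```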
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml
index e8a280e..d59d440 100644
--- a/aten/src/ATen/native/native_functions.yaml
+++ b/aten/src/ATen/native/native_functions.yaml
@@ -3127,7 +3127,7 @@
- func: mv(Tensor self, Tensor vec) -> Tensor
variants: function, method
dispatch:
- CPU, CUDA, SparseCsrCUDA: mv
+ CompositeExplicitAutograd: mv
SparseCPU, SparseCUDA, SparseCsrCPU: mv_sparse
- func: mv.out(Tensor self, Tensor vec, *, Tensor(a!) out) -> Tensor(a!)