Allow tensors with requires_grad=True in c10 ops (#21599)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21599
We previously prevented this because c10 ops can't have a backward pass yet, so calling them with requires_grad=True would produce wrong results
if the c10 op is not implemented purely in terms of other autograd-aware ops.
However, it is a valid use case to have c10 ops that just call other autograd-aware ops, and these ops should be callable with requires_grad=True.
This should fix https://github.com/pytorch/pytorch/issues/21584.
Differential Revision: D15744692
fbshipit-source-id: ba665365c850ef63fc9c51498fd69afe49e5d7ec
diff --git a/torch/csrc/jit/register_c10_ops.cpp b/torch/csrc/jit/register_c10_ops.cpp
index 37d4daa..02c094d 100644
--- a/torch/csrc/jit/register_c10_ops.cpp
+++ b/torch/csrc/jit/register_c10_ops.cpp
@@ -8,9 +8,6 @@
namespace {
at::Tensor unwrap_tensor(at::Tensor&& tensor) {
- if (tensor.requires_grad()) {
- throw std::runtime_error("Autograd not yet supported for c10 ops.");
- }
if (tensor.is_variable()) {
return torch::autograd::Variable(std::move(tensor)).tensor_data();
} else {