Test TensorTypeSet instead of autograd_meta_ for variable-ness. (#28543)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28543
By the current autograd_meta_ <=> type_set_ invariant (now explicitly documented
in the right place!), the two checks are equivalent. But when I introduce the null
autograd_meta_ optimization, they won't be equivalent anymore: TensorTypeSet is
going to give me the right information no matter what.
In the long run, this patch will be a wash, because eventually everything will
"be a variable". But I am making this change now to make sure that the invariant
actually holds.
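
To illustrate the motivation, here is a minimal, self-contained sketch using
hypothetical stand-in types (not the real c10::TensorImpl or c10::TensorTypeSet)
of a variable whose autograd_meta_ stays null under the planned optimization; the
TLS exclusion part of the check is omitted here for brevity:

#include <cstdint>
#include <memory>

// Hypothetical, stripped-down stand-ins for the c10 types, for illustration only.
enum class TensorTypeId : uint8_t { CPUTensorId = 0, VariableTensorId = 1 };

struct TensorTypeSet {
  uint64_t repr_ = 0;
  void add(TensorTypeId id) { repr_ |= (1ULL << static_cast<uint8_t>(id)); }
  bool has(TensorTypeId id) const {
    return (repr_ & (1ULL << static_cast<uint8_t>(id))) != 0;
  }
};

struct AutogradMetaInterface { virtual ~AutogradMetaInterface() = default; };

struct FakeTensorImpl {
  TensorTypeSet type_set_;
  // With the null autograd_meta_ optimization, a variable with no grad history
  // may keep autograd_meta_ == nullptr even though type_set_ has VariableTensorId.
  std::unique_ptr<AutogradMetaInterface> autograd_meta_ = nullptr;

  bool is_variable_old() const {   // pre-patch check
    return autograd_meta_ != nullptr;
  }
  bool is_variable_new() const {   // post-patch check
    return type_set_.has(TensorTypeId::VariableTensorId);
  }
};

int main() {
  FakeTensorImpl t;
  t.type_set_.add(TensorTypeId::VariableTensorId);
  bool old_answer = t.is_variable_old();  // false: wrong once autograd_meta_ may stay null
  bool new_answer = t.is_variable_new();  // true: matches the type set
  return (new_answer && !old_answer) ? 0 : 1;
}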
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D18171157
Pulled By: ezyang
fbshipit-source-id: cbba8fd5df9e6873a8757925db5f578fecbd2486
diff --git a/c10/core/TensorImpl.h b/c10/core/TensorImpl.h
index efdf65a..03c038c 100644
--- a/c10/core/TensorImpl.h
+++ b/c10/core/TensorImpl.h
@@ -804,7 +804,8 @@
* True if a tensor is a variable. See Note [Tensor versus Variable in C++]
*/
bool is_variable() const {
- return autograd_meta_ != nullptr && !impl::tls_local_tensor_type_set().excluded_.has(TensorTypeId::VariableTensorId);
+ return type_set_.has(TensorTypeId::VariableTensorId) &&
+ !impl::tls_local_tensor_type_set().excluded_.has(TensorTypeId::VariableTensorId);
}
/**
@@ -1579,7 +1580,7 @@
// This pointer always has unique ownership (meaning only one TensorImpl can own it
// at a time).
// This is private because we must maintain dispatcher invariants on it
- // in type_set_.
+ // in type_set_, namely, that autograd_meta_ != nullptr iff type_set_.has(VariableTensorId).
std::unique_ptr<c10::AutogradMetaInterface> autograd_meta_ = nullptr;
protected:
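
The second operand of the && in the patched is_variable() keeps the existing
behavior that thread-locally excluding VariableTensorId makes tensors report as
non-variables. A hedged sketch of that interaction, using a plain thread_local
flag in place of the real impl::tls_local_tensor_type_set().excluded_ machinery:

// Hypothetical, simplified model of the thread-local exclusion check.

// Whether VariableTensorId is currently excluded on this thread.
thread_local bool variable_excluded = false;

// RAII guard, similar in spirit to the guards used to run kernels
// "below" autograd with VariableTensorId excluded.
struct ExcludeVariableGuard {
  ExcludeVariableGuard() { variable_excluded = true; }
  ~ExcludeVariableGuard() { variable_excluded = false; }
};

// Mirrors the shape of the patched is_variable(): type-set membership,
// gated by the thread-local exclusion state.
bool is_variable(bool type_set_has_variable) {
  return type_set_has_variable && !variable_excluded;
}

int main() {
  bool outside = is_variable(/*type_set_has_variable=*/true);  // true
  bool inside = true;
  {
    ExcludeVariableGuard guard;                            // temporarily hide variable-ness
    inside = is_variable(/*type_set_has_variable=*/true);  // false
  }
  return (outside && !inside) ? 0 : 1;
}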