Fix small typo (#51542)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51541
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51542
Reviewed By: albanD
Differential Revision: D26199174
Pulled By: H-Huang
fbshipit-source-id: 919fc4a70d901916eae123672d010e9eb8e8b977
diff --git a/torch/optim/optimizer.py b/torch/optim/optimizer.py
index b3e38c6..53560a5 100644
--- a/torch/optim/optimizer.py
+++ b/torch/optim/optimizer.py
@@ -191,7 +191,7 @@
Args:
set_to_none (bool): instead of setting to zero, set the grads to None.
- This is will in general have lower memory footprint, and can modestly improve performance.
+ This will in general have lower memory footprint, and can modestly improve performance.
However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it,
a None attribute or a Tensor full of 0s will behave differently.
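The docstring above describes the behavioral difference between `zero_grad(set_to_none=True)` and the default in-place zeroing. A minimal runnable sketch of that difference, assuming PyTorch is installed (the parameter and optimizer here are illustrative, not from the PR):

```python
# Sketch: after zero_grad(set_to_none=True), .grad is None (memory can be
# freed); after zero_grad(set_to_none=False), .grad is a tensor of zeros.
import torch

param = torch.nn.Parameter(torch.ones(3))
opt = torch.optim.SGD([param], lr=0.1)

param.sum().backward()
assert param.grad is not None  # backward populated the gradient

opt.zero_grad(set_to_none=True)
grad_after_none = param.grad   # None: accessing it behaves differently
                               # than a tensor full of 0s

param.sum().backward()
opt.zero_grad(set_to_none=False)
grad_after_zero = param.grad   # an actual tensor of zeros, kept in place
```

Code that does manual operations on `param.grad` (point 1 in the docstring) must therefore guard against `None` when `set_to_none=True` is used.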