Fix typos (#6348)
* Fix typo
* Fix typo
* Update faq.rst
diff --git a/docs/source/notes/cuda.rst b/docs/source/notes/cuda.rst
index 8949db3..bb750d8 100644
--- a/docs/source/notes/cuda.rst
+++ b/docs/source/notes/cuda.rst
@@ -97,7 +97,7 @@
Memory management
-----------------

-PyTorch use a caching memory allocator to speed up memory allocations. This
+PyTorch uses a caching memory allocator to speed up memory allocations. This
allows fast memory deallocation without device synchronizations. However, the
unused memory managed by the allocator will still show as if used in
``nvidia-smi``. You can use :meth:`~torch.cuda.memory_allocated` and
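For reference, the allocator behavior this paragraph describes can be observed directly. A minimal sketch (assuming a CUDA-capable device; ``torch.cuda.memory_allocated`` is the query named in the doc text):

    import torch

    # Allocate a tensor on the GPU; the caching allocator requests memory
    # from the driver and hands it to the tensor.
    x = torch.randn(1024, 1024, device="cuda")
    print(torch.cuda.memory_allocated())  # bytes held by live tensors

    # Deleting the tensor returns its memory to PyTorch's cache, not to the
    # driver, so nvidia-smi keeps reporting it as used.
    del x
    print(torch.cuda.memory_allocated())  # drops back down immediately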
diff --git a/docs/source/notes/extending.rst b/docs/source/notes/extending.rst
index 216da77..f03b9f4 100644
--- a/docs/source/notes/extending.rst
+++ b/docs/source/notes/extending.rst
@@ -132,7 +132,7 @@
:class:`Module` requires implementing a :class:`~torch.autograd.Function`
that performs the operation and can compute the gradient. From now on let's
assume that we want to implement a ``Linear`` module and we have the function
-implementated as in the listing above. There's very little code required to
+implemented as in the listing above. There's very little code required to
add this. Now, there are two functions that need to be implemented:

- ``__init__`` (*optional*) - takes in arguments such as kernel sizes, numbers
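To make the surrounding passage concrete, here is a minimal sketch of the ``Linear`` module it describes, assuming a ``LinearFunction`` (a torch.autograd.Function) implemented as in the listing the hunk refers to; that listing is outside this diff, so the name is taken on assumption:

    import torch
    import torch.nn as nn

    class Linear(nn.Module):
        def __init__(self, input_features, output_features):
            super(Linear, self).__init__()
            # __init__ takes the sizes and registers learnable parameters.
            self.weight = nn.Parameter(torch.randn(output_features, input_features))
            self.bias = nn.Parameter(torch.randn(output_features))

        def forward(self, input):
            # forward defers the computation (and hence the gradients) to the
            # assumed LinearFunction from the listing referenced above.
            return LinearFunction.apply(input, self.weight, self.bias)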
diff --git a/docs/source/notes/faq.rst b/docs/source/notes/faq.rst
index b0a9f63..ddbf36d 100644
--- a/docs/source/notes/faq.rst
+++ b/docs/source/notes/faq.rst
@@ -84,7 +84,7 @@
My GPU memory isn't freed properly
-------------------------------------------------------

-PyTorch use a caching memory allocator to speed up memory allocations. As a
+PyTorch uses a caching memory allocator to speed up memory allocations. As a
result, the values shown in ``nvidia-smi`` usually don't reflect the true
memory usage. See :ref:`cuda-memory-management` for more details about GPU
memory management.
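The FAQ entry leans on the same allocator behavior; when ``nvidia-smi`` needs to reflect actual usage, the cache can be released. A short sketch (again assuming a CUDA-capable device; ``torch.cuda.empty_cache`` is an existing API):

    import torch

    x = torch.randn(1024, 1024, device="cuda")
    del x  # memory returns to PyTorch's cache, still shown as used in nvidia-smi

    # Return unused cached blocks to the driver so nvidia-smi tracks the
    # true usage more closely.
    torch.cuda.empty_cache()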