Update `is_floating_point()` docs to mention bfloat16 (#49611)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49610 by explicitly mentioning that `is_floating_point()` returns `True` when passed a `bfloat16` tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49611
Reviewed By: mrshenli
Differential Revision: D25660723
Pulled By: VitalyFedyunin
fbshipit-source-id: 04fab2f6c1c5c2859c6efff1976a92a676b9efa3
diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
index 0294942..ae0ffd9 100644
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -3865,7 +3865,7 @@
is_floating_point(input) -> (bool)
Returns True if the data type of :attr:`input` is a floating point data type i.e.,
-one of ``torch.float64``, ``torch.float32`` and ``torch.float16``.
+one of ``torch.float64``, ``torch.float32``, ``torch.float16``, and ``torch.bfloat16``.
Args:
{input}
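The behavior this docs fix describes can be checked directly. A minimal sketch (assuming a PyTorch build with `bfloat16` support, which has been available since 1.3):

```python
import torch

# torch.is_floating_point() is True for every floating point dtype,
# including bfloat16 -- the case the updated docstring now mentions.
for dtype in (torch.float64, torch.float32, torch.float16, torch.bfloat16):
    t = torch.zeros(1, dtype=dtype)
    print(dtype, torch.is_floating_point(t))  # True for each

# Integer tensors are not floating point.
print(torch.is_floating_point(torch.zeros(1, dtype=torch.int32)))  # False
```

The free function `torch.is_floating_point(input)` and the method `Tensor.is_floating_point()` are equivalent; the docstring patched here covers both.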