docs/source-pytorch/common/precision_intermediate.rst (1 addition, 4 deletions)
@@ -115,8 +115,6 @@ BFloat16 Mixed Precision
 
 .. warning::
 
-    BFloat16 requires PyTorch 1.10 or later and is only supported with PyTorch Native AMP.
-
     BFloat16 is also experimental and may not provide significant speedups or memory improvements, offering better numerical stability.
 
     Do note for GPUs, the most significant benefits require `Ampere <https://en.wikipedia.org/wiki/Ampere_(microarchitecture)>`__ based GPUs, such as A100s or 3090s.
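
The section this hunk edits documents BFloat16 mixed precision in Lightning, which is enabled through the ``Trainer`` precision flag. A minimal sketch for reference (not part of the diff; the accepted value is version-dependent, ``"bf16"`` in the releases this change targets and ``"bf16-mixed"`` in later ones):

.. code-block:: python

    # Minimal sketch (assumption, not taken from the diff): enabling BFloat16
    # mixed precision via the Trainer precision flag. The accepted string is
    # version-dependent: "bf16" here, "bf16-mixed" in later Lightning releases.
    from pytorch_lightning import Trainer

    trainer = Trainer(accelerator="gpu", devices=1, precision="bf16")
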
@@ -126,14 +124,13 @@ BFloat16 Mixed precision is similar to FP16 mixed precision, however, it maintai
 Under the hood, we use `torch.autocast <https://pytorch.org/docs/stable/amp.html>`__ with the dtype set to ``bfloat16``, with no gradient scaling.
 
 .. testcode::
-    :skipif: not _TORCH_GREATER_EQUAL_1_10 or not torch.cuda.is_available()
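
For context on the ``torch.autocast`` line above, here is a plain-PyTorch sketch of the described mechanism (an illustration under stated assumptions, not the diffed ``testcode`` block): autocast with ``dtype=torch.bfloat16`` and no ``GradScaler``, since BF16 keeps FP32's exponent range.

.. code-block:: python

    # Illustrative sketch of the mechanism described above: run the forward
    # pass under torch.autocast with dtype=torch.bfloat16 and skip gradient
    # scaling entirely (no torch.cuda.amp.GradScaler is needed for bf16).
    import torch

    model = torch.nn.Linear(32, 2).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    data = torch.randn(8, 32, device="cuda")
    target = torch.randint(0, 2, (8,), device="cuda")

    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = torch.nn.functional.cross_entropy(model(data), target)
    loss.backward()  # bf16 shares fp32's exponent range, so no loss scaling
    optimizer.step()
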