Deep learning has spurred interest in novel floating point formats. Algorithms often don’t need as much precision as standard IEEE-754 doubles or even single-precision floats. Lower precision makes it possible to hold more numbers in memory, reducing the time spent swapping numbers in and out of memory. Also, low-precision circuits are far less complex. Together these benefits can give a significant speedup.

BF16 (bfloat16) is becoming a de facto standard for deep learning. It is supported by several deep learning accelerators (such as Google’s TPU), and will be supported in Intel processors two generations from now.

The BF16 format is sort of a cross between FP16 and FP32, the 16- and 32-bit formats defined in the IEEE 754-2008 standard, also known as half precision and single precision.

| Format | Total bits | Exponent bits | Fraction bits | Sign bit |
|--------|------------|---------------|---------------|----------|
| FP32   | 32         | 8             | 23            | 1        |
| FP16   | 16         | 5             | 10            | 1        |
| BF16   | 16         | 8             | 7             | 1        |
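
To make the relationship between these layouts concrete, here is a minimal sketch in plain Python (standard library only; the helper names are my own) that converts an FP32 value to BF16 by keeping just its upper 16 bits. This works because BF16 shares FP32's sign and exponent fields; note that real hardware typically rounds to nearest even rather than truncating.

```python
import struct

def fp32_bits(x: float) -> int:
    """Return the 32-bit IEEE-754 single-precision pattern of x."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to a 16-bit BF16 pattern by keeping the
    top 16 bits (1 sign + 8 exponent + 7 fraction)."""
    return fp32_bits(x) >> 16

def bf16_to_fp32(bits16: int) -> float:
    """Expand a BF16 bit pattern back to FP32 by zero-filling the
    16 low-order fraction bits."""
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

x = 3.140625              # exactly representable in BF16
b = fp32_to_bf16_bits(x)
print(f"{b:016b}")        # 0100000001001001: sign 0, exponent 10000000, fraction 1001001
print(bf16_to_fp32(b))    # 3.140625
```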

BF16 has more exponent bits than FP16 (the same number as FP32) but fewer fraction bits. This design shows that, within a 16-bit budget, the designers chose to trade precision (even lower than FP16's) for a wider dynamic range.
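
The trade-off is easy to quantify from the bit widths in the table above. The sketch below derives each format's largest finite value and the spacing between 1.0 and the next representable number (machine epsilon); the formulas assume an IEEE-style encoding in which the all-ones exponent code is reserved for Inf/NaN, which holds for all three formats.

```python
def max_finite(exp_bits: int, frac_bits: int) -> float:
    """Largest finite value: (2 - 2**-f) * 2**bias, where
    bias = 2**(e-1) - 1 and the top exponent code is reserved."""
    bias = 2 ** (exp_bits - 1) - 1
    return (2 - 2.0 ** -frac_bits) * 2.0 ** bias

def epsilon(frac_bits: int) -> float:
    """Spacing between 1.0 and the next larger representable value."""
    return 2.0 ** -frac_bits

for name, e, f in [("FP32", 8, 23), ("FP16", 5, 10), ("BF16", 8, 7)]:
    print(f"{name}: max ~ {max_finite(e, f):.3e}, eps ~ {epsilon(f):.3e}")

# FP32: max ~ 3.403e+38, eps ~ 1.192e-07
# FP16: max ~ 6.550e+04, eps ~ 9.766e-04
# BF16: max ~ 3.390e+38, eps ~ 7.812e-03
```

BF16 covers roughly the same range as FP32 (about 3.4 × 10^38, versus 65504 for FP16), but its step size near 1.0 is eight times coarser than FP16's.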

References

https://www.maixj.net/ict/bfloat16-19900
https://blog.csdn.net/qq_36533552/article/details/105885714
https://www.cnblogs.com/mengfu188/p/13561682.html