FP32

FP32 is the single-precision binary floating-point format defined by the IEEE 754 specification. Each number occupies 32 bits: 1 sign bit, 8 exponent bits, and 23 fraction (mantissa) bits, which gives roughly 7 decimal digits of precision. It has long been the default format for storing weights and biases in deep learning and is also common in other scientific computing, but its limited precision means rounding errors can accumulate quickly over long chains of operations.
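To make this concrete, here is a minimal Python sketch (using NumPy, assumed available in a deep-learning setting) that prints the three FP32 bit fields of 0.1 and shows how naive repeated FP32 addition drifts away from the exact answer:

```python
import struct
import numpy as np

# Inspect the 32-bit FP32 layout of 0.1: 1 sign bit, 8 exponent bits,
# 23 fraction bits. 0.1 has no exact binary representation, so the
# stored value is already a rounded approximation.
bits = struct.unpack(">I", struct.pack(">f", 0.1))[0]
print(f"sign={bits >> 31} "
      f"exponent={(bits >> 23) & 0xFF:08b} "
      f"fraction={bits & 0x7FFFFF:023b}")

# Naively accumulating 0.1 one million times: the FP32 total drifts
# visibly from the exact answer of 100000, while FP64 stays very close.
total32 = np.float32(0.0)
total64 = np.float64(0.0)
for _ in range(1_000_000):
    total32 += np.float32(0.1)
    total64 += 0.1
print(total32)  # well away from 100000 (around 100958 on IEEE 754 hardware)
print(total64)  # approximately 100000.0000013
```

The loop is deliberately naive sequential summation; smarter schemes (pairwise or Kahan summation) reduce the drift, which is precisely the accumulated rounding error described above.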
