In the early days of computing, mathematicians focused on understanding and mitigating the limitations of floating point arithmetic. With the shift to double precision, the need to reason carefully about floating point computation diminished, though it remains relevant in some cases. There is now growing interest in lower precision formats, such as half precision, particularly in applications like neural networks. Using lower precision, however, requires revisiting issues such as the different ways of rounding that were studied in the past.
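To make the rounding issue concrete, here is a minimal sketch in Python (using NumPy, which is an assumption; the original text names no library). It contrasts the default round-to-nearest behavior of half precision with stochastic rounding, one of the alternative rounding modes often discussed for low-precision arithmetic. The function `stochastic_round` is a hypothetical illustration, not a method from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_round(x: float) -> np.float16:
    """Round x to a neighboring float16 value, choosing the farther
    neighbor with probability proportional to x's closeness to it."""
    a = np.float16(x)  # nearest float16 (round-to-nearest-even)
    if float(a) == x:
        return a
    # The float16 neighbor on the other side of x.
    b = np.nextafter(a, np.float16(np.inf) if float(a) < x else np.float16(-np.inf))
    # Probability of picking b grows as x approaches b.
    p_b = (x - float(a)) / (float(b) - float(a))
    return b if rng.random() < p_b else a

x = 0.1  # not exactly representable in binary floating point
samples = [float(stochastic_round(x)) for _ in range(10_000)]
print(f"round-to-nearest: {float(np.float16(x)):.6f}")   # biased low for this x
print(f"stochastic mean : {np.mean(samples):.6f}")       # close to the true 0.1
```

The point of the sketch: round-to-nearest always maps a given value to the same float16, so small errors can accumulate with a consistent sign, whereas stochastic rounding is correct in expectation, which is one reason such rounding modes are being revisited for low-precision computation.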