See also: https://en.wikipedia.org/wiki/Minifloat
Related: the hoops one jumps through when training with hardware-accelerated fp16: https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#training
Ha, I remember trying to do 8-bit 4.4 fixed point in Z80 assembly. I was trying to make a boids flocking demo on a Game Boy Color, and I didn’t understand CORDIC functions. I still remember the frustration!
Maybe math here, since this talks about how to represent a number system.
math