Brain Bit Precision Int32 FP32, Int16 FP16, Int8 FP8, Int6 FP6, Int4? Idealness of Computational Machine Learning ML TOPS

QuantumHelos
Joined: 5 Nov 17
Posts: 190
Credit: 64239858
RAC: 0
Topic 225036

Brain Bit Precision Int32 FP32, Int16 FP16, Int8 FP8, Int6 FP6, Int4? Idealness of Computational Machine Learning ML TOPS for the human brain:

Brain-level Int/Float inferencing is ideally done in INT8/INT7, with error bits or floating-point remainders.
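As a rough sketch of what that could look like in practice (my own example, not from any study; the function name and NumPy usage are assumed purely for illustration):

import numpy as np

# Sketch of symmetric signed quantization that also keeps the floating-point
# remainder ("error bits") left over after rounding. Purely illustrative.
def quantize_with_remainder(x, bits=8):
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    peak = np.max(np.abs(x))
    scale = peak / qmax if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    remainder = x - q * scale                       # float residual after quantization
    return q, scale, remainder

x = np.random.randn(8).astype(np.float32)
q, scale, rem = quantize_with_remainder(x, bits=8)
print("max abs remainder:", float(np.abs(rem).max()))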

Comparison list: RS

48-bit Int+Float Int48+FP48 (many connections, eyes for example) HDR Vision

40-bit Int+Float Int40+FP40 HDR Basic

Int16 FP32

Int8 Float16 (2-channel, brain node) (3% in brain study)

Int7 (20% in brain study)

Int6 (80% in brain study)

Int5 (wolves; some are 6+)

Int4 (sheep & worms)

Int3 (germ biosystems)

Statistically, one science test stated that 80% of human brains quantify at 6 bits and 20% at 7 bits.

Xbox Series X & PlayStation 5 support down to INT4 (quite likely for quick inferencing).

Be aware that using 4-bit Int instructions potentially means more instructions per unit of data & more micro data transfers.
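A hypothetical illustration of why: two 4-bit values have to share one byte, so each use costs an extra pack/unpack step (these helper functions are made up for the example, not a real API):

# Two signed 4-bit values packed into the two nibbles of one byte.
def pack_int4(a, b):
    return ((a & 0x0F) << 4) | (b & 0x0F)       # a, b assumed in [-8, 7]

def unpack_int4(byte):
    def sign_extend(v):
        return v - 16 if v >= 8 else v          # restore the sign of each nibble
    return sign_extend((byte >> 4) & 0x0F), sign_extend(byte & 0x0F)

packed = pack_int4(-3, 5)
print(unpack_int4(packed))                      # (-3, 5)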

INT8 is most commonly able to quantize data with minimal error within 8 bits, much like the Atari STE or the 8-bit Nintendo.
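A quick assumed experiment (uniform random data and plain rounding, purely illustrative) shows how the mean quantization error shrinks as the integer width grows from 4 to 8 bits:

import numpy as np

x = np.random.uniform(-1.0, 1.0, 100_000)
for bits in (4, 6, 7, 8):
    levels = 2 ** (bits - 1) - 1            # signed range, e.g. 127 for INT8
    q = np.round(x * levels) / levels       # quantize then dequantize
    print(f"INT{bits}: mean abs error = {np.abs(x - q).mean():.5f}")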

Colour perception, for example, is many orders of magnitude higher! Otherwise 8-bit EGA colours would be all we would use.


16-bit was not good enough, but 32-bit suits most people! Still, 10-bit (x4) 40-bit & Dolby 12-bit (x4) 48-bit is a luxury & we love it!
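For reference, the arithmetic behind those totals (a back-of-envelope check, assuming 4 channels for the 32/40/48-bit formats mentioned above):

# Levels per channel and total bits for the colour depths mentioned above.
for bits_per_channel, channels in ((8, 4), (10, 4), (12, 4)):
    levels = 2 ** bits_per_channel
    print(f"{bits_per_channel}-bit x {channels} channels: "
          f"{levels} levels per channel, {bits_per_channel * channels} bits total")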

(c)QE https://is.gd/ProcessorLasso

https://science.n-helix.com/2021/03/brain-bit-precision-int32-fp32-int16.html