Saturday, 15 June 2013

Decimal accuracy of binary floating point numbers

I've found this problem in many interview exams, but I don't see how to work out a proper solution myself. The problem is:

How many digits of accuracy can be represented by a floating point number stored in two 16-bit words?

The solution is apparently approximately 6 digits.

Where does that number come from, and how do you work it out?

It's quite simple: a 32-bit IEEE-754 float has 23+1 bits of mantissa (aka significand, in IEEE-speak): 23 stored bits plus one implicit leading bit. The size of the mantissa more or less determines the number of representable decimal digits.

To get the number of significant decimal digits, calculate log10(2^24) ≈ 7.22. (Or, if you think only the 23 stored bits should count, since the top bit is fixed anyway, log10(2^23) ≈ 6.92.) In effect, you have 6-7 significant digits for normalized values.
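The calculation above can be checked directly; this is a small sketch using Python's standard `math` module:

```python
import math

# Decimal digits representable by a 24-bit significand
# (23 stored bits + 1 implicit leading bit)
digits_24 = math.log10(2 ** 24)
digits_23 = math.log10(2 ** 23)

print(f"log10(2^24) = {digits_24:.2f}")  # -> 7.22
print(f"log10(2^23) = {digits_23:.2f}")  # -> 6.92
```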

the same can done 64 bit floating point values (doubles). have 52 (or 53) bits store mantissa, calculate log10(252), approx. 15.6 (or 15.9 53 bits), gives 15 important digits.

floating-point, floating-point-precision, numerical
