On Intermediate Precision Required for Correctly-Rounding Decimal-to-Binary Floating-Point Conversion

The algorithms developed ten years ago in preparation for IBM's support of IEEE floating-point on its mainframe S/390 processor use an overly conservative intermediate precision to guarantee correctly-rounded results across the entire exponent range. Here we study the minimal requirement for both bounded and unbounded precision on the decimal side (converting to machine precision on the binary side). An interesting new theorem on continued-fraction expansions is offered, as well as an open problem on the growth of partial quotients for ratios of powers of two and five.
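The partial quotients mentioned in the abstract can be explored with exact rational arithmetic. The following is a minimal sketch (not taken from the report; the function name `partial_quotients` is ours) that applies the Euclidean algorithm to a ratio of powers of five and two:

```python
from fractions import Fraction

def partial_quotients(x: Fraction, max_terms: int = 30):
    """Continued-fraction partial quotients [a0; a1, a2, ...] of a
    positive rational x, obtained via the Euclidean algorithm."""
    terms = []
    num, den = x.numerator, x.denominator
    while den != 0 and len(terms) < max_terms:
        q, r = divmod(num, den)  # a_k and the remainder for the next step
        terms.append(q)
        num, den = den, r
    return terms

# Example: expansion of 5^7 / 2^10 = 78125/1024
print(partial_quotients(Fraction(5**7, 2**10)))
```

Large partial quotients correspond to unusually good rational approximations, which is why their growth for ratios of powers of two and five bears on the intermediate precision needed for correct rounding.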

By: Michel Hack

Published in: IBM Research Report RC23203, 2004


This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).