When subtracting two floating-point numbers $x - y$, we usually lose some precision, especially if $x$ and $y$ are very close. However, under certain conditions where $x$ and $y$ are close but not too close, a guard digit allows us to perform an exact subtraction: when $y/2 \le x \le 2y$.
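For a concrete feel, take a toy decimal format with $\beta = 10$, $p = 3$, and one guard digit (the format parameters and the specific operands are chosen here purely for illustration):

$$10.1 - 9.93 = 0.17 \quad (9.93/2 \le 10.1 \le 2 \cdot 9.93\text{, so the result is exact}),$$

$$12.3 - 1.27 = 11.03 \quad (\text{condition violated; the 4-digit result must be rounded to } 11.0).$$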
We can create a rudimentary mechanism for detecting precision loss during subtraction by scaling up the values before we subtract them, then scaling down the difference by the same factor.
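One way this check might look in TypeScript is sketched below. The comparison against the directly computed difference, the function name `losesPrecision`, and the default scale factor of 3 are illustrative choices rather than a canonical implementation, and because the scaling itself can round, the test can produce both false positives and false negatives.

```typescript
/**
 * Rudimentary precision-loss check for a floating-point subtraction x - y.
 *
 * Compute the difference once directly, and once after scaling both operands
 * by a factor that is not a power of two, then scaling the result back down.
 * If the exact difference is representable, the two paths tend to agree; when
 * rounding occurs they often diverge. This is only a heuristic.
 */
function losesPrecision(x: number, y: number, scale: number = 3): boolean {
  const direct = x - y;
  const rescaled = (x * scale - y * scale) / scale;
  return direct !== rescaled;
}

console.log(losesPrecision(0.75, 0.5)); // false: 0.75 - 0.5 = 0.25 is exact
```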
Intuition suggests that this is true; now let's construct a rigorous proof.
If $x$ and $y$ have the same exponent, then certainly $x \ominus y$ is exact. Otherwise, from the condition of the theorem, the exponents can differ by at most 1. Scale and interchange $x$ and $y$ if necessary so that $0 \le y \le x$, and $x$ is represented as $x_0.x_1 \ldots x_{p-1}$ and $y$ as $0.y_1 \ldots y_p$ (the low-order digit $y_p$ occupies the guard digit position). Then the algorithm for computing $x \ominus y$ will compute $x - y$ exactly and round to a floating-point number. If the difference is of the form $0.d_1 \ldots d_p$, the difference will already be $p$ digits long, and no rounding is necessary. Since $x \le 2y$, $x - y \le y$, and since $y$ is of the form $0.y_1 \ldots y_p$, so is $x - y$.
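To build some confidence in the argument, here is a small brute-force check, sketched in TypeScript, over a toy binary format with $\beta = 2$ and $p = 4$; the format parameters, the exponent range, and the helper `isRepresentable` are all choices made for this sketch. It confirms that whenever $y/2 \le x \le 2y$, the true difference $x - y$ fits back into the format, which is exactly what guard-digit subtraction needs in order to return it unrounded.

```typescript
// A value is representable in the toy format iff its significand, normalized
// into [8, 16), is an integer, i.e. needs at most 4 bits. For operands whose
// exponents differ by at most 1, a guard digit lets the difference be formed
// exactly before rounding, so the subtraction is exact precisely when the
// true difference is representable; that is what this search verifies for
// every pair satisfying y/2 <= x <= 2y.

const P = 4;

function isRepresentable(v: number): boolean {
  if (v === 0) return true;
  let m = Math.abs(v);
  while (m < 2 ** (P - 1)) m *= 2; // normalize the significand into [8, 16)
  while (m >= 2 ** P) m /= 2;
  return Number.isInteger(m);
}

let counterexamples = 0;
for (let ex = -2; ex <= 2; ex++) {
  for (let mx = 8; mx <= 15; mx++) {
    for (let ey = -2; ey <= 2; ey++) {
      for (let my = 8; my <= 15; my++) {
        const x = mx * 2 ** ex; // every (mx, ex) pair is a format value
        const y = my * 2 ** ey;
        if (y / 2 <= x && x <= 2 * y && !isRepresentable(x - y)) {
          counterexamples++;
        }
      }
    }
  }
}
console.log(`counterexamples found: ${counterexamples}`); // expect 0
```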
When $\beta = 2$, the hypothesis of Theorem 11 cannot be replaced by the weaker requirement that the exponents of $x$ and $y$ differ by at most 1; the stronger condition $y/2 \le x \le 2y$ is still necessary. The analysis of the error in $(x - y)(x + y)$, immediately following the proof of Theorem 10, used the fact that the relative error in the basic operations of addition and subtraction is small; this is the most common kind of error analysis.
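For reference, the "small relative error" fact that this kind of analysis leans on is the usual model of rounded arithmetic; writing $\epsilon$ for the unit roundoff and $\oplus$, $\ominus$ for the computed operations (notation assumed here rather than taken from the text above), correctly rounded addition and subtraction satisfy

$$x \oplus y = (x + y)(1 + \delta_1), \qquad x \ominus y = (x - y)(1 + \delta_2), \qquad |\delta_1|, |\delta_2| \le \epsilon,$$

and with only a single guard digit the same relations hold with the bound relaxed to $2\epsilon$.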