We wish to find the roots of the equation
ax^{2} + bx + c = 0        (1)
That is, we wish to find values of x that satisfy the equation, for given values of the coefficients a, b, and c.
The formula in equation 2 is recommended. It is numerically well-behaved. As we shall see in figure 1, this version performs better than the unsophisticated “textbook” version in equation 6, better by many orders of magnitude in situations where one root is much larger than the other.
The names “small” and “large” describe the absolute magnitude of the roots:
|x_{small}| ≤ |x_{big}|        (3)
The rationale behind equation 2 is easy to understand:
The fundamental issue here is the fact that in a computer, floating point numbers are subject to roundoff. Roughly speaking, the roundoff error is on the order of the “machine epsilon”, which is not zero. (Typically there is considerable roundoff error in the 16th decimal place, although this varies somewhat from machine to machine.) There are lots of seemingly-innocuous real-world situations where this matters.
As a general rule, when the terms have the same sign, the sum of two terms in the denominator (as in equation 2b or equation 12) is numerically better behaved than the difference of two terms in the numerator (as in equation 6 or equation 11). Vastly better.
It is a good habit to use equation 2 always, to the exclusion of less-clever formulas such as equation 6 (except in the trivial case where some of the coefficients are zero). You can get away with using equation 6 in situations where you know the two roots are a complex-conjugate pair, or are real and close together, that is, in situations where the discriminant b^{2}−4ac is either negative or small compared to b^{2}.
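As a concrete illustration of the idea behind equation 2, here is a minimal Python sketch (the function name and the choice to raise on complex roots are my own; the text excludes the trivial cases where a or b is zero, and so does the code):

```python
import math

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, for nonzero a and b, real roots."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("complex-conjugate roots; handle separately")
    # Give the square root the same sign as b, so that b + sqrt(...) is a
    # sum of like-signed terms and never suffers catastrophic cancellation.
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    return q / a, c / q   # q/a is the larger root in magnitude, c/q the smaller
```

Note that the small root comes out as c/q, i.e. with the sum in the denominator, which is exactly why this form stays accurate when the textbook numerator would cancel.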
Note: When b is zero, the original equation is trivial. It can be solved by inspection:
x = ±√(−c/a)        (4)
Similarly, when a is zero, the original equation is trivial; it’s not even a quadratic. So in equation 2 we can safely assume that a and b are nonzero.
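The two trivial cases can be dispatched in a couple of lines. A sketch (the function name and the policy of raising in the general case are mine, not part of the text):

```python
import math

def solve_degenerate(a, b, c):
    """Handle the trivial cases of a*x**2 + b*x + c = 0 by inspection."""
    if a == 0.0:                  # not even a quadratic: b*x + c = 0
        return (-c / b,)
    if b == 0.0:                  # a*x**2 + c = 0, per equation 4
        r = math.sqrt(-c / a)     # assumes -c/a >= 0, i.e. real roots
        return (r, -r)
    raise ValueError("general case: use the well-behaved formula (equation 2)")
```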
Figure 1 compares the smart formula (equation 2) with the not-so-smart formula (equation 6) over a range of conditions. The true x_{big} is between 1 and 8, such that log(x_{big}) is uniformly distributed. The true x_{small} is between 10^{−14} and 10^{−19}, such that log(x_{small}) is uniformly distributed.
There is a range of many orders of magnitude where equation 2 produces the correct answers, but equation 6 produces wildly incorrect answers for x_{small}. Cases where the incorrect answer is zero cannot be properly plotted on log-log axes, but are qualitatively indicated by downward-pointing triangles.
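The effect shown in figure 1 is easy to reproduce. Here is a small Python experiment (the specific coefficients are my own illustrative choice) on a quadratic whose true roots are near 10^{8} and 10^{−8}:

```python
import math

a, b, c = 1.0, -1e8, 1.0            # true roots: roughly 1e8 and 1e-8
sq = math.sqrt(b * b - 4 * a * c)

# Equation-6 style: -b and sq are nearly equal, so the subtraction
# cancels almost every significant digit of the small root.
naive_small = (-b - sq) / (2 * a)

# Equation-2 style: the denominator is a sum of like-signed terms.
stable_small = (2 * c) / (-b + sq)

print(naive_small, stable_small)
```

On an IEEE double machine the naive value comes out wrong by tens of percent, while the stable value is correct to essentially full precision.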
By way of contrast, let’s see what happens if we try to solve a real-world equation. Here is an equation that comes up in chemistry, when calculating the pH of an acid solution:
[H^{+}]^{2} + K_{a} [H^{+}] − K_{a} C_{HA} = 0        (5)
Let’s see what happens if we try to solve that using the “textbook” version of the quadratic formula.
x = (−b ± √(b^{2} − 4ac)) / (2a)        (6)
where in this case the variables are:
a = 1;   b = K_{a};   c = −K_{a} C_{HA};   x = [H^{+}]        (7)
Let’s do a numerical example, in the case where the acid is strong but moderately dilute:
| (8) |
We are talking about a hypothetical acid. Let’s assume we arrived at the K_{a} value by taking the average of various estimates. There is a huge amount of uncertainty in the resulting K_{a} value, easily ±1×10^{4} or even more. The uncertainty in the concentration is negligible by comparison. Plugging the K_{a} and C_{HA} numbers into equation 6, we get
| (9) |
Now some people might decide on the basis of «common sense» that the number inside the square root could be rounded off to 3.210×10^{9}. The uncertainty is so large that the sig-figs rules require us to round this number to a single digit, so carrying three extra digits «should» be plenty, or so the story goes. So let’s try rounding off and see what happens when we continue the calculation:
| (10) |
which is just completely wrong. Both of the alleged roots of the quadratic are negative. It is physically impossible for the [H^{+}] concentration to be negative.
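The same failure can be reproduced with illustrative numbers of our own (not the values from equation 8). Whenever b^{2} dwarfs |4ac|, rounding the quantity under the square root to a few significant digits wipes out the small root entirely:

```python
import math

a, b, c = 1.0, -1e8, 1.0               # small root is near 1e-8
disc = b * b - 4 * a * c               # 9999999999999996.0

# Round the radicand to about 4 significant digits, as «common sense»
# sig-figs reasoning might suggest. This discards exactly the digits
# that distinguish sqrt(disc) from |b|.
rounded = round(disc, -12)             # becomes 1.0e16

small_kept    = (-b - math.sqrt(disc))    / (2 * a)
small_rounded = (-b - math.sqrt(rounded)) / (2 * a)

print(small_kept, small_rounded)
```

Even without the rounding, the textbook formula already loses most of the digits here; with the rounding it loses everything and returns zero (or, with slightly different numbers, a negative value, as in equation 10).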
Analysis: It turns out that the «common sense» roundoff leading to equation 10 was a disaster. In this situation b^{2} is enormous compared to |4ac|, so √(b^{2}−4ac) agrees with |b| to many decimal places; the digits that got rounded away were precisely the digits that carry the information about x_{small}.
Let’s consider the equation
f(z) = √(1+z) − 1        (11)
where z is small. This comes up in connection with the quadratic formula, and also in special relativity, as discussed in reference 1. Although equation 11 is just fine if you are doing algebra, it is grossly unsuitable if you want to evaluate it numerically. This is because of the infamous “small difference between large numbers” problem. You are much better off using equation 12 instead; it is algebraically exact and numerically well-behaved for all z ≥ −1.
f(z) = z / (√(1+z) + 1)        (12)
The reasoning behind equation 12 is the same as the reasoning behind equation 2, as discussed in section 1.
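The difference is easy to see numerically. In Python, with z = 10^{−12} (my choice of test value), the true answer is very nearly z/2 = 5×10^{−13}:

```python
import math

z = 1e-12
f_naive  = math.sqrt(1.0 + z) - 1.0        # equation-11 form: cancellation
f_stable = z / (math.sqrt(1.0 + z) + 1.0)  # equation-12 form: sum in denominator

print(f_naive, f_stable)
```

On an IEEE double machine the equation-11 form retains only about four correct digits here, while the equation-12 form is good to machine precision.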
Let’s investigate another way of dealing with equation 11. You could expand the square root using a first-order Taylor series, namely:
√(1+z) ≈ 1 + z/2,  so  f(z) ≈ z/2        (13)
whenever z is small compared to 1. This would give you a reasonably accurate answer if |z| is small enough. On the other hand, there is no real advantage to equation 13, because equation 12 is just as convenient and is less restricted.
If you want more accuracy than is provided by a first-order Taylor series, you should not assume that the best way forward is to use a higher-order Taylor series. Often there are other numerical methods that are better behaved. That is, they converge more quickly, giving higher accuracy with less work.
For an interesting application of these ideas, see reference 1.