Linear least squares is also known as linear regression. It is used for fitting a theoretical curve (aka model curve, aka fitted function) to a set of data.
Fun fact #1: The word “linear” in this context does not mean that the fitted function must be a straight line (although it could be). It could perfectly well be a polynomial, where the monomials are nonlinear, or it could be a Fourier series or some such, where the basis functions are highly nonlinear. The key requirement is that the coefficients, i.e. the parameters that you are adjusting, must enter the fitted function linearly.
Linear regression is by no means the most general or most sophisticated type of data analysis, but it is widely used because it is easy to understand and easy to carry out.
Let’s start with a simple example. (A less-simple, more-realistic example is discussed in section 4. Some of the principles involved, and some additional examples, are presented in section 5.)
Suppose we want to compute the density of some material, based on measuring three samples. The volume and mass of the samples are shown in table 1.
  V        M
  Volume   Mass
  1.000    1.062
  1.100    1.091
  1.200    1.211
This is Monte Carlo data, which is a fancy way of saying it was cooked up with the help of the random-number generator on a computer. The volume readings are evenly spaced. The mass readings are drawn from a random distribution, based on a density of ρ=1 (exactly) plus Gaussian random noise with a standard deviation of 0.05. All the data and plots in this section were prepared using the spreadsheet in reference 1.
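The recipe just described can be reproduced in a few lines. The following is a hypothetical Python sketch (not the actual spreadsheet from reference 1); the seed and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, so the run is reproducible

rho_true = 1.0   # exact density used to synthesize the data
sigma = 0.05     # standard deviation of the Gaussian mass noise

volume = np.array([1.000, 1.100, 1.200])   # evenly spaced volume readings
mass = rho_true * volume + rng.normal(0.0, sigma, size=volume.size)
```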
  V        M        ρ
  Volume   Mass     Density
  1.000    1.062    1.062
  1.100    1.091    0.992
  1.200    1.211    1.010

  Naïve average: 1.021
We plot this data as well, and look at it.
We can estimate the average density graphically, by hand, with the help of a transparent ruler. Draw a horizontal line so that the data points are distributed symmetrically above and below the line. Note:
Optionally, we can take the average of these three densities numerically, as shown at the bottom of table 2, although this is slightly naïve (for reasons discussed in item 3). The value we compute is very nearly the same as the value corresponding to the height of the horizontal line we drew using the transparent ruler, which makes sense. This is plotted in figure 2.
Note that taking the average is tantamount to a one-parameter fit. There is only one quantity to be determined from the data, namely the average density.
  V        M            ρ            Weight
  Volume   Mass         Density      Factor
  1.000    1.062 ±.05   1.062 ±.05   1.000
  1.100    1.091 ±.05   0.992 ±.046  1.210
  1.200    1.211 ±.05   1.010 ±.042  1.440

  Naïve average: 1.021     Weighted average: 1.018
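Both averages in table 3 can be checked with a one-liner apiece. The weight factor is V², because σ_ρ = σ_M/V, so the weighted average reduces to Σ V·M / Σ V². A quick numerical check:

```python
import numpy as np

V = np.array([1.000, 1.100, 1.200])
M = np.array([1.062, 1.091, 1.211])

rho = M / V                     # per-sample densities
naive = rho.mean()              # unweighted average

w = V**2                        # weight = 1/sigma_rho^2, up to a constant
weighted = np.sum(w * rho) / np.sum(w)   # same as sum(V*M)/sum(V**2)

print(round(naive, 3), round(weighted, 3))   # 1.021 1.018
```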
Again, we plot the data and look at it. Again we note that taking the average is tantamount to fitting the data using a one-parameter model, i.e. a zeroth-order polynomial, i.e. a horizontal straight line.
The key step in the analysis is to draw a straight line through the data, as shown in figure 4.
This can be done by hand using a transparent ruler. Draw a line so that the data points are distributed symmetrically above and below the line. Note:
Pinning the ruler gives us a one-parameter model. In fact if we do it properly, it gives the same result as the weighted average in item 3 – the same conceptually, numerically, and in every other way. It should come as no surprise that we want a one-parameter model, given that we used one-parameter models in item 2 and item 3. See section 3.1 for more about how and why we pin the ruler.
Refer to section 3.2 and section 3.3, if you dare, for a discussion of some of the misconceptions that can arise in conjunction with this approach.
If you don’t believe me, take a look at figure 5. The red line shows what would happen if you performed a two-parameter straight-line fit. It’s a horror show. The slope of the two-parameter fit is wildly different from the actual density. The one-parameter fit (shown in blue) does a much, much better job.
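You can reproduce the horror show numerically. For the three points in table 1, the pinned one-parameter fit stays close to the true density of 1, while the unconstrained two-parameter fit does not (the specific numbers apply to this particular data set):

```python
import numpy as np

V = np.array([1.000, 1.100, 1.200])
M = np.array([1.062, 1.091, 1.211])

# One-parameter fit through the origin: minimize sum (M - rho*V)^2
rho_pinned = np.sum(V * M) / np.sum(V * V)

# Two-parameter straight-line fit: M = slope*V + intercept
slope, intercept = np.polyfit(V, M, 1)

print(round(rho_pinned, 3))   # ~1.018, close to the true density
print(round(slope, 3))        # ~0.745, wildly different
```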
Keep in mind that the details depend on how much data we have, and how noisy it is. If we had 300 data points rather than 3 data points, we would be better able to afford adding a fitting parameter, but even then we would need some specific rationale for doing so.
In connection with figure 5, when we speak of pinning the ruler, you could use an actual pushpin (with a suitable backing pad), but the usual good practice is to use your pencil or pen. Hold the tip of your pen at (0, 0). Gently push the ruler up against it, and use it as a pivot. In other words, use your pen as a pin. This trick is widely used by artists, wood-shop and metal-shop workers, draftsmen, et cetera.
Note that the situation shown in figure 5 is very common and utterly typical. The red line illustrates an important general principle, namely that it is a very bad idea to use “extra” adjustable parameters when fitting your data. This principle is known as Occam’s Razor and has been known for more than 700 years.^{1} Narrowly speaking, the red line does a better job of fitting the data; it just does a much worse job of explaining and modeling the data. That is to say, the red line actually comes closer to the data points than the blue line does. However, our goal is to interpret the data in terms of density, and the red line does not help us with that. Equation 1b provides us with a sensible model of the situation, and the blue line implements this model. The slope of the blue line can be directly interpreted as density.
Some descriptive terminology:
For a simple yet modern introduction to the fundamental issues involved, see reference 2. See section 3.3 for more about the perils of superfluous parameters. If not pinned, the ruler implements a two-parameter model. Just because you could use a two-parameter model doesn’t mean you should. Just because the tool allows a two-parameter fit doesn’t mean it is a good idea. As the ancient proverb says:
This section exists only to deal with a misconception. If you don’t suffer from this particular misconception, you are encouraged to skip this section.
Some people try to explain pinning the ruler by saying we are fitting the curve to an imaginary data point at (0, 0) ... but I do not recommend this way of thinking about it. It is better to base the decision on good, solid theory than on imaginary data. We do indeed have a good, solid theory, namely the definition of density, equation 1a.
In some sense it would be the easiest thing in the world to take data at (0, 0). We know exactly how much a zero-sized sample weighs, and we know exactly how much volume a zero-sized sample displaces. So this data point is in some sense more accurate than any of the data you see in table 3. However, this data point would be hard to analyze using the method of table 3, because we cannot divide zero mass by zero volume to get information about the density. This is harmless but uninformative. It just reflects the fact that for any density whatsoever, the mass-versus-volume curve must pass through (0, 0).
It must be emphasized that all the calculations involved in table 3 can be and should be done without reference to any imaginary data at the origin ... or any real data at the origin. In third grade you learned how to divide one number by another, and that is all that is needed to calculate the density in accordance with equation 1a.
Starting from

    ρ = M/V                              (2)

you can, if you want, rewrite this so that it looks like the slope of a line:

    M = ρ V                              (3a)
You are free to do this, but you are not required to. Furthermore, the legitimacy of the step from equation 2 to equation 3a is guaranteed by the axioms of arithmetic. It does not depend on any fake data (or real data) at the origin.
Let’s be clear: I do not want to hear any complaints about “fake” data at the origin, for multiple reasons:
The whole idea of using a two-parameter model in mass-versus-volume space is sophomoric, i.e. pseudo-sophisticated yet dumb at the same time. The normal approach would be to calculate the densities and then average them in the obvious way, as in figure 3. People get into trouble if they are sophisticated enough to realize that they can analyze the data in mass-versus-volume space, but not sophisticated enough to do it correctly.
This is another section that exists only to deal with a misconception. If you don’t suffer from this particular misconception, you are encouraged to skip this section.
Sometimes people who ought to know better suggest that “extra” parameters are a way of detecting and/or dealing with systematic errors. This is completely wrong, for multiple reasons.
Example #1: Let’s consider the scenario where Alice and Bob are lab partners. Alice takes some of the data, and Bob takes the rest. Alice screws up the tare, but Bob doesn’t. The data is shown in figure 6. Specifically, the two-parameter straight-line fit to the data, as shown by the red line, has a negligible y-intercept. The slope, meanwhile, is significantly wrong, i.e. not equal to the actual density.
It should be obvious that in this scenario, allowing for a nonzero y-intercept is nowhere near sufficient to detect the problem, let alone resolve it.
In contrast, plotting the data and looking at it, as suggested in item 1, helps a lot.
To repeat: You should never increase the number of parameters in your model without a good reason. Mumbling vague words about some unexplained “experimental error” is never a sufficient rationale. That’s the sort of thing that leads to the disaster shown by the red curve in figure 5 and/or figure 6.
Example #2: If you systematically use the wrong units, e.g. CGS units (g/cc) instead of SI units (kg/m^{3}), the result will suffer from a huge systematic error, but fitting the data using a superfluous y-intercept will be completely ineffective at detecting the problem, let alone resolving it.
Example #3: Porosity can cause systematic errors in any determination of density. These cannot be reliably detected, let alone corrected, by throwing in an “extra” parameter without understanding what’s going on. See section 4.
Example #4: If you forgot to tare the balance in a way that affected all the data equally, it would lead to a nonzero y-intercept on the pseudo-mass-versus-volume curve. However, the converse is not true. The huge y-intercept on the red curve in figure 5 is absolutely not evidence of a screwed-up tare. Conversely, the absence of a significant y-intercept in figure 6 is absolutely not evidence of the absence of systematic error. Furthermore, even if the data behind figure 5 included a screwed-up tare, using a two-parameter fit would not be an appropriate way to resolve the problem. It would just add to your problems. You would have all the problems associated with the red curve in figure 5, plus a screwed-up tare. The correct procedure would be to go back and determine the tare by direct measurement. If this requires retaking all three data points, so be it.
The red curve in figure 6 shows what happens if you try to detect and/or deal with a problem without fully understanding it. Again, the best procedure is to do things right the first time. In an emergency, if you wanted to correct the mistake in figure 6, you would first need to understand the nature of the mistake, and then cobble up a complicated model to account for it. Just throwing in an adjustable y-intercept and hoping for the best is somewhere between unprofessional and completely insane.
If you have a very specific, well-understood nonideality in your experiment, the best procedure is to redesign the experiment to remove the nonideality. The second choice would be to make a richer set of measurements, so that you get independent observations of whatever is causing the nonideality ... for instance by making independent observations of the tare. Failing that, if you are sure you understand the nonideality, it might be OK to complexify the model so that it accounts for that specific nonideality, such as the fancy stepwise correction we see in the cyan curve in figure 7. We use a model of the form y=mx for Bob’s data and a model of the form y=mx+b for Alice’s data. Note that m is the same for both, as it should be, since we are trying to determine the density and it should be the same for both. In this scenario we have an order of magnitude more data than we did in section 2 – 41 points instead of 3 points – so the additional parameter (b) does not cause nearly so much trouble. Away from the step, the slope (m) of the cyan curve is constant and is a very good estimate of the density we are trying to measure. However, even in the best of circumstances, adding variables comes at a cost. You need to make sure you have enough data points (and short enough error bars) so that you can successfully ascertain values for all the parameters. That is, you need to make sure you are on the right side of the bias/variance tradeoff.
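A stepwise model of this shape is still linear in its parameters, so it can be fit in one shot with a design matrix: one column for the shared slope m, and one indicator column that applies the offset b only to Alice’s points. This sketch uses made-up numbers, not the data behind figure 7:

```python
import numpy as np

rng = np.random.default_rng(1)
m_true, b_true = 1.0, 0.3        # hypothetical density and Alice's tare error

V = np.linspace(1.0, 5.0, 40)
is_alice = V < 3.0               # pretend Alice took the low-volume points
M = m_true * V + b_true * is_alice + rng.normal(0, 0.05, V.size)

# Basis functions: b_1(i) = V(i), and an indicator for Alice's points.
# The slope m is shared; the offset b applies only where is_alice is true.
A = np.column_stack([V, is_alice.astype(float)])
(m_fit, b_fit), *_ = np.linalg.lstsq(A, M, rcond=None)
```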
Keep in mind that all the methods discussed in this document are least-squares methods. That means they are in the category of maximum likelihood methods. In other words, they calculate the conditional probability of the data, given the model. For virtually all data-analysis purposes, that’s the wrong way around. What you really want is the probability of the model, given the data. The details of how to do this right are beyond the scope of this document. For simple tasks such as the density determination in section 2 you can get away with maximum likelihood, but please do not imagine that there is any 11th commandment that says maximum likelihood is the right thing to do. A lot of people who ought to know better assume it is OK, even when it isn’t.
Whenever you need to analyze data, you should test your methods. The procedure outlined in section 2 is a good empirical check (but not the only possible check).
Starting from a set of reasonable parameters, generate some artificial data. Add some Monte Carlo noise to the data. Then analyze the data and see how accurately you get the right answer, i.e. how closely the fitted parameters agree with the parameters you started with.
This kind of check is called “closing the loop” around the data reduction process. The closed loop is shown in figure 8. Closing the loop is considered standard good practice aka due diligence.
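A minimal closed loop for the density example: synthesize data from a known parameter, add Monte Carlo noise, fit, and compare the fitted parameter to the one you started with. The names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
rho_in = 1.0                     # parameter we start from

V = np.linspace(1.0, 2.0, 300)
M = rho_in * V + rng.normal(0, 0.05, V.size)   # Monte Carlo noise

rho_out = np.sum(V * M) / np.sum(V * V)        # one-parameter fit

# Closing the loop: the recovered parameter should match the input,
# to within the statistical noise.
print(abs(rho_out - rho_in))
```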
The linest(...) spreadsheet function is often used for finding a “trend line” ... but it is capable of doing much more than that. In general, it finds a regression curve, which need not be a straight line.
In statistics, this fitting procedure is called linear regression. It must be emphasized that this method requires that the fitted function (the regression curve) be a linear combination of the basis functions, but the basis functions themselves do not need to be linear functions of X (whatever X may be).
Let’s do another density determination. This time the task will be somewhat more realistic and more challenging. We will have to apply a higher degree of professionalism. The spreadsheet used to construct the model and do the analysis is available; see reference 1.
In the real world, it is usually easier to measure mass accurately than to measure volume accurately. Therefore, in this section we have arranged for the artificial data to have significant error bars in the volume direction but not in the mass direction, as you can see in figure 9.
In a logical world, data-analysis procedures would be able to handle uncertain abscissas just as easily as uncertain ordinates. However, the most commonly-available procedures are not as logical as you might have hoped. They do not handle uncertain abscissas at all well. Therefore in figure 10 we replot the data using mass as the abscissa. It must be emphasized that figure 9 and figure 10 are two ways of representing exactly the same data. (For clarity, the error bars in these two figures are 3σ long. All other figures use ordinary 1σ bars.)
We divide mass by volume to get the density. This is done numerically, on a sample-by-sample basis, just as we did in section 2. The numerical data table can be found in reference 1. The results are plotted in figure 11. The naïve (unweighted) average is also shown.
At first glance, the data doesn’t look too terrible. However, if you look more closely, it looks like there might be something fishy going on. The lowmass data might be systematically high, while the highmass data might be systematically low.
It’s hard to tell what’s going on just by looking at the data in this way. The professional approach is to look at the residuals. That is, we subtract the fitted average from the data and see what’s left. Even better, if possible, we normalize each of the residuals by its own error bar. Then, to make the data fit nicely on the graph, we reduce the residuals by a factor of 10. This is shown in figure 12. The properly-weighted average is also shown.
Note that normalizing the residuals is easy for artificial data but may be more difficult for real-world data. At this stage in the analysis, you might or might not have a good estimate for the error bars on the data. If necessary, make an order-of-magnitude guess and then forge ahead with the preliminary analysis. With any luck, you can use the results of the preliminary analysis to obtain a reasonable estimate of the uncertainty. You can then redo the analysis using the estimated uncertainty, and check to see whether everything is consistent. By the same token, it is pointless to quote the chi-square of a fit if you are depending on the fit to obtain an estimate of the uncertainty of the data.
Our suspicions are confirmed; there is a definite northwest/southeast trend to the residuals. This is not good. There is some kind of systematic problem.
As mentioned in section 3.3, sometimes people who ought to know better suggest that throwing in a superfluous fitting parameter is a good way to check for and/or correct for systematic errors. Applying this idea to our example gives the results shown in figure 13. As you can see, adding a parameter to the model without understanding the problem is somewhere between unprofessional and completely insane. It is not successful in detecting (let alone correcting) the problem.
After thinking about the problem for a long time, we discover that the material is slightly porous. When we measure the volume by seeing how much water is displaced by the sample, the water percolates into the sample for a distance of 0.2 units. As a result, there is a “shell” surrounding a “core”. Specifically, it turns out that all along, the data was synthesized by stipulating that the effective displacement of the sample is 100% of the core volume plus 2/3rds of the shell volume, plus some noise.
Now that we understand what is going on, we can build a wellfounded model. Specifically, we will fit to a twoparameter model, hoping to find the density of the core and the (apparent) density of the shell. The results are shown in figure 14.
Note that the residuals are much better behaved now. They are closer to zero, and exhibit no particular trend.
As you can see from the yintercept of the model, the apparent density of the shell is 1.518. This is very nearly the ideal answer i.e. 1.5 i.e. the reciprocal of 2/3rds.^{2}
Meanwhile, the other fitted parameter tells us the asymptotic density, i.e. the density of the core, which is 0.995. This is very nearly the ideal answer i.e. 1.0. Note that this is incomparably more accurate than the result we would have gotten via simple oneparameter averaging (as in figure 12) or via a dumb twoparameter straightline fit (as in figure 13). Even if you took data all the way out to volume=10 or volume=100 the shell would cause a significant systematic error. That is to say, the apparent density of the sample approaches the asymptotic density only slowly ... very very slowly.
You can see that in order to make an accurate determination of the density of the material, it was necessary to account for the systematic error. Random noise on the mass and volume measurements was nowhere near being the dominant contribution to the overall error.
If the data had been less noisy, the importance of dealing with the systematic error would have been even more spectacular. (I dialed up the noise on the raw data to make the error bars in figure 9 more easily visible. The downside is that as a result, the systematic errors are only moderately spectacular. They don’t stand out from the noise by as many orders of magnitude as they otherwise would. It is instructive to reduce the noise and rerun the simulation.)
In general, linear regression can be understood as follows. We want to find a fitted function that is a linear combination of certain basis functions.
To say the same thing in mathematical terms, we want to find a best-fit function B that takes the following form:

    B(i) = a_1 b_1(i) + a_2 b_2(i) + ⋯ + a_M b_M(i) = Σ_j a_j b_j(i)     (4)
The a_{j} are called the coefficients, and the b_{j} are called the basis functions. There are M coefficients, where M can be any integer from 1 on up. Naturally, there are also M basis functions.
Using vector notation, we can rewrite equation 4 as

    B(i) = a · b(i)                      (5a)
    B(i) = ⟨a|b(i)⟩                      (5b)

where a is an M-dimensional vector, and where b is a vector-valued function of i. Equation 5b uses Dirac bra-ket notation to represent the dot product.
Given a set of observed Y-values {Y(i)} and a set of basis functions, the simplified fitting procedure finds the optimal coefficient-vector |a⟩ ... where “optimal” is defined in terms of minimizing the distance

    D_u² = Σ_i [Y(i) − B(i)]²            (6a)
         = Σ_i [Y(i) − ⟨a|b(i)⟩]²        (6b)

where D_u is the naïve unweighted distance. The weighted distance is:

    D_w² = Σ_i w_i [Y(i) − B(i)]² / Σ_i w_i      (7)
When minimizing D, you can ignore the denominator in equation 7, since it is a constant.
It must be emphasized that the coefficients a_{j} are always independent of i; that is, independent of which data point we are talking about. In contrast, the weights w_{i} depend on i but are independent of j, i.e. independent of which basis function we are talking about.
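In matrix terms, stacking the basis functions as columns of a design matrix A (with A[i, j] = b_j(i)) turns the minimization into an ordinary least-squares solve. A minimal sketch:

```python
import numpy as np

def linear_fit(Y, basis_list):
    """Least-squares coefficients a_j for B(i) = sum_j a_j * b_j(i).

    basis_list is a list of arrays, one per basis function, each
    evaluated at every data point i.
    """
    A = np.column_stack(basis_list)           # A[i, j] = b_j(i)
    a, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return a
```

For example, fitting a quadratic means passing the three basis functions [ones, x, x**2]; the coefficients come back in the same order.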
Pedagogical suggestion:
In this section, we assume each data point has a nontrivial weight associated with it. If the data is uncorrelated, we should minimize the weighted distance:

    D² = Σ_i W(i) [Y(i) − B(i)]²         (8)
where σ(i) is the uncertainty on the ith point, and the weighting factor W(i) ≡ 1/σ²(i) tells how much emphasis to give the ith point. Alas the spreadsheet linest(...) function does not allow you to specify the weights. It forces you to give all the points the same weight. This is, in general, a very bad practice. The workaround for this problem is to pull a factor of σ(i) out of every point in the data, and out of every point in every basis function. This is doable, but incredibly annoying. Among other things, it means the supposedly-constant function b_0(i) in equation 11a is no longer constant, i.e. no longer independent of i. This means you must set the third argument of linest(y_vec, basis_vecs, affine) to zero, and then supply your own b_0(i) function as one of the basis functions in the second argument.
More generally, we can handle correlated raw data as follows:

    D² = Σ_{i,i′} [Y(i) − B(i)] w_{i i′} [Y(i′) − B(i′)]     (9)

where w is the weight matrix.
Typical spreadsheet programs have a linest() function that doesn’t know anything about weights. For uncorrelated data, you can compensate for this easily enough, as follows: For each data point i, multiply the observed Y(i) and each of the basis functions b(i) by 1/σ. This produces the required factor of 1/σ^{2} in the objective function, in accordance with equation 8.
This is of course synonymous with multiplying the data and the basis functions by √(w_{ii}), in accordance with equation 9.
In the linest() function, you must not enable the built-in feature that gives you a constant term. You need a column of constants to which weighting can be applied. Other than that, you do not need to allocate any new rows or columns, because you can do the division on the fly, dividing one array by another. If the observations are in column Y, the basis functions are in columns A, B, and C, and the uncertainties σ(i) are in column W, you can use an expression like this:
=linest(Y1:Y10/W1:W10, A1:C10/W1:W10, 0, 1)
An example of this can be found on one of the sheets in reference 3.
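The same workaround works outside a spreadsheet: divide the observations and every basis column (including an explicit constant column) by σ(i), then run an ordinary unweighted solve. A sketch, with illustrative names:

```python
import numpy as np

def weighted_fit(Y, basis_list, sigma):
    """Weighted least squares via the scaling trick: each row is
    divided by its own uncertainty sigma(i), then solved unweighted."""
    A = np.column_stack(basis_list) / sigma[:, None]
    a, *_ = np.linalg.lstsq(A, Y / sigma, rcond=None)
    return a
```

Note that any constant basis function must be supplied explicitly here, so that it gets scaled along with everything else.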
Consider the data shown in figure 15.
You will notice that the data points do not have error bars associated with them. Instead I used the model to plot:
The model (including the tolerance band) is plotted in the background, and then the zero-sized point-like data points are plotted in the foreground.
I am not the only person to do things this way. You can look at some data from the search for the Higgs boson. The slide shows the 1σ (green) and 2σ (yellow) bands. The data points are pointlike. We get to ask whether the points are outside the band.
The introduction to reference 4 goes into more detail on this point, and shows additional ways of visualizing the concept. Search for where it talks about “misbegotten error bars”. The point is, putting error bars on the raw data points is just wrong. It will give you the wrong fitted parameters. I am quite aware that typical data-analysis texts tell you to do it the wrong way.
Now the question arises: how do we achieve the desired state? In favorable cases, the weights can be determined directly from the abscissas. In least-squares fitting, the abscissas are considered known, so determining the weights is straightforward.
Sometimes, however, the weights depend on the model in more complicated ways. They may depend on the fitted parameters. This is a chicken-and-egg problem, because the fitting procedure requires weights in order to do the fit, yet the fitted parameters are needed in order to calculate the weights.
The solution is to pick some initial estimate for the weights, and do an initial fit. Then use the fitted parameters to obtain better weights, and iterate.
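The iterate-on-the-weights loop can be sketched as follows. Here `sigma_of` is a hypothetical user-supplied function that computes per-point uncertainties from the current parameter estimates:

```python
import numpy as np

def iterative_fit(Y, basis_list, sigma_of, n_iter=5):
    """Iteratively reweighted linear fit.  sigma_of(a) returns the
    per-point uncertainties implied by the current parameters a."""
    A = np.column_stack(basis_list)
    a, *_ = np.linalg.lstsq(A, Y, rcond=None)       # initial unweighted fit
    for _ in range(n_iter):
        s = sigma_of(a)                             # weights from the model
        a, *_ = np.linalg.lstsq(A / s[:, None], Y / s, rcond=None)
    return a
```

In well-behaved cases the loop converges in a few iterations; if it does not, that is itself a sign the model needs rethinking.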
Let’s apply this to the example in reference 3. In order of decreasing desirability:
Note that nonlinear least-squares fitting is always iterative. If you have to hunt for suitable weights, even linear least-squares fitting becomes iterative.
The documentation for linest(...) is confusing in at least two ways:
The documentation claims the linest() function explains Y(i) in terms of what it calls “X(i)” but we are calling the basis functions b_{j}(i). We write “X(i)” in scare quotes to avoid confusion. In fact, in terms of dimensional analysis, the basis functions don’t even have the same dimensions as x. Consider the case where y(x) is voltage as a function of time; the basis functions necessarily have dimensions of voltage, i.e. the same as y (not time, i.e. not the same as x). IMHO the linest() function should be documented as linest(y_vec, basis_vecs, ...) or something like that.
In contrast, the genuine x (without scare quotes) means something else entirely: Suppose you think of y as a function of genuine x, plotted on a graph with a y-axis and an x-axis. Then it is emphatically not necessary for this genuine x to be one of the basis functions b_{j}. The poster child for this is a Fourier series (with no DC term), where the genuine x is not one of the basis functions.
Let’s be clear: the fitting procedure cares only about b_{j}(i), that is, the value of the jth basis function associated with the ith data point. It does not care about the genuine xvalue associated with the ith point, unless you choose to make x one of the basis functions. The xaxis need not even exist.
To say the same thing another way: Linear regression means that the bestfit function B needs to be a linear function of the parameters a_{j}. It does not mean that the jth basis function b_{j}(i) needs to be a linear function of i. Again, the poster child for this is a Fourier series (with no DC term), where none of the basis functions is linear.
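The Fourier poster child, concretely: the basis functions below are sines and cosines, none of which is a linear function of x, yet the fit is still linear regression because the coefficients a_j enter linearly. The signal here is synthetic:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
Y = 2.0 * np.sin(x) + 0.5 * np.cos(3.0 * x)   # synthetic signal, no DC term

# Basis functions: sin(kx), cos(kx) -- highly nonlinear in x
basis = [np.sin(x), np.cos(x), np.sin(3 * x), np.cos(3 * x)]
A = np.column_stack(basis)
a, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(np.round(a, 3))   # recovers [2.0, 0.0, 0.0, 0.5]
```

Note that the genuine x never appears as a basis function; it only parameterizes them.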
Here’s another point of possible confusion:
The documentation says you should set the third argument (affine) to TRUE if «the model contains a constant term». What it actually means is that the third argument should be set to TRUE if (and only if) you want an x^{0} basis function to be thrown into the model in addition to whatever basis functions you specified explicitly in the second argument.
To say the same thing the other way, if you included an explicit x^{0} basis function in the second argument, the third argument needs to be FALSE.
In particular, if you are doing a weighted fit (as you almost certainly should be), you need to set the third argument (“affine”) to FALSE and provide your own x^{0} points, properly weighted. (There is no way to apply weights to the built-in “affine” term.)
Keep in mind the warning in section 5.4: The basis functions do not need to be linear. A lot of people are confused about this point, because the special case described by equation 11 is the most common case. That is to say, it is common to choose the genuine xvalue to be one of the basis functions. Not necessary but common. We can express that mathematically as:
    b_1(i) = x(i)                        (10)

where x(i) is the genuine x-value that you plot on the x-axis.
In particular, fitting a straight line with a two-parameter fit means choosing b_1 to be the identity function (equation 10 or equation 11b), and choosing b_0 to be the trivial constant function (equation 11a):

    b_0(i) = 1    for all i              (11a)
    b_1(i) = x(i)                        (11b)

This linear equation can be contrasted with the quadratic in equation 14.
A oneparameter fit is used when the straight line is constrained to go through the origin. It omits the constant basis function (equation 11a).
Figure 16 shows a straight line fitted to four data points. Because of the symmetry of the situation, the fitted line has zero slope.
Figure 17 shows the same situation, except that it is a weighted fit. Point #3 has four times the weight of any of the other points. The weight comes from the model. The model says that the error band is only half as wide at abscissa #3.
Figure 18 shows the same situation, except that point #4 is the one with the extra weight.
A lot of people are accustomed to thinking in terms of error bars. This is very often the wrong thing to do. When fitting to raw data points, such as the blue points in section 5.5.1, it is better to think in terms of zero-sized point-like points, with no error bars, sitting within an error band.
However, sometimes you get information that is not in the form of raw data points, but rather cooked distributions. The width of the distribution can be represented by error bars. Let’s be clear: the blue plotting symbols in section 5.5.1 represent numbers, whereas the red plotting symbols in this section represent distributions. A number is different from a distribution as surely as a scalar is different from a high-dimensional vector.
The mathematics of weighted fitting is the same in this case, even though the interpretation of the resulting fit is quite different. This is shown in the following diagrams.
It is not difficult to perform weighted fits using the usual implementation of linest(), even though the documentation doesn’t say anything about it. The procedure is messy and arcane, but not difficult.
The first step is to not use the built-in “affine” feature. Instead create an explicit basis function with constant values, as in equation 11a. This basis function can be weighted, whereas the built-in “affine” feature cannot.
The next step is to create a copy of all your y-values and all your basis functions, scaled by the square root of the weight:

    Y′(i) = √(w_i) Y(i)
    b′_j(i) = √(w_i) b_j(i)              (12)

Then apply the linest() function to the primed quantities.
Note that the weight of a point is inversely proportional to the uncertainty squared. So multiplying by the square root of the weight is the same as dividing by the uncertainty to the first power:

    √(w_i) = 1/σ(i)                      (13)
The scaling factor applied by these equations affects both terms within the square brackets in equation 6b. It gets squared along with everything else, so the effect is the same as the weighting factor in the numerator of equation 7, as desired.
Keep in mind that linest() has no notion of continuity of y as a function of x. It treats the ith point (x_{i}, y_{i}) independently of all other points. Therefore scaling the y-values in accordance with equation 12 does no harm. It has no effect other than the desired weighting factor.
A quadratic is perhaps the simplest possible nonlinear function. Figure 22 shows an example of using multiparameter leastsquares fitting to fit a quadratic to data. The fit (and the figure) were produced using the "polynomialplain" page of the spreadsheet in reference 5.
In the figure, the blue points are the observed data. For the purposes of this example, the data was cobbled up using the ideal distribution given by the black curve, plus some noise. (In the real world, you do not normally have the ideal distribution available; all you are given are the observed data.) The magenta curve is the optimal^{3} quadratic fit to the data.
One slightly tricky thing about this example is the following: when entering the "linest" expression into cells Q5:S5, you have to enter it as an array expression. To do that, highlight all three of the cells, type in the formula, and then hit <ctrl+shift+enter> (not simply <enter>). That is to say, while holding down the <ctrl> and <shift> keys with one hand, hit the <enter> key with the other hand.
More generally, whenever entering a formula that produces an array (rather than a single cell), you need to use <ctrl+shift+enter> (rather than simply <enter>). Another application of this rule applies to the transpose formula in cells O5:O7.
For details about what the linest() function does, see reference 6.
Also note the Box-Muller transform in columns A through D. This is the standard way to generate a normal Gaussian distribution of random numbers. Depending on what version of spreadsheet you are using, and what add-ons you have installed, there may or may not be easier ways of generating Gaussian noise. I habitually do it this way, to maximize portability.
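For reference, here is what the Box-Muller transform looks like outside a spreadsheet, as a minimal Python sketch; the function names and the seed are mine, not from the spreadsheet.

```python
import math
import random

def box_muller(u1, u2):
    """Turn two independent uniform deviates, u1 in (0,1] and u2 in [0,1),
    into two independent standard-normal deviates."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def gaussian_noise(n, seed=0):
    """Generate n standard-normal deviates using only uniform random
    numbers, mirroring the spreadsheet's Box-Muller columns."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        # 1 - random() lies in (0, 1], which keeps log() well-defined.
        z1, z2 = box_muller(1.0 - rng.random(), rng.random())
        out.extend([z1, z2])
    return out[:n]
```

Like the spreadsheet version, this needs nothing beyond a uniform random-number generator, which is why it is so portable.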
Note that if you hit the <F9> key, it will recalculate the spreadsheet using a new set of random numbers. This allows you to appreciate the variability in the results.
Note that if you look at the formula in cell Q5, the second argument to the linest() function explicitly specifies two basis functions, namely the functions^{4} in columns J and K. However, the coefficient vector found by linest() and returned in cells Q5 through S5 is a vector with three (not two) components. That’s because by default, linest() implicitly uses the constant function (equation 11a) as one of the basis functions, in addition to whatever basis functions you explicitly specify. (If you don’t want this, you can turn it off using the optional third argument to linest(). An example of a fit that does not involve the constant function can be found in section 5.8.)
To summarize, the basis functions used for this quadratic fit are
 B_{0}(x) = 1  
 B_{1}(x) = x          (14)  
 B_{2}(x) = x²
This can be considered an extension or elaboration of equation 11.
Last but not least, note that linest() returns the coefficients in reverse order! That is, in cells Q5, R5, and S5, we have the coefficients a_{2}, a_{1}, and a_{0} respectively. I cannot imagine any good reason for this, but it is what it is.
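The same quadratic fit can be sketched in Python with numpy. The data below is made up and noiseless, so the fit recovers the coefficients exactly; note that np.polyfit, like linest(), returns the highest-degree coefficient first.

```python
import numpy as np

# Made-up data lying exactly on y = 2 + 0.5*x - 0.25*x^2 (no noise,
# so the fit recovers the coefficients exactly).
x = np.linspace(0.0, 4.0, 9)
y = 2.0 + 0.5 * x - 0.25 * x**2

# Explicit basis functions: the constant, x, and x^2.
B = np.column_stack([np.ones_like(x), x, x**2])
a, *_ = np.linalg.lstsq(B, y, rcond=None)    # a = [a0, a1, a2]

# np.polyfit, like linest(), returns the coefficients in reverse
# order, highest power first:
p = np.polyfit(x, y, 2)                      # p = [a2, a1, a0]
```

Apparently the reversed-order convention is not unique to spreadsheets.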
For our next example, we fit some data to a three-term Fourier sine series. That is to say, we choose to use the following three basis functions:
 B_{1}(x) = sin(πx/L)  
 B_{2}(x) = sin(2πx/L)          (15)  
 B_{3}(x) = sin(3πx/L)

for x on the interval [−L, L].
As a premise of this scenario, we assume a priori that we are looking for an odd function. Therefore we do not include any cosines in the basis set. This also means that we do not include the constant function (equation 11a) in the basis set. The constant function can be considered a zero-frequency cosine, so there is really only one assumption here, not two. We need to be careful about this, because the linest(...) function will stick in the constant function unless we explicitly tell it not to, by specifying false as the third argument to linest(...), as you can see in cell I2 of the Fourier-triangle tab in reference 5.
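The same idea can be sketched in Python with numpy. The interval [−L, L], the coefficient values, and the noise level below are made up for illustration; the basis deliberately contains sines only, with no constant column, which is the analog of passing false as the third argument to linest(...).

```python
import numpy as np

# Made-up odd function on [-L, L]: a fundamental plus a third harmonic.
L = 1.0
x = np.linspace(-L, L, 41)
rng = np.random.default_rng(0)
y_ideal = np.sin(np.pi * x / L) - 0.3 * np.sin(3 * np.pi * x / L)
y = y_ideal + rng.normal(scale=0.05, size=x.size)

# Basis: sin(k*pi*x/L) for k = 1, 2, 3 -- no constant, no cosines.
B = np.column_stack([np.sin(k * np.pi * x / L) for k in (1, 2, 3)])
coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
# coeffs[0] should come out near 1, coeffs[1] near 0, coeffs[2] near -0.3.
```

Because the basis contains no constant column, the fit is forced to be an odd function, exactly as the premise of the scenario demands.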
The color code in figure 23 is the same as in figure 22. That is, the blue points are the data. For the purposes of this example, the data was cobbled up using the ideal distribution given by the black curve, plus some noise. The magenta curve is the optimal^{5} sine-series fit to the data.
As implemented in reference 5, in cells I24 through K38 we tabulate the basis functions. However, this is just for show, and these tabulated values are not used by the linest(...) function. You could delete these cells and it would have no effect on the calculation. Instead, the linest(...) function evaluates the basis functions on the fly, using the array constant trick as discussed in reference 7.
As always, the linest(...) function returns the fitted coefficients in reverse order. We would rather have the coefficients in the correct order, as given in cells I9 through K9, so that they line up with the corresponding basis functions in cells I24 through K38. Alas, so far as I know there is no convenient spreadsheet function that returns the reverse of a list, but the reverse can be computed using the index(...) function, as you can see in cell I9.
The linest(...) function can also return information about the residuals and the overall quality of the fit. To enable this feature, you need to set the fourth argument of linest(...) to true. You also need to arrange for the results of the linest(...) to fill a block M columns wide and 5 rows high, where M is the number of basis functions; for example, M=2 for a two-parameter straight-line fit. That is, highlight a block of the right size, type in the formula, and then hit <ctrl+shift+enter>.
We need to estimate the uncertainty associated with the fitted parameters. The procedures for doing this in the linear case are almost identical to the nonlinear case. See reference 8.
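As a sketch of one standard recipe (which may or may not match the procedure in reference 8): for an unweighted linear fit, the covariance of the fitted coefficients can be estimated as (BᵀB)⁻¹ times the residual variance, with N − M degrees of freedom. The straight-line data below is made up for illustration.

```python
import numpy as np

def fit_with_uncertainty(B, y):
    """Unweighted linear least squares, returning the coefficients and
    an estimate of their one-sigma uncertainties.  Covariance recipe:
    cov = (B^T B)^{-1} * s^2, where s^2 is the residual variance."""
    N, M = B.shape
    coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
    resid = y - B @ coeffs
    s2 = resid @ resid / (N - M)             # residual variance
    cov = np.linalg.inv(B.T @ B) * s2        # parameter covariance matrix
    return coeffs, np.sqrt(np.diag(cov))

# Made-up straight-line data: y = 1 + 2x plus Gaussian noise.
x = np.linspace(0.0, 1.0, 20)
B = np.column_stack([np.ones_like(x), x])    # constant and x basis
rng = np.random.default_rng(1)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)
coeffs, errs = fit_with_uncertainty(B, y)
```

Hitting <F9> in the spreadsheet and watching the fitted coefficients jump around is the experimental counterpart of what these error bars claim to quantify.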