> Maybe this is what TLS does?
No, swapping just exchanges the relation. What one needs to do is to put the errors in X and the errors in Y on an equal footing. That's exactly what TLS does.
Another way to think about it is that the error of a point from the line is not measured as a vertical drop parallel to the Y axis but in a direction orthogonal to the line (so that the error breaks up into X and Y components). From this orthogonality you can see that TLS is PCA (principal component analysis) in disguise.
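If a concrete illustration helps, here is a minimal numpy sketch of that equivalence (not from the thread; the toy data and variable names are mine, with noise in y only): the TLS slope is read off the leading principal direction of the centered (x, y) cloud, while OLS minimizes vertical residuals.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 2.0 * x + rng.normal(scale=0.5, size=x.size)   # toy data, noise in y only

    # OLS: minimize the sum of squared *vertical* residuals
    ols_slope = np.polyfit(x, y, 1)[0]

    # TLS: minimize the sum of squared *orthogonal* residuals,
    # i.e. take the first principal component of the centered point cloud
    centered = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    tls_slope = vt[0, 1] / vt[0, 0]

    print(ols_slope, tls_slope)   # with all the noise in y, TLS overshoots a bit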
Had a QuantSci Prof who was fond of asking "Who can name a data collection scenario where the x data has no error?" and then taught Deming regression as a generally preferred analysis [1].
[1] https://en.wikipedia.org/wiki/Deming_regression
Most of the time, if you have a sensor that you sample at, say, 1 kHz and you're using a reliable MCU and clock, the noise terms in the sensor will vastly dominate the jitter of sampling.
So for a lot of sensor data, the error in the Y coordinate is orders of magnitude higher than the error in the X coordinate and you can essentially neglect X errors.
That is actually the case in most fields outside of maybe clinical chemistry and such, where Deming became famous for explaining it (despite not even inventing the method). Ordinary least squares originated in astronomy, where people tried to predict the movement of celestial objects. Timing a planet's position was never an issue (in fact time is defined by celestial position), but getting the actual position of a planet was.
Total least squares regression is also highly non-trivial, because you usually don't measure the same dimension on both axes. So you can't just add up errors, because the fit will depend on the scale you choose. Deming skirts around this problem by using the ratio of the error variances (division also works for different units), but that ratio is rarely known well. Deming works best when the measurement method for both the dependent and independent variables is the same (for example when you regress serum levels against one another), meaning the ratio is simply one. Which of course implies that they have the same unit, so you don't run into the scale-invariance issues that you would in most natural science fields.
From that wikipedia article, delta is the ratio of y variance to x variance. If x variance is tiny compared to y variance (often the case in practice) then will we not get an ill-conditioned model due to the large delta?
If you take the limit of delta -> infinity then you will get beta_1 = s_xy / s_xx which is the OLS estimator.
In the wiki page, factor out delta^2 from the sqrt and take delta to infinity and you will get a finite value. Apologies for not detailing the proof here, it's not so easy to type math...
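For anyone who wants the missing algebra: a sketch in the Wikipedia article's notation (delta the ratio of the error variances, s_xx, s_yy, s_xy the sample moments); I'm reconstructing this from memory, so check it against the article:

    \hat\beta_1 = \frac{s_{yy} - \delta s_{xx} + \sqrt{(s_{yy} - \delta s_{xx})^2 + 4\delta s_{xy}^2}}{2 s_{xy}}

    \sqrt{(s_{yy} - \delta s_{xx})^2 + 4\delta s_{xy}^2}
      = (\delta s_{xx} - s_{yy})\sqrt{1 + \tfrac{4\delta s_{xy}^2}{(\delta s_{xx} - s_{yy})^2}}
      \approx (\delta s_{xx} - s_{yy}) + \tfrac{2\delta s_{xy}^2}{\delta s_{xx} - s_{yy}}
      \quad (\delta \to \infty)

    \hat\beta_1 \approx \frac{\delta s_{xy}}{\delta s_{xx} - s_{yy}} \;\longrightarrow\; \frac{s_{xy}}{s_{xx}}

which is the OLS slope, so the large-delta case degrades gracefully to OLS rather than becoming ill-conditioned.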
The issue in that case is that OLS is BLUE, the best linear unbiased estimator (best in the sense of minimum variance). This property is what makes OLS exceptional.
For most time series, noise in time measurement is negligible.
However, this does not prevent complex coupling phenomena from occurring for other parameters, such as GPS coordinates.
That brings up an interesting issue, which is that many systems do have more noise in y than in x. For instance, time series data from an analog-to-digital converter, where time is based on a crystal oscillator.
Well yeah, x is specifically the thing you control, y is the thing you don't. For all but the most trivial systems, y will be influenced by something besides x which will be a source of noise no matter how accurately you measure. Noise in x is purely due to setup error. If your x noise was greater than your y noise, you generally wouldn't bother taking the measurement in the first place.
Why not? You could still do inference in this case.
You could, and maybe sometimes you would, but generally you won't. If at all possible, it makes a lot more sense to improve your setup to reduce the x noise, either with a better setup or changing your x to be something you can better control.
(Generalized) linear models have a straightforward probabilistic interpretation -- E(Y|X) -- which I don't think is true of total least squares. So it's more of an engineering solution to the problem, and in statistics you'd be more likely to go for other methods such as regression calibration to deal with measurement error in the independent variables.
Is there any way to improve upon the fit if we know that e.g. y is n times as noisy as x? Or more generally, if we know the (approximate) noise distribution for each free variable?
> Or more generally, if we know the (approximate) noise distribution for each free variable?
This was a thing 30 odd years ago in radiometric spectrometry surveying.
The X var was the time slot, a sequence of (say) one-second observation accumulation windows; the Yn vars were 256 (or 512, etc.) sections of the observable ground gamma ray spectrum (many low-energy counts from the ground: Uranium, Thorium, Potassium, and associated breakdown daughter products; some high-energy counts from the infinite cosmic background that made it through the radiation belts and atmosphere to near-surface altitudes).
There was a primary NASVD (Noise Adjusted SVD) algorithm (a simple variance adjustment based on expected gamma event distributions by energy level) and a number of tweaks and variations based on how much other knowledge seemed relevant (broad-area geology, radon expression by time of day, etc.).
See, e.g.: Improved NASVD smoothing of airborne gamma-ray spectra, Minty / McFadden (1998) - https://connectsci.au/eg/article-abstract/29/4/516/80344/Imp...
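For the curious, the core of NASVD as I understand it is just a variance-stabilizing rescale followed by a truncated SVD; a rough numpy sketch (the Poisson-based scaling and the choice of k below are my simplifications, and the tweaks the paper discusses live exactly in those choices):

    import numpy as np

    def nasvd_denoise(spectra, k=8):
        """spectra: (n_timeslots, n_channels) array of gamma-ray counts."""
        mean_spec = spectra.mean(axis=0)
        scale = np.sqrt(np.clip(mean_spec, 1e-9, None))  # ~Poisson noise level per channel
        w = spectra / scale                              # roughly unit-variance noise
        u, s, vt = np.linalg.svd(w, full_matrices=False)
        w_k = (u[:, :k] * s[:k]) @ vt[:k]                # keep the top-k components
        return w_k * scale                               # undo the scaling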
I haven't dealt with statistics for a while, but what I don't get is why squares specifically? Why not power of 1, or 3, or 4, or anything else? I've seen squares come up a lot in statistics. One explanation that I didn't really like is that it's easier to work with because you don't have to use abs() since everything is positive. OK, but why not another even power like 4? Different powers should give you different results, which seems like a big deal because statistics is used to explain important things and to guide our life wrt those important things. What makes squares the best? I can't recall other times I've seen squares used, as my memories of my statistics training are quite blurry now, but they seem to pop up here and there in statistics relatively often.
It has nothing to do with being easier to work with (at least, not in this day and age). The biggest reason is that minimizing sum of squares of residuals gives the maximum likelihood estimator if you assume that the error is iid normal.
If your model is different (y = Ax + b + e where the error e is not normal) then it could be that a different penalty function is more appropriate. In the real world, this is actually very often the case, because the error can be long-tailed. The power of 1 is sometimes used. Also common is the Huber loss function, which coincides with e^2 (residual squared) for small values of e but is linear for larger values. This has the effect of putting less weight on outliers: it is "robust".
In principle, if you knew the distribution of the noise/error, you could calculate the correct penalty function to give the maximum likelihood estimate. More on this (with explicit formulas) in Boyd and Vandenberghe's "Convex Optimization" (freely available on their website), pp. 352-353.
Edit: I remembered another reason. Least squares fits are also popular because they are what is required for ANOVA, a very old and still-popular methodology for breaking down variance into components (this is what people refer to when they say things like "75% of the variance is due to <predictor>"). ANOVA is fundamentally based on the pythagorean theorem, which lives in Euclidean geometry and requires squares. So as I understand it ANOVA demands that you do a least-squares fit, even if it's not really appropriate for the situation.
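To make the "robust" point concrete, a throwaway sketch (not from the book; the data, the Huber delta of 1.0, and the starting point are arbitrary choices, and numpy/scipy are assumed):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 200)
    y = 1.5 * x + 0.5 + rng.standard_t(df=1.5, size=x.size)   # long-tailed noise

    def huber(r, delta=1.0):
        a = np.abs(r)
        return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

    def fit(loss):
        obj = lambda p: np.sum(loss(y - (p[0] * x + p[1])))
        return minimize(obj, x0=[1.0, 0.0]).x

    print(fit(lambda r: 0.5 * r**2))   # plain least squares: often dragged around by the outliers
    print(fit(huber))                  # Huber loss: usually much closer to (1.5, 0.5)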
A power of 1 doesn’t guarantee a unique solution. A simple example has 3 points:
(0,0), (1,0), (1,1)
Any y = a × x with a between zero and one gives you a sum of errors of 1.
Powers less than 1 have the undesired property that they will prefer making one large error over multiple small ones. With the same 3-point example and a power of ½, you get:
- for y = 0, a cumulative error of 1
- for y = x/2, a cumulative error of 2 times √½. That’s √2 or about 1.4
- for y = x, a cumulative error of 1
(Underlying reason is that √(|x|+|y|) < √|x| + √|y|. Conversely, for powers p larger than 1, we have (|x|+|y|)^p > |x|^p + |y|^p.)
Odd powers would require you to take absolute differences to avoid getting, for example, an error of -2 giving a contribution of (-2)³ = -8 to the cumulative error. Otherwise they would work fine.
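You can sanity-check the non-uniqueness claim numerically; a throwaway sketch with the three points above (the grid of slopes is arbitrary):

    import numpy as np

    pts = np.array([(0, 0), (1, 0), (1, 1)], dtype=float)
    x, y = pts[:, 0], pts[:, 1]

    for a in np.linspace(0, 1, 5):
        l1 = np.sum(np.abs(y - a * x))    # sum of |errors|, power 1
        l2 = np.sum((y - a * x) ** 2)     # sum of squared errors
        print(f"a={a:.2f}  L1={l1:.3f}  L2={l2:.3f}")
    # L1 is 1.0 for every slope in [0, 1]; L2 is smallest only at a = 0.5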
The math for squares (https://en.wikipedia.org/wiki/Simple_linear_regression) is easy, even when done by hand, and has some desirable properties (https://en.wikipedia.org/wiki/Simple_linear_regression#Numer...)
Also, quadratics are just much easier to work with in a lot of ways than higher powers. Like you said, even powers have the advantage over odd powers of not needing any sort of absolute value, but quartic equations of any kind are much harder to work with than quadratics. A local optimum on a quartic isn't necessarily a global optimum, you lose the solvability advantages of having linear derivatives, et cetera.
The mean minimizes L2 norm, so I guess there's some connection there if you derive OLS by treating X, Y as random variables and trying to estimate Y conditioned on X in a linear form. "L[Y|X] = E[Y] + a*(X - E[X])"
If the dataset truly is linear then we'd like this linear estimator to be equivalent to the conditional expectation E[Y|X], so we therefore use the L2 norm and minimize E[(Y - L[Y|X])^2]. Note that we're forced to use the L2 norm since only then will the recovered L[Y|X] correspond to the conditional expectation/mean.
I believe this is similar to the argument other commenter mentioned of being BLUE. The random variable formulation makes it easy to see how the L2 norm falls out of trying to estimate E[Y|X] (which is certainly a "natural" target). I think the Gauss-Markov Theorem provides more rigorous justification under what conditions our estimator is unbiased, that E[Y|X=x] = E[Lhat | X=x] (where L[Y|X] != LHat[Y|X] because we don't have access to the true population when we calculate our variance/covariance/expectation) and that under those conditions, Lhat is the "best": that Var[LHat | X=x] <= Var[Lh' | X=x] for any other unbiased linear estimator Lh'.
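For what it's worth, the algebra behind "the L2 norm falls out" is only a couple of lines; a sketch in the same notation, treating the coefficient a as the only unknown:

    f(a) = E\big[(Y - E[Y] - a(X - E[X]))^2\big]
         = \operatorname{Var}(Y) - 2a\operatorname{Cov}(X, Y) + a^2\operatorname{Var}(X)

    f'(a) = -2\operatorname{Cov}(X, Y) + 2a\operatorname{Var}(X) = 0
    \;\Rightarrow\; a = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)}

which is the population version of the OLS slope; the hat version just plugs in the sample covariance and variance.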
To put it very simplistically, from mostly a practical view:
abs: cannot be differentiated around 0; has multiple minima; the error space has sharp ridges
power 4: way too sensitive to noise
power 3: you lose the additivity that squares give you (for independent x and y, var(x+y) = var(x) + var(y))
L1 (abs linear difference) is useful as minimizing on it gives an approximation of minimizing on L0 (count, aka maximizing sparsity). The reason for the substitution is that L1 has a gradient and so minimization can be fast with conventional gradient descent methods, while minimizing L0 is a combinatorial problem and solving that is "hard". It is also common to add an L1 term to an L2 term to bias the solution to be sparse.
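A quick sketch of that bias toward sparsity, assuming sklearn is available (toy data; the alpha value is arbitrary):

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=200)   # only 2 features matter

    print(LinearRegression().fit(X, y).coef_.round(2))   # small but nonzero junk coefficients
    print(Lasso(alpha=0.1).fit(X, y).coef_.round(2))     # the L1 penalty zeroes most of them out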
It all boils down to the fact that mean and variance give a good approximation of a probability distribution.
In the same way that things typically converge to the average, they converge even more strongly to a normal distribution. So estimating noise as a normal distribution is often good enough.
The second order approximation is just really really good, and higher orders are nigh impossible to work with.
For linear models, least squares leads to the BLUE: the Best Linear Unbiased Estimator. This acronym is doing a lot of work, with each of the words having a specific technical meaning.
Fitting the model is also "nice" mathematically. It's a convex optimization problem, and in fact fairly straightforward linear algebra. The estimated coefficients are linear in y, and this also makes it easy to give standard errors and such for the coefficients!
Also, this is what you would do if you were doing Maximum Likelihood assuming Gaussian distributed noise in y, which is a sensible assumption (but not a strict assumption in order to use least squares).
Also, in a geometric sense, it means you are finding the model that puts its predictions closest to y in terms of Euclidean distance. So if you draw a diagram of what is going on, least squares seems like a reasonable choice. The geometry also helps you understand things like "degrees of freedom".
So, many overlapping reasons.
Least squares is guaranteed to be convex [0]. At least for linear fit functions there is only one minimum and gradient descent is guaranteed to take you there (and you can solve it with a simple matrix inversion, which doesn't even require iteration).
Intuitively this is because a multidimensional parabola looks like a bowl, so it's easy to find the bottom. For higher powers the shape can be more complicated and have multiple minima.
But I guess these arguments are more about making the problem easy to solve. There could be applications where higher powers are worth the extra difficulty. You have to think about what you're trying to optimize.
[0] https://math.stackexchange.com/questions/483339/proof-of-con...
One way to think of it is that each point in your data follows your model but with gaussian iid noise shifting them away. The likelihood is then a product of gaussians, mean shifted and rescaled by variance. Maximizing the log-likelihood then becomes minimizing the sum of (x-mu)^2 over the points, which is essentially least squares.
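Spelled out (a sketch, with mu_i the model's prediction for point i and sigma assumed constant and known):

    L(\beta) = \prod_i \frac{1}{\sqrt{2\pi}\,\sigma}
               \exp\!\left(-\frac{(y_i - \mu_i(\beta))^2}{2\sigma^2}\right)
    \;\Rightarrow\;
    -\log L(\beta) = \text{const} + \frac{1}{2\sigma^2}\sum_i (y_i - \mu_i(\beta))^2

so maximizing the likelihood is exactly minimizing the sum of squared residuals.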
Because the Euclidean norm is defined as the square root of a sum of squares. You can drop the square root and calculate everything as a least squares optimization problem. This problem in turn can be solved by finding where the derivative of the quadratic function is zero. The derivative of a quadratic function is a linear function. This means it is possible to find a matrix decomposition, say QR decomposition, and solve the problem directly.
I think you meant sqrt(x^2+y^2)
If you want non linear optimization, your best bet is sequential quadratic programming. So even in that case you're still doing quadratic programming with extra steps.
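And the direct solve mentioned a couple of comments up really is a one-liner in practice; a minimal numpy sketch (toy data; I believe np.linalg.lstsq uses an SVD-based solver internally, and the QR route is shown alongside it):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 1.5 * x + 0.5 + rng.normal(scale=0.3, size=100)

    A = np.column_stack([x, np.ones_like(x)])      # design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes ||A @ coef - y||_2
    print(coef)                                    # ~[1.5, 0.5]

    # Same thing via QR: A = QR  =>  R @ coef = Q.T @ y
    q, r = np.linalg.qr(A)
    print(np.linalg.solve(r, q.T @ y))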
I haven't done it in a while, but you can do cubes (and more) too. Cubes would be the L3 norm, something about the distance between circles (spheres?) in 3d space? I need to read about norms again to tell you why or when to choose that, but I know the Googlable term is "vector norms"
I remember one is Manhattan distance, next is as-the-crow-flies straight line distance, next is if you were a crow on the earth that can also swim in a straight line underwater, and so on
Sorry for my negativity / meta comment on this thread. From what I can tell, the stackexchange discussion in the submission already provides all the relevant points to be discussed about this.
While the asymmetry of least squares will probably be a bit of a novelty/surprise to some, pretty much anything posted here is more or less a copy of one of the comments on stackexchange.
[Challenge: provide a genuinely novel on-topic take on the subject.]
I think there is not much to be said. It is not a puzzle for us to solve, just a neat little mathematical observation.
But bringing it up as a topic, aside from being informative, allows for more varied conversation than is allowed on stack exchange, like exploring alternative modeling approaches. It may not have happened, but the possibility can only present itself given the opportunity.
Least squares and PCA minimize different loss functions. One is the sum of squared vertical (y) distances; the other is the sum of squared orthogonal distances to the line. That is what introduces the differences.
The Pythagorean distance would assume that some of the distance (difference) is on the x axis, and some on the y axis, and the total distance is orthogonal to the fitted line.
OLS assumes that x is given, and the distance is entirely due to the variance in y, (so parallel to the y axis). It’s not the line that’s skewed, it’s the space.
They both fit Gaussians, just different ones! OLS fits a 1D Gaussian to the set of errors in the y coordinates only, whereas TLS (PCA) fits a 2D Gaussian to the set of all (x,y) pairs.
Both of these do, in a way. They just differ in which gaussian distribution they're fitting to.
And how I suppose. PCA is effectively moment matching, least squares is max likelihood. These correspond to the two ways of minimizing the Kullback Leibler divergence to or from a gaussian distribution.
Yes, and if I remember correctly, you get the Gaussian because it's the maximum entropy (least additional assumptions about the shape) continuous distribution given a certain variance.
I found it in the middle of teaching a stats class, and feel embarrassed.
I guess normalising is one way to remove the bias.
You are absolutely correct that the difference between fitting y against x and fitting x against y perfectly demonstrates why the bias exists, but the correct way to remove the bias is not normalization, but to use a coordinate-independent regression technique.
See the other comments in the thread for details.
The least squares model will produce unbiased predictions of y given x, i.e. predictions for which the average error is zero. This is the usual technical definition of unbiased in statistics, but may not correspond to common usage.
Whether x is a noisy measurement or not is sort of irrelevant to this -- you make the prediction with the information you have.
Many times I've looked at the output of a regression model, seen this effect, and then thought my model must be very bad. But then I remember the points made elsewhere in the thread.
One way to visually check that the fit line has the right slope is to (1) pick some x value, and then (2) ensure that the noise on top of the fit is roughly balanced on either side. I.e., that the result does look like y = prediction(x) + epsilon, with epsilon some symmetric noise.
One other point is that if you try to simulate some data as, say
y = 1.5 * x + random noise
then do a least squares fit, you will recover the 1.5 slope, and still it may look visually off to you.
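And if you re-run that kind of simulation with noise added to x as well, the recovered slope comes out systematically low, which is the regression dilution point raised elsewhere in the thread. A hedged sketch (all numbers arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    x_true = rng.normal(size=100_000)
    y = 1.5 * x_true + rng.normal(size=x_true.size)             # noise in y only

    print(np.polyfit(x_true, y, 1)[0])                          # ~1.5, as expected

    x_noisy = x_true + rng.normal(scale=0.5, size=x_true.size)  # now x is noisy too
    print(np.polyfit(x_noisy, y, 1)[0])                         # ~1.5 / (1 + 0.25) ≈ 1.2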
Is it? The wikipedia article says that regression dilution occurs when errors in the x data bias the computed regression line.
But the stackexchange question is asking why an unbiased regression line doesn't lie on the major axis of the 3σ confidence ellipse. This lack of coincidence doesn't require any errors in the x data. https://stats.stackexchange.com/a/674135 gives a constructed example where errors in the x data are zero by definition.
Unless I'm misunderstanding something?
Yes, people want to mentally rotate it, but that's not correct. This is not a "geometric", coordinate-system-independent operation.
IMO this is a basic risk with graphs. It is great to use imagery to engage the spatial reasoning parts of our brain. But sometimes, it is deceiving — like this case — because we impute geometric structure which isn't true about the mathematical construct being visualized.
If the true value is medium high, any random measurements that lie even further above are easily explained, as that is a low ratio of divergence.
If the true value is medium high, any random measurements that lie below by a lot are harder to explain, since their relative divergence (as a ratio) is high.
Therefore, the further you go right in the graph, the more a slightly lower guess is a good fit, even if many values then lie above it.
Without knowing the meaning of that level of mathematical jargon, it feels like a "reticulating splines" sort of line. Makes me want to copy it and use it somewhere.
This is probably obvious, but there is another form of regression that uses Mean Absolute Error rather than Squared Error, as this approach is less sensitive to outliers. The math isn't as elegant, tho.
Your "visual inspection" assumes both X and Y have noise. That's called Total Least Squares.
This is a great diagnostic check for symmetry.
https://t3x.org/klong-stat/toc.html
Klong language to play with:
https://t3x.org/klong/
...as one does...