But what if the variances differ from point to point, or each point has its own (x,y) correlation matrix? In standard least squares, one simply weights each point by 1/variance.

-- Gene

On Thursday, October 4, 2018, 12:50:05 PM PDT, Tomas Rokicki <rokicki@gmail.com> wrote:

Yep, scaling is critical. One trick used frequently is to scale all dimensions so the variance is one.

-tom
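A minimal numpy sketch of that 1/variance weighting, with Tom's unit-variance rescaling noted in a comment (the function name and the per-point sigmas are illustrative, not something given in the thread):

    import numpy as np

    def weighted_line_fit(x, y, sigma):
        # Standard weighted least squares: weight each point by
        # 1/variance, where sigma[i] is the std. dev. of point i.
        w = 1.0 / sigma**2
        A = np.column_stack([x, np.ones_like(x)])
        W = np.diag(w)
        # Solve the weighted normal equations A^T W A p = A^T W y.
        m, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        return m, b

    # Tom's trick, for the symmetric fits discussed below: rescale
    # each coordinate to unit variance first, e.g. x / x.std() and
    # y / y.std(), so distances in x and y are comparable.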
On Thu, Oct 4, 2018 at 12:29 PM Eugene Salamin via math-fun <math-fun@mailman.xmission.com> wrote:

But the distance from a data point to the fitting line is not invariant under a change of units, say inches to centimeters. Furthermore, the abscissa might be in apples while the ordinate is in oranges. Then what's the distance?
-- Gene
On Thursday, October 4, 2018, 12:19:20 PM PDT, Brent Meeker <meekerdb@verizon.net> wrote:
The difference is that the error is measured in the y-direction in one case and the x-direction in the other. If you do a least-squares fit using the perpendicular distance from the data point to the fitting line, then you get the same fit when you interchange x and y.
Brent
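Such a perpendicular-distance fit is usually called total least squares or orthogonal regression. A minimal numpy sketch (the function name is illustrative, and it assumes the fitted line is not vertical):

    import numpy as np

    def orthogonal_line_fit(x, y):
        # The line minimizing summed squared perpendicular distances
        # passes through the centroid, along the direction of maximum
        # variance, i.e. the top right-singular vector of the
        # centered data matrix.
        xm, ym = x.mean(), y.mean()
        M = np.column_stack([x - xm, y - ym])
        _, _, vt = np.linalg.svd(M, full_matrices=False)
        dx, dy = vt[0]       # direction vector of the fitted line
        m = dy / dx          # breaks down for a vertical line
        b = ym - m * xm
        return m, b

Exchanging x and y sends the direction (dx, dy) to (dy, dx), so (m, b) goes to (1/m, -b/m) exactly, as one would hope.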
On 10/4/2018 9:21 AM, Henry Baker wrote:
In common discussions of least squares, the parameters (m,b) are estimated for the equation y = m*x+b using as data various datapoints [x1,y1], [x2,y2], [x3,y3], etc.
For example, in Wikipedia (where m=beta2 and b=beta1):
https://en.wikipedia.org/wiki/Linear_least_squares#Example
So far, so good.
Now, if I merely exchange x and y, then my equation is x = m'*y + b', where we should have m' = 1/m and b' = -b/m. (Let's ignore the case where the best m = 0.)
However, if I then estimate (m', b') using the same least-squares method, I don't get (1/m, -b/m)!
So either I'm doing something wrong, or perhaps there is a least-squares variant that treats x and y symmetrically?
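Nothing is wrong: the y-on-x slope is cov(x,y)/var(x) while the x-on-y slope is cov(x,y)/var(y), so their product is the squared correlation r^2 <= 1, and m' = 1/m only when the points are exactly collinear. A quick numerical check (assuming numpy; the synthetic data are just for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=200)

    m, b = np.polyfit(x, y, 1)      # fit y = m*x + b
    mp, bp = np.polyfit(y, x, 1)    # fit x = m'*y + b'
    print(m, 1.0 / mp)              # close, but m * mp = r**2 < 1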