LEAST SQUARES FITTING OF ELLIPSOID USING ORTHOGONAL DISTANCES

ABSTRACT: In this paper, we present techniques for ellipsoid fitting based on minimizing the sum of the squares of the geometric distances between the data and the ellipsoid. The literature often uses "orthogonal fitting" in place of "geometric fitting" or "best-fit". A best-fit ellipsoid to a set of points is required for many different purposes. The problem of fitting an ellipsoid is encountered frequently in image processing, face recognition, computer games, geodesy, etc. Today, the increasing precision of GPS and satellite measurements allows us to determine a more realistic Earth ellipsoid. Several studies have shown that the Earth, other planets, natural satellites, asteroids and comets can be modeled as triaxial ellipsoids Burša and Šima (1980), Iz et al. (2011). Determining the reference ellipsoid for the Earth is an important ellipsoid fitting application, because all geodetic calculations are performed on the reference ellipsoid. Algebraic fitting methods solve a linear least squares (LS) problem and are relatively straightforward and fast. Orthogonal ellipsoid fitting, by contrast, is a difficult problem: it is usually impossible to reach a solution with classic LS algorithms, because they often face convergence problems, so special algorithms, e.g. nonlinear least squares algorithms, are necessary. We propose to use geometric fitting as opposed to algebraic fitting. It is computationally more intensive, but it provides scope for placing visually apparent constraints on ellipsoid parameter estimation and is free from curvature bias Ray and Srivastava (2008).


INTRODUCTION
Fitting an ellipsoid to an arbitrary set of points is a problem of fundamental importance in many fields of applied science, ranging from astronomy, geodesy, digital image processing and robotics to metrology. Ellipsoids, though somewhat simple as representations of general 3D shapes, are the only bounded and centric quadrics that can provide information on the center and orientation of an object. Ellipsoid fitting has been discussed widely, and some excellent work has been done in the literature. However, most of these fitting techniques are algebraic, not orthogonal. Various least-squares fitting approaches have been formulated over the years Zhang (1997), but they all fall into two categories: (1) algebraic methods, which are extensively used due to their linear nature, simplicity and computational efficiency, and (2) geometric methods, which solve a nonlinear problem Ray and Srivastava (2008). We could not find enough studies with numerical examples in the literature. Turner et al. (1999) gave a numerical application, but the application's data are not provided. No other comparable orthogonal ellipsoid fitting application could be found in the literature. Against this background, the purpose of this study is to present orthogonal ellipsoid fitting with numerical examples. In this article, we demonstrate that the geometric fitting approach provides a more robust alternative to the algebraic fitting approach, although it is computationally more intensive. The paper has eight parts. First, the basic ellipsoid is introduced with the mathematical equations needed to explain the concepts. Then, the literature relevant to ellipsoid fitting is reviewed, and the choice of estimator used in this research is discussed. Next come the parts dealing with algebraic fitting, orthogonal fitting and a numerical example, including an ellipsoid fitting application based on both the l1-norm and l2-norm methods. The paper concludes with a discussion of theoretical and practical implications and directions for further research.

Ellipsoid
An ellipsoid is a closed quadric surface that is the three-dimensional analogue of an ellipse. An ellipsoid has three different semi-axes (a_x > a_y > b), as shown in Figure 1. The mathematical literature often uses "ellipsoid" in place of "triaxial ellipsoid" or "general ellipsoid". The scientific literature (particularly geodesy) often uses "ellipsoid" in place of "biaxial ellipsoid", "rotational ellipsoid" or "ellipsoid of revolution". Older literature uses "spheroid" in place of rotational ellipsoid. The standard equation of an ellipsoid centered at the origin of a Cartesian coordinate system and aligned with the axes is

x^2/a_x^2 + y^2/a_y^2 + z^2/b^2 = 1

FITTING ELLIPSOID
For the solution of the fitting problem, a linear or linearized relationship is written between the given data points and the unknown parameters (one equation per data point), yielding a system of equations in the unknown parameters.
Here, A is the design matrix, x is the vector of unknown parameters, and l is the measurement vector (data points). For this minimization problem to have a unique solution, the necessary conditions are that n >= 9 and that the data points lie in general position (e.g., not all data points should lie in an elliptic plane). Throughout this paper, we assume that these conditions are satisfied.

u = 9 : number of unknown parameters
n : number of given data points (or measurements)
f = n - u : degrees of freedom

- If f = 0, there is only one (exact) solution, the algebraic solution.
- If f < 0, there is no solution; a solution can be found only with additional constraints.
- If f > 0, which is the most commonly encountered situation, the given data points, which are more numerous than required, cause discrepancies, and the solution is not unique: the system is overdetermined, because n > u, in other words the number of equations is greater than the number of unknowns.

The system of linear equations (3) must be solved. For this system to be consistent, the rank of the design matrix and the rank of the design matrix extended with the constant terms must be equal, i.e. rank(A) = rank(A:l). In general, however, system (3) is inconsistent, because no x exists that satisfies (3) exactly: rank(A) ≤ u, while the rank of the extended matrix rank(A:l) is generally greater than rank(A). An inconsistent system has no exact solution, and only an approximate solution can be derived. The approximately solvable system is obtained by adding a vector of residuals (or corrections) to the right side of (3).
Depending on the choice of residual vector, infinitely many solutions can be obtained. A unique solution can be derived only according to an estimator (objective function); for example, LS always gives a unique solution Bektas and Sisman (2010). Here the question arises: which estimation method should be used?
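As a minimal numerical sketch of the overdetermined case f > 0, the following fragment solves a small hypothetical system A x ≈ l (the matrices are illustrative, not the paper's ellipsoid system) via the normal equations and shows that the residual vector makes the system consistent:

```python
import numpy as np

# A hypothetical overdetermined system: n = 4 equations, u = 2 unknowns (f = 2).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
l = np.array([2.1, 2.9, 4.2, 4.8])

# LS solution of the normal equations: x = (A^T A)^-1 A^T l
x = np.linalg.solve(A.T @ A, A.T @ l)

# Adding the residual vector v = A x - l to the right side of the system
# makes it consistent; the normal equations force A^T v = 0.
v = A @ x - l
```

Any other choice of residual vector would also make the system consistent, which is exactly why an estimator is needed to single out one solution.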

WHICH ESTIMATOR SHOULD BE USED?
It is hoped that the residuals will be small; the more suitable estimation method is the one that produces smaller residuals. Usually, the objective functions are formed based on the minimization of the corrections or of a function of the corrections. There are numerous estimators, among them the l1-norm, l2-norm, lp-norm, Fair, Huber, Cauchy, Geman-McClure, Welsch and Tukey estimators. Two estimator methods come to the forefront. The most used estimators are shown below:
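Written in the paper's correction (residual) notation v_i, the two leading objective functions are:

```latex
l_1\text{-norm:}\quad \sum_{i=1}^{n} |v_i| \to \min
\qquad\qquad
l_2\text{-norm:}\quad \sum_{i=1}^{n} v_i^{2} \to \min
```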

The Comparison of l 1 and l 2 -Norm Methods
The solution of the l2-norm method is always unique, and it is easily calculated. The l2-norm method is widely used in parameter estimation and has an indisputable superiority there. Its disadvantage is that it is affected by outliers (gross errors) and distributes their effect over the sound measurements; ellipsoid fitting illustrates this very clearly. With least-squares techniques, even one or two outliers in a large set can wreak havoc: outlying data exert such a strong effect on the minimization that the estimated parameters are distorted by them. Numerous studies have clearly shown that least-squares estimators are vulnerable to the violation of these assumptions; sometimes, even when the data contain only one bad measurement, l2-norm estimates may be completely perturbed Zhang (1997). The solution of the l1-norm method is not always unique; there may be several solutions. Also, the l1-norm solution is generally not obtained directly but by iterative calculation, so it is not as easily computed as the l2-norm solution. Nevertheless, given today's computational tools, computer capacity and speed, this difficulty is eliminated. The advantage of the l1-norm method is its insensitivity to measurements containing gross errors: the solution is not, or only slightly, affected by such measurements. The author of this study has proposed first cleaning gross and systematic errors from the measurement group using the l1-norm method, and then using the l2-norm method for the parameter estimation itself (optimization problems, adjustment calculus). For further information see Bektas and Sisman (2010).
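The outlier sensitivity contrast above can be seen in the simplest possible setting, estimating a single constant from measurements (a hypothetical one-dimensional illustration, not the paper's ellipsoid data): the l2-norm estimate is the mean, the l1-norm estimate is the median.

```python
import numpy as np

# Five sound measurements near 10, plus one gross error of 50.
data = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 50.0])

l2_estimate = data.mean()      # minimizes sum of squared residuals
l1_estimate = np.median(data)  # minimizes sum of absolute residuals
```

The single bad measurement drags the l2 estimate far from 10, while the l1 estimate stays with the sound majority.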

ALGEBRAIC ELLIPSOID FITTING METHODS
The general equation of an ellipsoid, given as (5), contains ten parameters. In fact, only nine of these ten parameters are independent: for example, if all the coefficients in this equation are multiplied by (-1/K'), we get a new equation containing nine unknown parameters whose constant term is equal to -1.
In this algorithm, we need to check whether the fitted shape is an ellipsoid. In theory, the conditions that ensure that a quadratic surface is an ellipsoid have been well investigated and are explicitly stated in analytic geometry textbooks. An ellipsoid can degenerate into other kinds of elliptic quadrics, such as an elliptic paraboloid; therefore, a proper constraint must be added. Li and Griffiths gave the following definitions Li and Griffiths (2004).
However, 4j - i^2 > 0 is only a sufficient condition guaranteeing that an equation of second degree in three variables represents an ellipsoid; it is not a necessary one. In this paper, we assume that this condition is satisfied. The algebraic method is a linear problem and solves the fitting directly and easily: the ellipsoid fitted to a given data set ((x, y, z)_i, i = 1, 2, ..., n) is obtained by the solution, in the LS sense, of (6). Algebraic methods all have the indisputable advantage of solving a linear LS problem, and the methods for this are well known and fast. However, it is intuitively unclear what is being minimized geometrically; the quantity minimized in (6) is often referred to as the "algebraic distance" Ray and Srivastava (2008). A geometric interpretation given by Bookstein (1979) clearly demonstrates that algebraic methods neglect points far from the center.
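A minimal sketch of the algebraic LS fit under the constant term -1 normalization follows. It assumes the nine-parameter conic vector [A B C D E F G H I] of (6) with the common factor-of-two convention on the cross and linear terms; the sampled test points are hypothetical.

```python
import numpy as np

def fit_ellipsoid_algebraic(pts):
    """Algebraic LS fit: solve D v = 1 for the nine conic parameters
    v = [A B C D E F G H I] of
    A x^2 + B y^2 + C z^2 + 2D xy + 2E xz + 2F yz + 2G x + 2H y + 2I z = 1."""
    x, y, z = pts.T
    D = np.column_stack([x * x, y * y, z * z,
                         2 * x * y, 2 * x * z, 2 * y * z,
                         2 * x, 2 * y, 2 * z])
    v, *_ = np.linalg.lstsq(D, np.ones(len(pts)), rcond=None)
    return v
```

For points lying exactly on an axis-aligned ellipsoid with semi-axes (3, 2, 1) centered at the origin, the fit recovers A = 1/9, B = 1/4, C = 1 and vanishing cross and linear terms.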

FITTING OF ELLIPSOID USING ORTHOGONAL DISTANCES
To overcome the problems with algebraic distances, it is natural to replace them by orthogonal distances, which are invariant to transformations in Euclidean space and which do not exhibit the high-curvature bias. An ellipsoid of best fit in the LS sense to the given data points can be found by minimizing the sum of the squares of the geometric distances from the data to the ellipsoid, where the geometric distance is defined as the distance between a data point and its closest point on the ellipsoid. Determining the best-fit ellipsoid is a nonlinear least squares problem, which in principle can be solved using the Levenberg-Marquardt (LM) algorithm. In general, nonlinear least squares is a complicated issue: it is very difficult to develop methods that can find the global minimizer with certainty, and when a local minimizer has been discovered, we do not know whether it is the global minimizer or merely one of several local minimizers Zisserman (2013).
There is a variety of nonlinear optimization techniques, such as Newton, Gauss-Newton, gradient descent and the Levenberg-Marquardt approximation. However, these fitting techniques involve a highly nonlinear optimization procedure, which often stops at a local minimum and cannot guarantee an optimal solution Li and Griffiths (2004). Away from the minimum, in regions of negative curvature, the Gauss-Newton approximation is not very good; in such regions, a simple steepest-descent step is probably the best plan. The Levenberg-Marquardt method is a mechanism for varying between steepest-descent and Gauss-Newton steps depending on how good the H_GN approximation is locally. It uses the modified Hessian H(x, λ) = H_GN + λI (I: identity matrix):
• When λ is small, H approximates the Gauss-Newton Hessian.
• When λ is large, H is close to the identity, causing steepest-descent steps to be taken.
This algorithm does not require explicit line searches. It takes more iterations than Gauss-Newton, but no line search is required and it converges more frequently. Suppose that we have the unknown parameter set v = [A B C D E F G H I]^T of conic parameters; the general conic equation for an ellipsoid is given as (6). We reach the solution by establishing relationships between variations in the conic coefficients and the orthogonal distances. The initial parameters are derived from the algebraic fitting ellipsoid.
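As a compact illustration of the damped accept/reject iteration described above, the following sketch fits a sphere, the simplest special case of the ellipsoid, for which the orthogonal distance has the closed form ||P_i - c|| - r. It is a hand-written LM loop under simplified assumptions (unit damping matrix, fixed scaling factors), not the paper's full nine-parameter solver.

```python
import numpy as np

def fit_sphere_lm(pts, p0, max_iter=100):
    """Levenberg-Marquardt fit of a sphere with parameters p = [cx, cy, cz, r];
    the residuals are the orthogonal distances ||P_i - c|| - r."""
    p = np.asarray(p0, float)
    lam = 1e-3                            # damping factor lambda

    def residuals(p):
        return np.linalg.norm(pts - p[:3], axis=1) - p[3]

    def jacobian(p):
        diff = pts - p[:3]
        d = np.linalg.norm(diff, axis=1)[:, None]
        return np.hstack([-diff / d, -np.ones((len(pts), 1))])

    r = residuals(p)
    for _ in range(max_iter):
        J = jacobian(p)
        H = J.T @ J                       # Gauss-Newton Hessian H_GN
        g = J.T @ r
        step = np.linalg.solve(H + lam * np.eye(4), -g)
        r_new = residuals(p + step)
        if r_new @ r_new < r @ r:         # improvement: accept, move toward GN
            p, r, lam = p + step, r_new, lam / 10
        else:                             # no improvement: more steepest descent
            lam *= 10
        if np.abs(step).max() < 1e-12:
            break
    return p
```

The same accept/reject logic carries over to the ellipsoid case once the orthogonal distances and their Jacobian with respect to the conic coefficients are available.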
Here:
- P_i' : projection coordinates (onto the ellipsoid) of the given data points P_i,
- J : the n×10 Jacobian matrix, whose ith row corresponds to P_i,
- h : the n×1 right-side vector, whose ith row corresponds to P_i.
We obtained the linearized equation below. The fitted orthogonal ellipsoid is obtained by its solution in the LS sense with the L-M algorithm.

Finding Orthogonal Distances from the Ellipsoid
The most time-consuming part of the fit is the computation of the orthogonal distance between each point and the ellipsoid. Our aim is to find the orthogonal distances from a shifted, oriented ellipsoid; see Figure 2. For detailed information on this subject, refer to Bektas (2014). Both the algebraic and the orthogonal fitting results are shown in Table 1.
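For the special case of an axis-aligned, origin-centered ellipsoid and an exterior point, the foot point can be obtained from a single scalar root-finding problem; a minimal sketch follows (the general shifted-oriented case treated by Bektas (2014) additionally requires translating and rotating the point into the ellipsoid frame).

```python
import numpy as np
from scipy.optimize import brentq

def ellipsoid_distance(p, axes):
    """Orthogonal distance from an exterior point p to the axis-aligned,
    origin-centred ellipsoid with semi-axes `axes`.  The foot point is
    q_i = a_i^2 p_i / (a_i^2 + t), where t > 0 is the root of the
    on-surface constraint  sum_i (a_i p_i / (a_i^2 + t))^2 = 1."""
    p = np.asarray(p, float)
    a2 = np.asarray(axes, float) ** 2

    def f(t):
        return np.sum(a2 * p**2 / (a2 + t)**2) - 1.0

    # f(0) > 0 for an exterior point; this upper bound guarantees f(hi) < 0.
    hi = np.sqrt(np.sum(a2 * p**2)) + 1.0
    t = brentq(f, 0.0, hi)
    q = a2 * p / (a2 + t)                 # foot point on the ellipsoid
    return np.linalg.norm(p - q), q
```

For example, for the point (4, 0, 0) and semi-axes (2, 1, 1) the foot point is (2, 0, 0) and the distance is 2.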

Table 1: The results of algebraic and orthogonal ellipsoid fitting. *RSS: residual sum of squares of the orthogonal distances.

DISCUSSION
Orthogonal least squares has a much sounder basis, but it is usually difficult to implement. Why are algebraic distances usually not satisfactory? The big advantage of algebraic distances is the gain in computational efficiency, because closed-form solutions can usually be obtained. In general, however, the results are not satisfactory. The function to be minimized is usually not invariant under Euclidean transformations; for example, the function with normalization K' = -1 in (5) is not invariant with respect to translations. This is an undesirable feature, because in practice we usually do not know where the best coordinate system to represent the data is. A point may contribute differently to the parameter estimation depending on its position on the conic; if a priori all points are corrupted by the same amount of noise, it is desirable for them to contribute in the same way Zhang (1997). More importantly, algebraic methods have an inherent curvature bias: data corrupted by the same amount of noise will misfit unequally at different curvatures Ray and Srivastava (2008). Our experience tells us that if the coordinates of the given points consist of large numbers, this causes bad conditioning. Therefore, before fitting, the given coordinates should be shifted to their center of gravity, and after the fitting operation the coordinates of the ellipsoid's center must be shifted back to the previous position.
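The conditioning effect of the centroid shift can be checked directly on the algebraic design matrix; the following is a hypothetical illustration with synthetic coordinates carrying a large offset, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# A point cloud of unit spread whose coordinates all sit near 10000.
pts = rng.normal(0.0, 1.0, (50, 3)) + 1.0e4

def design(P):
    """Nine-column algebraic design matrix of equation (6)."""
    x, y, z = P.T
    return np.column_stack([x * x, y * y, z * z,
                            2 * x * y, 2 * x * z, 2 * y * z,
                            2 * x, 2 * y, 2 * z])

cond_raw = np.linalg.cond(design(pts))
cond_centred = np.linalg.cond(design(pts - pts.mean(axis=0)))
# Shifting to the centre of gravity reduces the condition number
# by several orders of magnitude.
```

After the fit in the shifted frame, the estimated ellipsoid center is simply translated back by the centroid.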

CONCLUSION
In this paper, we have studied orthogonal ellipsoid fitting. The problem of fitting an ellipsoid is encountered frequently in image processing, face recognition, computer games, and geodesy (determining a more realistic Earth ellipsoid), etc. The paper has presented a new method of orthogonal ellipsoid fitting. The new method relies on solving an overdetermined system of nonlinear equations with the L-M method. In conclusion, the presented method may be considered fast, accurate and reliable, and may be successfully used in other areas. The presented orthogonal fitting algorithm can be applied easily to the biaxial ellipsoid and the sphere, and also to other surfaces such as the paraboloid, hyperboloid, etc.

Figure 2: Shifted, oriented ellipsoid.

where
Ā (n×u) : design matrix, whose ith row corresponds to data point i,
v_u = [A B C D E F G H I]^T : unknown conic parameters,
l_n = [1 1 1 ... 1]^T : unit (right-side) vector.
The system is solved easily in the LS sense, e.g. by MATLAB. If there are differences in weights or correlations between the given data points, the weight matrix P = K_ll^-1 is added to the solution, where K_ll is the n×n variance-covariance matrix of the data points. The residual (or correction) vector is then computed, and LS optimization minimizes its (weighted) squared norm.
For the numerical application, 12 triplets of (x, y, z) Cartesian coordinates were produced.