FRACTIONAL ORDER LOG BARRIER INTERIOR POINT ALGORITHM FOR POLYNOMIAL REGRESSION IN THE ℓp-NORM

ABSTRACT. Fractional calculus is the branch of mathematics that studies the various ways of generalizing the derivative and the integral of a function to noninteger order. Recent studies in the literature have confirmed the importance of fractional calculus for minimization problems. However, the study of fractional calculus in interior point methods for solving optimization problems is still new. In this study, inspired by applications of fractional calculus in many fields, the so-called fractional order log barrier interior point algorithm is developed by replacing certain integer-order derivatives with the corresponding fractional ones in the first order Karush-Kuhn-Tucker optimality conditions, in order to solve polynomial regression models in the ℓp-norm for 1 < p < 2. Finally, numerical experiments are performed to illustrate the proposed algorithm.


INTRODUCTION
It is fundamentally important to make predictions based upon scientific data. The problem of fitting curves to data points has many practical applications Bard (1974); Sevaux & Mineur (2007); Chatterjee & Hadi (2012).
Consider a set of m data points in ℝ²: (a_1, b_1), (a_2, b_2), …, (a_m, b_m), where a_i is an argument value and b_i a corresponding dependent value, with a_i ≠ a_j for all i ≠ j. The curve fitting procedure tries to build a linear or nonlinear function y = f(x), defined for all possible choices of x, that approximately fits the data set. The curves fitted to the data by f are most often chosen to be polynomials Süli & Mayers (2003).
Let y = f(x) be a polynomial function of degree n − 1 of the form f(x) = x_0 + x_1 x + … + x_{n−1} x^{n−1}, the candidate function to fit the data. The procedure to fit the polynomial to the data in the ℓp-norm determines the vector x = (x_0, x_1, x_2, …, x_{n−1})⊺ ∈ ℝⁿ, where the superscript ⊺ denotes transpose, that minimizes the ℓp-norm of the residual error as follows:

min_{x ∈ ℝⁿ} ∥b − Ax∥_p,    (1)

where ∥·∥_p denotes the ℓp-norm, b = (b_1, b_2, …, b_m)⊺ ∈ ℝᵐ and A ∈ ℝ^{m×n} is a Vandermonde matrix, with rank(A) = n, whose ith row is (1, a_i, a_i², …, a_i^{n−1}). This is a nonlinear (or linear) regression model if n > 2 (or n = 2). If m ≤ n, there is an (n − 1)th degree polynomial satisfying f(a_i) = b_i (i = 1, 2, …, m). If m > n, the problem cannot be solved exactly and one needs to find the vector x in (1).
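As a minimal sketch of this setup (using a small hypothetical data set, not the interest-rate data of the experiments), the Vandermonde matrix and the ℓp residual error of (1) can be formed as follows:

```python
import numpy as np

# Hypothetical data set (a_i, b_i), i = 1, ..., m, with distinct a_i.
a = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
b = np.array([1.0, 1.4, 2.1, 2.9, 4.2])

n = 3  # fit a polynomial of degree n - 1
# Vandermonde matrix A: the ith row is (1, a_i, a_i^2, ..., a_i^{n-1}).
A = np.vander(a, N=n, increasing=True)

def lp_residual(x, A, b, p):
    """l_p-norm of the residual error b - Ax, as minimized in (1)."""
    return np.sum(np.abs(b - A @ x) ** p) ** (1.0 / p)
```

For p = 2 this reduces to the ordinary least-squares residual norm.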
In many industrial applications, missing data or anomalous values can arise from errors of measurement instruments during data generation. In these cases, polynomial regression models in the ℓp-norm, with p ≠ 2, are more robust Forsythe (1972). It is important to choose an appropriate value for p, and several criteria for the choice of p have been studied Rice & White (1964).
The calculus of integrals and derivatives of arbitrary order, known as fractional calculus, has been conceptualized in connection with the infinitesimal calculus since 1695 Oldham & Spanier (1974). The goal of this study is to investigate the fractional order log barrier interior point algorithm, involving the Caputo fractional derivative. It is based on replacing certain integer-order derivatives with the corresponding fractional ones in the first order optimality conditions for solving polynomial regression models in the ℓp-norm for 1 < p < 2. Functions of the form g(x) = (x + κ)^p, with κ > 0 and 1 < p < 2, arise when solving problem (1), according to the approach discussed throughout this study. The Caputo fractional derivative, in addition to generalizing the integer order derivative, is a useful tool for differentiating functions such as g(x): since p lies in the interval 1 < p < 2, the exponent is a noninteger number.
This paper is organized as follows. Preliminary concepts of fractional calculus are presented in Section 2. In Section 3, the fractional order log barrier interior point algorithm for solving polynomial regression models in the ℓp-norm is discussed. Numerical experiments are performed to illustrate the proposed algorithm in Section 4, and Section 5 contains the conclusions.

PRELIMINARY CONCEPTS OF FRACTIONAL CALCULUS
Some basic concepts and definitions involving special functions and the Caputo fractional derivatives will be presented in this section.
Definition 2. The beta function B(z, w) is defined by Erdélyi et al. (1981); Andrews et al. (1999):

B(z, w) = ∫₀¹ t^{z−1} (1 − t)^{w−1} dt,  Re(z) > 0, Re(w) > 0.

It is connected with the gamma function by the following relation:

B(z, w) = Γ(z) Γ(w) / Γ(z + w).

The fractional calculus basically defines several integral and derivative operators of arbitrary order. The Caputo fractional derivative is one of these fractional derivative operators Kilbas et al. (2006).
Definition 3. The Caputo left-sided fractional derivative with respect to x, of order 0 < α < 1, of a function f is given by

(C D^α_{a+} f)(x) = (1 / Γ(1 − α)) ∫_a^x f′(t) (x − t)^{−α} dt.    (7)

Note that the Caputo fractional derivative is a nonlocal operator: it depends on the choice of the order α, on the function f and the point x, and it also accounts for the total effect of f over the interval [a, x]. Usually, this is called the memory effect.
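For the power functions that appear later, the Caputo derivative admits a simple closed form: for f(t) = t^p with p > 0 and lower limit a = 0, one has C D^α f(x) = Γ(p + 1)/Γ(p + 1 − α) · x^{p−α}. A minimal numerical sketch of this standard formula (not the paper's implementation):

```python
from math import gamma

def caputo_power(p, alpha, x):
    """Caputo fractional derivative of order alpha of f(t) = t**p
    (lower limit a = 0), evaluated at x > 0:
    Gamma(p + 1) / Gamma(p + 1 - alpha) * x**(p - alpha)."""
    return gamma(p + 1.0) / gamma(p + 1.0 - alpha) * x ** (p - alpha)
```

At α = 1 this reduces to the classical derivative p·x^{p−1}, since Γ(p + 1)/Γ(p) = p.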

The integral (9) under this change of variables and by means of (4)-(6) takes the form

FRACTIONAL ORDER LOG BARRIER INTERIOR POINT ALGORITHM
Raising Φ(x) = ∥b − Ax∥_p to the power p, the problem (1) can be rewritten in the alternative form

min_{x ∈ ℝⁿ} ∥b − Ax∥_p^p.    (10)

The problem (10) is similar to the following nonlinear optimization problem:

min ∥r∥_p^p  subject to  Ax + r = b,    (11)

where r = (r_1, r_2, …, r_m)⊺ ∈ ℝᵐ is the residual vector of the regression and the ℓp-norm is defined by

∥r∥_p = ( ∑_{i=1}^{m} |r_i|^p )^{1/p}.    (12)

The parameter values p most commonly used in problem (11) lie in the interval 1 < p < 2. Considering the unrestricted residual term r_i as the difference between two nonnegative variables u_i and v_i, r_i = u_i − v_i for all i = 1, 2, …, m, then |r_i| = u_i + v_i and the problem (11) can be converted into a convex programming problem Charnes et al. (1955); Cantante et al. (2012):

min ∑_{i=1}^{m} (u_i + v_i)^p  subject to  Ax + u − v = b,  u, v ∈ ℝᵐ₊,    (13)

where ℝᵐ₊ denotes the set of m-dimensional nonnegative vectors. Interior point methods are widely used to solve convex optimization problems because of their good performance in practice Biegler (2010); Gondzio (2012); Lilian et al. (2016). By adding a logarithmic barrier function to the objective function in (13), the barrier problem is given by

min ∑_{i=1}^{m} (u_i + v_i)^p − µ ∑_{i=1}^{m} (ln u_i + ln v_i)  subject to  Ax + u − v = b,    (14)

where µ > 0 is the barrier parameter. An optimal solution of (13) can be found by solving a series of barrier problems of the form (14) while µ decreases to zero. The Lagrangian function associated with the problem (14) is

L(x, λ, u, v) = ∑_{i=1}^{m} (u_i + v_i)^p − µ ∑_{i=1}^{m} (ln u_i + ln v_i) + λ⊺ (Ax + u − v − b),

where λ = (λ_1, λ_2, …, λ_m)⊺ ∈ ℝᵐ is a Lagrange multiplier vector.
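The variable split and the barrier objective above can be sketched as follows (hypothetical helpers, assuming the split u_i = max(r_i, 0), v_i = max(−r_i, 0), which satisfies r = u − v and |r_i| = u_i + v_i):

```python
import numpy as np

def split_residual(r):
    """Split r = u - v with u, v >= 0 and u_i + v_i = |r_i|."""
    u = np.maximum(r, 0.0)
    v = np.maximum(-r, 0.0)
    return u, v

def barrier_objective(u, v, p, mu):
    """Objective of (13) plus the logarithmic barrier terms, as in (14).
    Requires strictly positive u and v (interior points)."""
    return np.sum((u + v) ** p) - mu * np.sum(np.log(u) + np.log(v))
```

The barrier terms blow up as any u_i or v_i approaches zero, which is what keeps the iterates in the interior of the feasible region.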
Let φ and ϕ be the functions given by

φ(u, v) = ∑_{i=1}^{m} (u_i + v_i)^p − µ ∑_{i=1}^{m} (ln u_i + ln v_i)  and  ϕ(x, λ, u, v) = λ⊺ (Ax + u − v − b),

and then the Lagrangian function can be written as L = φ + ϕ. The fractional order log barrier interior point algorithm, involving the Caputo fractional derivative, is based on replacing certain integer-order derivatives with the corresponding fractional ones in the first order optimality conditions (∇L = ∇φ + ∇ϕ = 0), yielding the system

∇_x L = 0,  ∇_λ L = 0,  ∇^α_u L = 0,  ∇^α_v L = 0,    (19)

where ∇_x, ∇_λ, ∇_u and ∇_v represent the gradient with respect to x, λ, u and v, respectively, and ∇^α_u and ∇^α_v represent the fractional order gradient with respect to u and v, respectively, given by the Caputo fractional derivative (7).
By Property 1, for i = 1, 2, …, m, (g_α)_i is given by the Caputo fractional derivative of the corresponding power function. The nonlinear system of equations (19) can be rewritten in the alternative form (21). For given µ > 0, if α = 1 in the system (21), then it recovers the first order optimality conditions, since the fractional gradient reduces to the classical one as α → 1. When the classical gradient is replaced by the fractional one (0 < α < 1), if the system (21) has a solution x_α, it can be called a fractional solution, and x_α tends to the classical solution as α → 1. For given µ > 0 and 0 < α ≤ 1, suppose that the system (21) has a solution. Given an initial point (x⁰, λ⁰, u⁰, v⁰) with u⁰, v⁰ ∈ ℝᵐ₊₊, where ℝᵐ₊₊ denotes the set of m-dimensional positive vectors, and Ax⁰ + u⁰ − v⁰ = b, by applying the Newton method to the system (21), the search direction (∆x, ∆λ, ∆u, ∆v) can be found by solving a Newton system involving the identity matrix I_m of order m and a vector h_α ∈ ℝᵐ whose ith component, for i = 1, 2, …, m, is given in equation (24). The component (h_α)_i is obtained by evaluating the derivative and taking (4) into account: Γ(p + 1 − α) can be rewritten in the form (p − α) Γ(p − α), and so (h_α)_i in equation (24) is obtained.
For given µ > 0, the new barrier parameter µ̂ is updated, following Wright (1996), so that µ decreases at each iteration; a factor β > 1 is introduced to control its decay and to improve the convergence process.
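A common choice for this kind of update (a hypothetical sketch only, assuming the simple geometric decay µ̂ = µ/β; the paper's exact display is not shown here) is:

```python
def update_mu(mu, beta):
    """Decrease the barrier parameter: with beta > 1, repeated updates
    drive mu toward zero, as required by the barrier scheme."""
    assert beta > 1.0, "beta must exceed 1 so that mu decays"
    return mu / beta
```

Larger β makes µ shrink faster, at the risk of leaving the central path too aggressively.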
To keep the next iterates û and v̂ away from the boundary of the feasible region, the step size α_uv is shortened by introducing a parameter σ, with 0 < σ < 1, often chosen close to 1, as follows Biegler (2010); Vanderbei (2020). Given a point (x, λ, u, v), the new iterate (x̂, λ̂, û, v̂) is given by

(x̂, λ̂, û, v̂) = (x, λ, u, v) + α_uv (∆x, ∆λ, ∆u, ∆v).

Termination criteria
The fractional order log barrier interior point algorithm terminates if at least one of the two following convergence criteria is satisfied, where m is the number of rows of the matrix A, ε_2 and ε_3 are positive values close to zero, and N̂ denotes the new iterate; the quantity ∇^α φ + ∇ϕ is obtained by considering the new iterates x := x̂, λ := λ̂, u := û, v := v̂ and µ := µ̂ in equation (19).
In particular, when α = 1, the fractional order log barrier interior point algorithm recovers the classical log barrier interior point algorithm.
Algorithm 1 Fractional order log barrier interior point algorithm for polynomial regression models in the ℓ p −norm.

NUMERICAL EXPERIMENTS
In order to illustrate the proposed fractional order log barrier interior point algorithm for solving polynomial regression models in the ℓp-norm, numerical experiments were performed to compare it with the classical log barrier interior point algorithm. The implementation of the fractional order log barrier interior point algorithm was run under Windows 10 and Matlab (R2016a) on a desktop with a 2.20 GHz Intel Core i5-5200 central processing unit (CPU) and 4 GB of random-access memory (RAM).
A data set containing daily interest rates observed over 40 years was used for the analysis of polynomial regressions. The data set contains 10958 observations (a_i, b_i), where a_i represents the day of the week for a given date and b_i the interest rate (in percentage) for the specific day a_i. The a_i values were normalized to the interval [0, 1] of the real line ℝ to avoid numerical stability problems in the algorithm Oliveira & Cantante (2004); Cantane (2004). Figure 1 shows the data set.
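The normalization of the abscissae to [0, 1] mentioned above amounts to a min-max map; a straightforward sketch:

```python
import numpy as np

def normalize(a):
    """Map the values a_i onto the interval [0, 1] (min-max scaling),
    as done for the dates of the interest-rate data set."""
    a = np.asarray(a, dtype=float)
    return (a - a.min()) / (a.max() - a.min())
```

Scaling the abscissae keeps the powers a_i^{n−1} in the Vandermonde matrix from growing large, which improves its conditioning.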
For the numerical results below, a comparison of the fractional order log barrier interior point algorithm for different values of the order α considers: "It.", the number of iterations; "Res. Err.", the residual error, given by ∥r∥ p = ∥b − Ax∥ p ; and "Time (s)", the CPU time in seconds.
The number of iterations and the value of the residual error for different values of α and p, obtained from the fractional order log barrier interior point algorithm for linear regression (n = 2) in the ℓp-norm, are shown in Tables 1-3. If the algorithm does not converge (divergence of Newton's method for solving the nonlinear system of equations (21)), or if the algorithm fails (ill-conditioned matrix), then the results are shown in Tables 1-3 as "-" or "*", respectively. When α = 1, the results of the classical log barrier interior point algorithm are recovered. The algorithm failed for α = 0.5 and p = 1.1, as can be seen in Table 1, but, for example, for α = 0.55 and p = 1.1, the algorithm converges.
According to Tables 1-3, the fractional order log barrier interior point algorithm for linear regression models in the ℓp-norm (p = 1.1, 1.2, …, 1.7) yields the smallest residual error when α = p − 0.8. This is a relationship between p and α, which can also be written as p − α = 0.8. In these cases, the algorithm takes more iterations until convergence (see Table 3), and the residual error is very close to the residual error obtained when α = 1.
The number of iterations, the residual error and the CPU time for (n − 1)th degree polynomial regressions (n = 2, 5, 10, 15), in the ℓ 1.3 −norm, for different values of α, obtained from the fractional order log barrier interior point algorithm, are given in the following tables.
The smallest residual errors for n = 2 and n = 5 were obtained with a greater number of iterations when α = 0.5 (see Table 4), while for n = 10 and n = 15 (see Table 5), the residual errors were smallest for α = 0.4.
The performance of the fractional order log barrier interior point algorithm appears to be computationally consistent. For example, the values of the residual error at each iteration k for linear regression in the ℓ 1.3 −norm are shown in Figure 2.
In Figure 2, one can observe that the use of any of the fractional order derivatives (α = 0.4, 0.5, …, 0.9) produces smaller residual errors than the use of the integer order derivative (α = 1). The use of the fractional derivatives does not noticeably affect the running time per iteration, which is similar in all cases. The approximate solution for the (n − 1)th degree polynomial regression in the ℓp-norm, obtained at the kth iteration of the fractional order log barrier interior point algorithm and given by x^k = (x^k_0, x^k_1, x^k_2, …, x^k_{n−1})⊺, provides the coefficients of the (n − 1)th degree polynomial that approximately fits the data set, f(x) = x_0 + x_1 x + … + x_{n−1} x^{n−1}. In particular, among the approximate solutions x^k for the (n − 1)th degree polynomial regressions (n = 2, 5, 10, 15) in the ℓ_{1.3}-norm, the one with smallest residual error for n = 2 is

x^20 = (x^20_0, x^20_1)⊺ = (4.5E+00, 5.9E+00)⊺  (n = 2; α = 0.5),

i.e., at the 20th iteration of the algorithm with α = 0.5, the linear regression (n = 2) provides the polynomial coefficients from the vector x^20 above. So, the linear regression is given by f(x) = 4.5 + 5.9 x.
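Evaluating the fitted polynomial from a coefficient vector such as x^20 above is direct; a small sketch:

```python
def fitted_poly(coeffs, x):
    """Evaluate f(x) = x_0 + x_1*x + ... + x_{n-1}*x**(n-1)
    from the coefficient vector (x_0, ..., x_{n-1})."""
    return sum(c * x ** k for k, c in enumerate(coeffs))
```

With the coefficients (4.5, 5.9) from x^20, fitted_poly([4.5, 5.9], x) is the fitted line f(x) = 4.5 + 5.9 x.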

CONCLUSIONS
The fractional order log barrier interior point algorithm for polynomial regression models in the ℓp-norm (1 < p < 2), obtained by replacing certain integer-order derivatives with the corresponding fractional ones in the first order optimality conditions, was investigated in this paper. The algorithm appears to be computationally consistent, although, depending on the fractional order α, it may not converge or may fail. The numerical experiments showed that the use of fractional derivatives can be beneficial for solving optimization problems. Further studies on this subject could be undertaken in the future.