Closed balls for interpolating quasi-polynomials
Jiajin Wen I; Sui Sun Cheng II
I College of Mathematics and Information Science, Chengdu University, Chengdu 610106, P.R. China. E-mail: wenjiajin623@163.com
II Department of Mathematics, Tsing Hua University, Hsinchu, Taiwan 30043, R.O. China. E-mail: sscheng@math.nthu.edu.tw
ABSTRACT
The classic interpolation problem asks for polynomials that fit a set of given data. In this paper, quasi-polynomials are considered as interpolating functions passing through a set of spatial points. Existence and uniqueness are obtained by means of generalized Vandermonde determinants. By means of several estimates related to these determinants, we are also able to find closed balls, with arbitrarily prescribed centers, that enclose the approximating curves. By choosing proper centers based on the observed spatial points, these balls may lead to applications such as satellite tracking and control.
Mathematical subject classification: 41A05.
Key words: interpolation, reference point, error bound, quasi-polynomial.
1 Introduction
Given a set of m + 1 points (xi, yi), i = 0, ..., m, where x0, x1, ..., xm are mutually distinct, the classical interpolation problem asks for a polynomial p = p(x) of degree at most m such that
p(xi) = yi, i = 0,1,2,...,m.
The polynomial that does the job exists and is unique, and is called the Lagrange interpolating polynomial. Together with this existence and uniqueness theorem, there is now a fairly complete theory (see e.g. Davis [10]) relating Vandermonde matrices, divided differences, error bounds, etc., to the classical interpolation problem.
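The classical setup can be made concrete with a short numerical sketch (ours, not part of the paper) that evaluates the Lagrange interpolating polynomial directly from the data:

```python
def lagrange_interpolate(xs, ys, u):
    """Evaluate the unique polynomial of degree <= m through (xs[i], ys[i]) at u."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Lagrange basis polynomial l_i(u) = prod_{j != i} (u - x_j) / (x_i - x_j)
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (u - xj) / (xi - xj)
        total += yi * li
    return total

# Data sampled from 1 + u + u^2; the degree-2 interpolant reproduces it exactly.
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]
print(lagrange_interpolate(xs, ys, 1.5))  # 4.75
```

Each basis polynomial l_i satisfies l_i(x_j) = 1 if i = j and 0 otherwise, which is what forces p(x_i) = y_i.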
In view of the many applications of the concept of interpolation, it is of interest to consider different types of interpolating functions. Among many others, in [5], "quasi-polynomials" are considered as candidates for interpolating functions. More specifically, let R and C be the sets of real and complex numbers respectively. Let b1, b2, ..., bn ∈ C and α1, α2, ..., αn ∈ R such that 0 = α1 < α2 < ... < αn. The function f : [0, ∞) → C defined by

f(x) = b1 x^α1 + b2 x^α2 + ... + bn x^αn
is called an (α1, α2, ..., αn)-polynomial (it is called a generalized polynomial in [5], but it is better to avoid this term since there are many generalizations of polynomials). It is shown that, given a set of data pairs (x1, y1), (x2, y2), ..., (xn, yn), where 0 < x1 < x2 < ... < xn and y1, ..., yn ∈ C, there is then a unique (α1, α2, ..., αn)-polynomial f that satisfies

f(xi) = yi, i = 1, 2, ..., n.
Once existence is shown, it is then important to investigate the properties of the interpolating polynomial. Several properties are obtained in [5]. In particular, a bound for |f(u)|, where u ∈ [x1,xn], is obtained in [5].
In this paper, we are interested in the approximation of a spatial curve (described by a vector function) by (α1, α2, ..., αn)-polynomials, and in their 'distances' from a reference point. More specifically, suppose a space curve in Rm is described by a vector function g : [x1, xn] → Rm. If g is unknown, but the data pairs (x1, y1), (x2, y2), ..., (xn, yn), where 0 < x1 < x2 < ... < xn and y1 = g(x1), ..., yn = g(xn) ∈ Rm, are available, we are interested in the existence of a function f : [0, ∞) → Rm of the form

f(u) = d1 u^α1 + d2 u^α2 + ... + dn u^αn,

where d1, d2, ..., dn ∈ Rm, such that the condition
f(xi) = yi, i = 1,2,...,n,
is satisfied, as well as in upper bounds for ||f(u) - ȳ||, where ||·|| is the Euclidean norm on Rm and ȳ is a given vector in Rm.
We plan to do the following. In the next section, we will take care of the existence and uniqueness of the desired function by introducing generalized Vandermonde determinants. Then we will state the main theorem of our paper. In Section 3, we will derive several preparatory results. Then in Section 4, our main theorem is proved. The final section is devoted to additional remarks and illustrative examples.
2 Preliminary results
To facilitate the discussion, we recall several definitions and results. Throughout the rest of our discussion, we assume that n ≥ 2 (to avoid trivial cases). Let Rn be the standard set of real n-vectors endowed with the usual linear structure and the Euclidean norm. An n-vector in Rn is indicated by x, a, b, c, α, β, etc. Given an n-vector, say x, its components are indicated by x1, x2, ..., xn, so that
x = (x1,x2,...,xn)†,
where the dagger indicates transposition. The difference vector Δx is defined by
Δx = (( Δx)1,( Δx)2,...,( Δx)n-1)† = (x2 - x1, x3 - x2, ..., xn - xn-1)†.
For the sake of convenience, we also denote the i-th component ( Δx)i of Δx by the forward difference Δxi.
Several subsets of Rn will be used extensively. For this reason, we set (cf. [1-9])

Ωn = {x ∈ Rn : 0 < x1 < x2 < ... < xn}
and
Another convenient notation has to do with the substitution of a component of a vector x = (x1, ..., xn)†. If the j-th component of x is replaced by u, we will denote the resulting vector by x(j)(u), that is,
x(j)(u) = (x1, ..., xj-1, u, xj+1, ..., xn)†.
Given x ∈ Rn, recall that the Vandermonde determinant in x is defined by

vn(x) = ∏ over 1 ≤ i < j ≤ n of (xj - xi).
Given x ∈ Rn and α ∈ Rn, we may extend the definition of the Vandermonde determinant as follows:

Vn(x, α) = det( xj^αi ), 1 ≤ i, j ≤ n.
In the above, we need to make sure that each entry of the determinant is well defined. Such is the case when xj ≥ 0 and αi ≥ 0, where we adopt the convention 00 = 1.
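For small n, the determinant Vn(x, α) can be evaluated by brute force; the sketch below (our own, with assumed function names) checks that the integer-exponent case reduces to the classical Vandermonde product, and that the determinant is positive for increasing x and α (cf. [3]):

```python
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula (adequate for small matrices)."""
    n = len(M)
    total = 0.0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):                # count inversions for the sign
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1.0
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def generalized_vandermonde(x, alpha):
    """V_n(x, alpha) = det(x_j ** alpha_i), rows indexed by the exponents."""
    return det([[xj ** ai for xj in x] for ai in alpha])

x = [1.0, 2.0, 3.0]
# Integer exponents (0, 1, 2): classical case, equal to (2-1)(3-1)(3-2) = 2.
print(generalized_vandermonde(x, [0.0, 1.0, 2.0]))
# Non-integer increasing exponents: the determinant is still positive.
print(generalized_vandermonde(x, [0.0, 0.5, 1.7]) > 0)
```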
By means of these notations, given d1, ..., dn ∈ Rm and α ∈ n, a generalized α-polynomial is a function f : [0, ∞) → Rm defined by

f(u) = d1 u^α1 + d2 u^α2 + ... + dn u^αn,
where we have employed the fact that 00 = 1. Given x ∈ Ωn and y1, y2, ..., yn ∈ Rm, if we try to find a generalized α-polynomial, where α ∈ n, that satisfies

f(xi) = yi, i = 1, 2, ..., n, (4)
then we are led to a linear system of equations in the variables d1,...,dn. Solving this system of vector equations, we easily see that
Since Vn(x, α) > 0 for x ∈ Ωn and α ∈ n (see [3, p. 212, Theorem 1]), we see further that the desired α-polynomial satisfying (4) can be expressed as

f(u) = Σ from j = 1 to n of ( Vn(x(j)(u), α) / Vn(x, α) ) yj. (5)
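Since the interpolation conditions (4) form a linear system with coefficient matrix (xj^αi), the coefficients di can also be computed numerically. A minimal sketch with hypothetical data and our own function names (not the paper's notation):

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def alpha_interpolation_coeffs(xs, ys, alpha):
    """Coefficients d_i so that f(u) = sum_i d_i * u**alpha_i matches the data."""
    A = [[xj ** ai for ai in alpha] for xj in xs]
    return solve_linear(A, ys)

# Hypothetical scalar data (m = 1) with exponents alpha = (0, 1, 2.5).
xs, ys, alpha = [0.5, 1.0, 2.0], [1.0, 2.0, 5.0], [0.0, 1.0, 2.5]
d = alpha_interpolation_coeffs(xs, ys, alpha)
f = lambda u: sum(di * u ** ai for di, ai in zip(d, alpha))
print([round(f(xj), 6) for xj in xs])  # recovers [1.0, 2.0, 5.0]
```

The system is uniquely solvable precisely because the generalized Vandermonde determinant of the coefficient matrix is positive.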
Now that the existence and uniqueness of the desired interpolating polynomial is out of the way, the main theorem to be shown will be the following.
Theorem 1. Let x ∈ Ωn and α ∈ n, as well as y1, ..., yn ∈ Rm, be given. Then the generalized interpolating α-polynomial f in (5) satisfies
for any u ∈ [x1, xn] and any ȳ ∈ Rm, where θ = min{Δαi : 1 ≤ i ≤ n - 1}.
In the above and in later discussions, we employ the greatest integer function [·].
3 Preparatory lemmas
We first recall the following result, which was already used in deriving the α-interpolating polynomial.
Lemma 1. ([3, p. 212, Theorem 1]). Let x ∈ Ωn and α ∈ n. Then Vn(x, α) > 0.
Lemma 2. ([5, p. 1047, Lemma 2.2]). Let x, α ∈ Ωn and β = (β1, β2, ..., βn-1), where βj = αj+1 - α1 - 1 for j = 1, 2, ..., n - 1. Then
where t = (t1, t2, ..., tn-1).
Lemma 3. Let x, α ∈ Ωn. If Δαj ≥ 1 for j = 1, 2, ..., n - 1, then
where dn = max{1, αn - αn-1 - 1}.
Proof. When n = 2, we may see from (7) that
If 0 ≤ α2 - α1 - 1 ≤ 1, then d2 = 1 and the function q(t) = t^(α2 - α1 - 1) is concave over the interval [x1, x2]. In view of Hadamard's inequality (see e.g. [2]) and the A-G inequality, we see that
If α2 - α1 - 1 ≥ 1, then d2 = α2 - α1 - 1 and the function q(t) = t^(α2 - α1 - 1) is convex over the interval [x1, x2]. Again, from Hadamard's inequality and the A-G inequality,
These show that (8) is valid for n = 2.
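Hadamard's (Hermite-Hadamard) inequality used above states that a convex q on [a, b] satisfies q((a+b)/2) ≤ (b-a)^(-1) ∫ q ≤ (q(a)+q(b))/2, with both inequalities reversed for concave q. A quick numerical check (our own sketch, not the paper's computation):

```python
def hermite_hadamard_bounds(q, a, b, steps=100000):
    """Return (midpoint value, average value of q on [a, b], endpoint average)."""
    h = (b - a) / steps
    # midpoint-rule approximation of (1/(b-a)) * integral_a^b q(t) dt
    avg = sum(q(a + (i + 0.5) * h) for i in range(steps)) * h / (b - a)
    return q((a + b) / 2), avg, (q(a) + q(b)) / 2

# q(t) = t**1.3 is convex on [1, 2] (power >= 1), so mid <= avg <= end.
mid, avg, end = hermite_hadamard_bounds(lambda t: t ** 1.3, 1.0, 2.0)
print(mid <= avg <= end)  # True
```

Taking q(t) = t to a power between 0 and 1 instead gives a concave example with the chain of inequalities reversed.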
We now assume by induction that (8) holds when n is replaced by n - 1 (where n - 1 ≥ 2). By our assumption that Δαj ≥ 1 for j = 1, 2, ..., n - 1, we see that
and
Hence,
Furthermore, if xj < tj < xj+1 for j = 1, 2, ..., n - 1, then
Next, if we take α = (0, 1, 2, ..., n - 1) in (7), we see that
and hence
With the above information at hand, we may now see that
that is,
Since x ∈ Ωn and dn ≥ 1, we also have
and
Thus from (14), we may further assert that
The proof is complete.
Lemma 4. ([5, p. 1050, Lemma 2.4]). Let x ∈ Rn. Then
Lemma 5. ([5, p. 1051, Lemma 2.5]). Let
Then
and
Lemma 6. Let x ∈ Ωn. Then for u ∈ [x1, xn] and r ∈ {1, 2, ..., n},
Proof. Since u ∈ [x1, xn], there is s ∈ {1, 2, ..., n - 1} such that xs ≤ u ≤ xs+1. Let us move the r-th column of the determinant vn(x(r)(u)) and 'insert' it between the s-th and the (s + 1)-th columns of vn(x(r)(u)). The resulting determinant will be denoted by vn(x*(r)(u)), where
Then vn(x*(r)(u)) > 0. Furthermore, in view of the A-G inequality [6-9] and (15),
where λk is defined in Lemma 5. Next we assert that
Indeed, if s = r - 1 or s = r, then
x*(r)(u) = x(r)(u).
If 2 ≤ r ≤ n - 1, then from Lemma 5,
If r = 1, then
If r = n, then
Next, suppose s ≤ r - 2 or s ≥ r + 1. Let
Since 1 ≤ r ≤ n and 1 ≤ s ≤ n - 1, we have 1 ≤ q ≤ n - 2. If 2 ≤ r ≤ n - 1, then 1 ≤ p ≤ n - 1, and hence by Lemma 5,
If r = 1 or r = n, from Lemma 5,
The inequality (20) is thus proved.
By combining (19) and (20), we see that
The proof of (18) is complete.
Lemma 7. Let x, α ∈ Ωn such that Δαj ≥ 1 for j = 1, 2, ..., n - 1. Then for any u ∈ [x1, xn] and j ∈ {1, 2, ..., n},
Proof. Since x1 ≤ u ≤ xn, there is s ∈ {1, 2, ..., n - 1} such that 0 < xs ≤ u ≤ xs+1. Let us move the j-th column of the determinant Vn(x(j)(u), α) and 'insert' it between the s-th and the (s + 1)-th columns of Vn(x(j)(u), α). The resulting determinant will be denoted by Vn(x*(j)(u), α), where
Since
by Lemma 1, we see that Vn(x*(j)(u), α) > 0. Furthermore, in view of the following simple fact,
we may see that
Thus, by (22), (23), (8) and (18), we have
The proof is complete.
4 Proof of Theorem 1
First we point out that the generalized α-polynomial in Theorem 1 satisfies

||f(u) - ȳ|| ≤ Σ from j = 1 to n of ( |Vn(x(j)(u), α)| / Vn(x, α) ) ||yj - ȳ|| (24)

for any ȳ ∈ Rm. Indeed, the case where n = 2 is easy. Suppose n ≥ 3. Since α ∈ n, we see that
By Laplace expansion, we then have
Hence
We consider two cases: θ ≥ 1 and 0 < θ < 1.
In the former case, Δαj ≥ 1 for j = 1, 2, ..., n - 1. Therefore, by (24) and (21),
where
In the latter case, we have
Let
and
Then
where
Our Theorem is thus proved.
As an immediate consequence of our Theorem and the inequality ||f(u)|| ≤ ||ȳ|| + ||f(u) - ȳ||, we have the following
Corollary 1. Under the assumptions of Theorem 1, we have
We remark that Example 5.2 in [5, p. 1058] shows that the equality sign in (6) may hold.
5 Remarks and Examples
In Theorem 1, the number θ = min{Δαi : 1 ≤ i ≤ n - 1} can be an arbitrary positive number. However, in the case when the powers α1, α2, ..., αn are nonnegative integers,

θ = min { αj+1 - αj : j = 1, 2, ..., n - 1 } ≥ 1.

In particular, if there are two consecutive powers αi and αi+1 such that αi+1 - αi = 1, then θ = 1. Such a choice can simplify matters since
In another direction, the number θ may be an arbitrary positive number while the sequence α1, α2, ..., αn in Theorem 1 is uniform in the sense that

Δαi = h, i = 1, 2, ..., n - 1,

where h is a constant.
In [5, p. 1047, Theorem 1.1], we obtained the following result: If y ∈ Cn, x ∈ Ωn and α ∈ n, then for u ∈ [x1, xn], we have
Now we show that inequality (6) is a strengthening of inequality (26) in the case where n = 5 or n ≥ 7 and Δαi = h, i = 1, 2, ..., n - 1. To see this, take ȳ = 0 and
in (25), then
Thus it suffices to show that when n = 5 or n ≥ 7,
Indeed, note first (from [5, p. 1053, Lemma 2.8]) that
Therefore, it suffices to show (28) by showing

when n = 5 or n ≥ 7. The cases n = 5, 7, 8, 9 can be proved by evaluating Φ5, Φ7, Φ8, Φ9 directly (Φ5 ≈ 1.0833, Φ7 ≈ 1.4444, Φ8 ≈ 1.0176, Φ9 ≈ 1.1859). As for the cases where n ≥ 10, let us first note that the number
Then by the well-known fact that t > ln(1 + t) for t ∈ (-1, +∞)\{0}, we see that
Hence
as desired.
We remark that (26) has been shown in [5] when n ≥ 2 and the sequence is not necessarily uniform. However, the data points y1, y2, ..., yn were assumed there to be complex numbers, and the reference vector was taken to be 0.
While it is important to estimate the size ||f(u)|| of the interpolating α-polynomial in reference to the origin, in applications it is also important to find upper bounds for ||f(u) - ȳ||, where the reference point ȳ is an important landmark. For instance, given n vectors y1, ..., yn in Rm, the Fermat point relative to these vectors is the vector ȳ such that

||ȳ - y1|| + ||ȳ - y2|| + ... + ||ȳ - yn||
is minimized. In view of its definition, the Fermat point is clearly of practical importance. We will illustrate our result by an example in which the Fermat point is used as our reference vector.
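The Fermat point (geometric median) has no closed form in general, but it can be approximated by the classical Weiszfeld iteration. A minimal sketch (our own, under the simplifying assumption that no iterate coincides with a data point):

```python
def fermat_point(points, iters=200):
    """Approximate the Fermat point (minimizer of the sum of Euclidean
    distances to the given points) by Weiszfeld's iteration."""
    m = len(points[0])
    # start from the centroid
    y = [sum(p[k] for p in points) / len(points) for k in range(m)]
    for _ in range(iters):
        num, den = [0.0] * m, 0.0
        for p in points:
            dist = sum((p[k] - y[k]) ** 2 for k in range(m)) ** 0.5
            if dist < 1e-12:          # skip a coincident point (simplification)
                continue
            w = 1.0 / dist            # Weiszfeld weight
            den += w
            for k in range(m):
                num[k] += w * p[k]
        y = [num[k] / den for k in range(m)]
    return y

# For an equilateral triangle, the Fermat point is its centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
print([round(c, 4) for c in fermat_point(pts)])  # [0.5, 0.2887]
```

Each step is a weighted average of the data points, with weights inversely proportional to the current distances; this is exactly a fixed-point iteration for the first-order optimality condition of the sum-of-distances objective.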
Let m = 3,
p1(u) = 20[ln(1 + u)]^2.1,
p2(u) = -20(2 tan u - 2 sin u)^1.1,
p3(u) = (e^u - 1)^5.4,
and p(u) = p1(u) + p2(u) + p3(u), for u ∈ [0,1]. Let
g(u) = (u, p(u), 3u^2.1)†, u ∈ [0, 1].
Take 6 data pairs
(x1, g(x1)†)† ≈ (0, 0, 0, 0)†,
(x2, g(x2)†)† ≈ (0.2, 0.2, 0.46124, 0.10216)†,
(x3, g(x3)†)† ≈ (0.4, 0.4, 1.0338, 0.43797)†,
(x4, g(x4)†)† ≈ (0.6, 0.6, 0.30170, 1.02622)†,
(x5, g(x5)†)† ≈ (0.8, 0.8, -2.36573, 1.87763)†,
(x6, g(x6)†)† ≈ (1, 1, -1.821068, 3)†.
Since
we will try to find the interpolating function in the form
where the exponent 4.3 is chosen so that θ = 1 (see the remark at the beginning of this section). By (5), we see that
for t ∈ [0,1]. See Figure 1.
Next, the Fermat point ȳ = (p, q, r)† relative to the vectors g(x1), g(x2), g(x3), g(x4), g(x5) and g(x6) is found by minimizing
By using the computing tool Mathematica,
min H(p, q, r) ≈ H(0.48629, 0.36004, 0.55964) ≈ 7.81586,
we see that
(p, q, r)† ≈ (0.48629, 0.36004, 0.55964)†.
Finally, the true bound is given by
and
with 4.25302 - 4.06571 = 0.18731.
The above example shows that Theorem 1 offers a closed ball, with center ȳ, that contains the approximating curve described by f over the interval [x1, xn]. With ȳ properly chosen as a reference point, tracking or control of spatial flying objects may then be feasible.
We close our investigation by remarking that results similar to Theorem 1 can be established if Rm is replaced by more general linear spaces endowed with appropriate algebraic operations and compatible norms, and many methods and techniques related to the mathematical inequalities used in this article can be found in [1-3, 5-9].
Received: 20/VI/10.
Accepted: 08/II/11.
#CAM-226/10.
- [1] P.S. Bullen, D.S. Mitrinović and P.M. Vasić, Means and Their Inequalities. Reidel, Dordrecht/Boston/Lancaster/Tokyo (1988).
- [2] A.M. Fink and Z. Páles, What is Hadamard's inequality? Appl. Anal. Discrete Math., 1 (2007), 29-35. (Available at http://pefmath.etf.bg.ac.yu).
- [3] J.J. Wen and Z.H. Zhang, Vandermonde-type determinants and inequalities. AMEN, 6 (2006), 211-218.
- [4] R. Aldrovandi, Special Matrices of Mathematical Physics: Stochastic, Circulant and Bell Matrices. World Scientific (2001).
- [5] J.J. Wen and W.L. Wang, The inequalities involving generalized interpolation polynomial. Computers and Mathematics with Applications, 56 (2008), 1045-1058.
- [6] J.E. Pecaric, J.J. Wen, W.L. Wang and L. Tao, A generalization of Maclaurin's inequalities and its applications. Mathematical Inequalities and Applications, 8(4) (2005), 583-598.
- [7] J.J. Wen and Z.H. Zhang, Jensen type inequalities involving homogeneous polynomials. Journal of Inequalities and Applications, Volume 2010, Article ID 850215, 21 pages, doi:10.1155/2010/850215.
- [8] J.J. Wen and W.L. Wang, The optimization for the inequalities of power means. Journal of Inequalities and Applications, Volume 2006, Article ID 46782, 25 pages, doi:10.1155/JIA/2006/46782.
- [9] J.J. Wen and W.L. Wang, Chebyshev type inequalities involving permanents and their applications. Linear Algebra and its Applications, 422(1) (2007), 295-303.
- [10] P.J. Davis, Interpolation and Approximation. Dover (1975).
Publication Dates
Publication in this collection: 06 Jan 2012.
Date of issue: 2011.