## Computational & Applied Mathematics

*On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.30 no.3 São Carlos 2011

#### http://dx.doi.org/10.1590/S1807-03022011000300004

**Closed balls for interpolating quasi-polynomials**

**Jiajin Wen ^{I}; Sui Sun Cheng^{II}**

^{I}College of Mathematics and Information Science, Chengdu University, Chengdu 610106, P.R. China. E-mail: wenjiajin623@163.com

^{II}Department of Mathematics, Tsing Hua University, Hsinchu, Taiwan 30043, R.O. China. E-mail: sscheng@math.nthu.edu.tw

**ABSTRACT**

The classic interpolation problem asks for polynomials to fit a set of given data. In this paper, quasi-polynomials are considered as interpolating functions passing through a set of spatial points. Existence and uniqueness is obtained by means of generalized Vandermonde determinants. By means of several estimates related to these determinants, we are also able to find closed balls for any given centers that enclose the approximating curves. By choosing proper centers based on the observed spatial points, these balls may lead us to applications such as satellite tracking and control.

**Mathematical subject classification:** 41A05.

**Key words:** interpolation, reference point, error bound, quasi-polynomial.

**1 Introduction**

Given a set of *m* + 1 points (*x*_{i}, *y*_{i}), *i* = 0, ..., *m*, where *x*_{0}, *x*_{1}, ..., *x*_{m} are mutually distinct, the classical interpolation problem asks for a polynomial *p* = *p*(*x*) of degree at most *m* such that *p*(*x*_{i}) = *y*_{i}, *i* = 0, 1, 2, ..., *m*.

The polynomial that does the job exists and is unique, and is called the Lagrange interpolating polynomial. Together with this existence and uniqueness theorem, there is now a fairly complete theory (see e.g. Davis [10]) associating Vandermonde matrices, divided differences, error bounds, etc. with the classical interpolation problem.
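As a concrete illustration of the classical problem, the Lagrange interpolating polynomial can be computed by solving the Vandermonde linear system directly; the following minimal Python sketch (our own illustration, not from the paper) mirrors the existence and uniqueness argument, since the Vandermonde matrix is invertible for distinct nodes.

```python
import numpy as np

def lagrange_interpolate(xs, ys):
    """Return coefficients (c_0, ..., c_m) of the unique polynomial of
    degree at most m passing through the m+1 points (xs[i], ys[i]).

    Solves the Vandermonde system V c = y; for mutually distinct
    abscissas xs the matrix V is invertible, which is exactly the
    classical existence/uniqueness statement.
    """
    V = np.vander(xs, increasing=True)   # V[i, k] = xs[i] ** k
    return np.linalg.solve(V, ys)

def poly_eval(coeffs, u):
    """Evaluate c_0 + c_1 u + ... + c_m u**m."""
    return sum(c * u**k for k, c in enumerate(coeffs))

# Interpolate three points lying on 1 + u + u**2 with a quadratic.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])
c = lagrange_interpolate(xs, ys)   # recovers (1, 1, 1)
```

Solving the Vandermonde system is numerically ill-conditioned for many nodes; it is used here only because it parallels the theory in the text.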

In view of the many applications of the concept of interpolation, it is of interest to consider different types of interpolating functions. Among many others, in [5], "quasi-polynomials" are considered as candidates for interpolating functions. More specifically, let **R** and **C** be the sets of real and complex numbers respectively. Let **b**_{1}, **b**_{2}, ..., **b**_{n} ∈ **C** and α_{1}, α_{2}, ..., α_{n} ∈ **R** be such that 0 = α_{1} < α_{2} < ... < α_{n}. The function **f**:[0, ∞) → **C** defined by

**f**(*x*) = **b**_{1}*x*^{ α_{1}} + **b**_{2}*x*^{ α_{2}} + ... + **b**_{n}*x*^{ α_{n}}

is called a ( α_{1}, α_{2}, ..., α_{n})-polynomial^{1}. It is shown that, given a set of data pairs (*x*_{1}, **y**_{1}), (*x*_{2}, **y**_{2}), ..., (*x*_{n}, **y**_{n}), where 0 ≤ *x*_{1} < *x*_{2} < ... < *x*_{n} and **y**_{1}, ..., **y**_{n} ∈ **C**, there is then a unique ( α_{1}, α_{2}, ..., α_{n})-polynomial **f** that satisfies

**f**(*x*_{i}) = **y**_{i}, *i* = 1, 2, ..., *n*.

Once existence is shown, it is then important to investigate the properties of the interpolating polynomial. Several such properties are obtained in [5]; in particular, a bound for |**f**(*u*)|, where *u* ∈ [*x*_{1}, *x*_{n}], is derived there.

In this paper, we will be interested in the approximation of a spatial curve (described by a vector function) by ( α_{1}, α_{2}, ..., α_{n})-polynomials, and in their 'distances' from a reference point. More specifically, suppose a space curve in **R**^{m} is described by a vector function **g**:[*x*_{1}, *x*_{n}] → **R**^{m}. If **g** is unknown, but the set of data pairs (*x*_{1}, **y**_{1}), (*x*_{2}, **y**_{2}), ..., (*x*_{n}, **y**_{n}), where 0 ≤ *x*_{1} < *x*_{2} < ... < *x*_{n} and **y**_{1} = **g**(*x*_{1}), ..., **y**_{n} = **g**(*x*_{n}) ∈ **R**^{m}, is available, we are interested in the existence of a function **f**:[0, ∞) → **R**^{m} of the form

**f**(*x*) = **d**_{1}*x*^{ α_{1}} + **d**_{2}*x*^{ α_{2}} + ... + **d**_{n}*x*^{ α_{n}},

where **d**_{1}, **d**_{2}, ..., **d**_{n} ∈ **R**^{m}, such that the condition

**f**(*x*_{i}) = **y**_{i}, *i* = 1, 2, ..., *n*,

is satisfied, as well as in upper bounds for ||**f**(*u*) - ||, where ||·|| is the Euclidean norm for **R**^{m} and is a given vector in **R**^{m}.

We plan to do the following. In the next section, we will take care of the existence and uniqueness of the desired function by introducing generalized Vandermonde determinants. Then we will state the main theorem of our paper. In Section 3, we will derive several preparatory results. Then in Section 4, our main theorem is proved. The final section is devoted to additional remarks and illustrative examples.

**2 Preliminary results**

To facilitate discussions, we recall several definitions and results. Throughout the rest of our discussions, we assume that *n* ≥ 2 (to avoid trivial cases). Let **R**^{n} be the standard set of real *n*-vectors endowed with the usual linear structure and the Euclidean norm. An *n*-vector in **R**^{n} is indicated by **x**, **a**, **b**, **c**, α, β, ... etc. Given an *n*-vector, say **x**, its components are indicated by *x*_{1}, *x*_{2}, ..., *x*_{n}, so that

**x** = (*x*_{1}, *x*_{2}, ..., *x*_{n})^{†},

where the dagger indicates transposition. The difference vector Δ**x** is defined by

Δ**x** = (( Δ**x**)_{1}, ( Δ**x**)_{2}, ..., ( Δ**x**)_{n-1})^{†} = (*x*_{2} - *x*_{1}, *x*_{3} - *x*_{2}, ..., *x*_{n} - *x*_{n-1})^{†}.

For the sake of convenience, we also denote the *i*-th component ( Δ**x**)_{i} of Δ**x** by the forward difference Δ*x*_{i}.

Several subsets of **R**^{n} will be used extensively. For this reason, we will set (cf. [1-9])

and

Another convenient notation has to do with the substitution of a component of a vector **x** = (*x*_{1}, ..., *x*_{n})^{†}. If the *j*-th component of **x** is replaced by *u*, we denote the resulting vector by **x**^{(j)}(*u*), that is,

**x**^{(j)}(*u*) = (*x*_{1}, ..., *x*_{j-1}, *u*, *x*_{j+1}, ..., *x*_{n})^{†}.
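These notational conventions are straightforward to mirror in code; the following Python lines (our own illustration, with made-up sample values) compute the difference vector Δ**x** and the substituted vector **x**^{(j)}(*u*).

```python
import numpy as np

x = np.array([1.0, 2.5, 4.0, 8.0])

# Difference vector: (Δx)_i = x_{i+1} - x_i, one entry shorter than x.
dx = np.diff(x)

def substitute(x, j, u):
    """Return x^(j)(u): a copy of x whose j-th component (1-based,
    matching the text's indexing) is replaced by u."""
    out = x.copy()
    out[j - 1] = u
    return out

y = substitute(x, 2, 3.0)   # (x_1, u, x_3, x_4)
```

Note that `substitute` works on a copy, so **x** itself is left unchanged, as the notation **x**^{(j)}(*u*) suggests.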

Given **x** ∈ **R**^{n}, recall that the Vandermonde determinant in **x** is defined by

Given **x** ∈ **R**^{n} and α ∈ **R**^{n}, we may extend the definition of Vandermonde determinant as follows:

In the above we need to make sure that each entry of the determinant is well defined. Such is the case when *x*_{j} ≥ 0 and α_{i} ≥ 0, where 0^{0} = 1.
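Numerically, such a generalized Vandermonde determinant can be sketched as below. This is our own illustration: we assume the natural entry layout *x*_{j}^{ α_{i}} (the exact row/column convention in [3] may differ by a sign or transposition, which does not affect positivity for this layout), and for α = (0, 1, ..., *n* - 1) the value reduces to the classical product formula.

```python
import numpy as np

def gen_vandermonde_det(x, alpha):
    """Determinant of the matrix with entries x[j] ** alpha[i].

    Sketch only: we assume the entry layout M[i, j] = x[j] ** alpha[i],
    with the convention 0**0 = 1 (which float exponentiation follows).
    For alpha = (0, 1, ..., n-1) this is the classical Vandermonde
    determinant prod_{i<j} (x[j] - x[i]).
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(alpha, dtype=float)
    M = x[np.newaxis, :] ** a[:, np.newaxis]
    return np.linalg.det(M)

x = [0.0, 1.0, 2.0, 3.0]
classical = gen_vandermonde_det(x, [0, 1, 2, 3])
# Classical value: product over i < j of (x_j - x_i), here 12.
expected = np.prod([x[j] - x[i] for j in range(4) for i in range(j)])
```

For 0 ≤ *x*_{1} < ... < *x*_{n} and increasing nonnegative exponents the determinant is strictly positive, in line with Lemma 1 below.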

By means of these notations, given **d**_{1},...,**d**_{n} ∈ **R**^{m} and α ∈ ^{n}, a generalized α-polynomial is a function **f** :[0, ∞) → **R**^{m} defined by

where we have employed the fact that 0^{0} = 1. Given **x** ∈ Ω^{n} and **y**_{1},**y**_{2},..., **y**_{n} ∈ **R**^{m}, if we try to find a generalized α-polynomial, where α ∈ ^{n}, that satisfies

then we are led to a linear system of equations in the variables **d**_{1},...,**d**_{n}. Solving this system of vector equations, we easily see that

Since *V*_{n}(**x**, α) > 0 for **x** ∈ Ω^{n} and α ∈ ^{n} (see [3, p. 212, Theorem 1]), we see further that the desired α-polynomial satisfying (4) can be expressed as
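In computational terms, the coefficients **d**_{1}, ..., **d**_{n} come from a single linear solve against the collocation matrix [*x*_{i}^{ α_{j}}]. The following hedged Python sketch uses hypothetical data; `quasi_poly_fit` and the sample values are ours, not from [5].

```python
import numpy as np

def quasi_poly_fit(x, alpha, Y):
    """Coefficients d_1, ..., d_n (rows of the returned array) of the
    interpolating quasi-polynomial f(u) = sum_j d_j * u**alpha[j].

    x     : n strictly increasing nonnegative abscissas
    alpha : n strictly increasing exponents with alpha[0] = 0
    Y     : n x m array whose row i is the data vector y_i in R^m
    The collocation matrix A[i, j] = x[i] ** alpha[j] is invertible
    under these assumptions, so the solution is unique.
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(alpha, dtype=float)
    A = x[:, np.newaxis] ** a[np.newaxis, :]
    return np.linalg.solve(A, np.asarray(Y, dtype=float))

def quasi_poly_eval(alpha, D, u):
    """Evaluate f(u) = sum_j u**alpha[j] * d_j, a vector in R^m."""
    a = np.asarray(alpha, dtype=float)
    return (float(u) ** a) @ D

# Hypothetical example: a curve in R^2 through three data vectors.
x = [0.5, 1.0, 2.0]
alpha = [0.0, 1.0, 2.5]
Y = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
D = quasi_poly_fit(x, alpha, Y)
```

Evaluating the fitted function at each *x*_{i} recovers the corresponding **y**_{i}, which is exactly condition (4).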

Now that the existence and uniqueness of the desired interpolating polynomial is out of the way, the main theorem to be shown will be the following.

**Theorem 1.** *Given* **x** ∈ Ω^{n} *and* α ∈ ^{n}, *as well as* **y**_{1}, ..., **y**_{n} ∈ **R**^{m}, *the generalized interpolating* α*-polynomial* **f** *in* (5) *satisfies*

*for any u* ∈ [*x*_{1}, *x*_{n}] *and any* ∈ **R**^{m}, *where* θ = min_{1 ≤ i ≤ n-1} Δ α_{i}.

In the above and in later discussions, we employ the greatest integer function [·].

**3 Preparatory lemmas**

We first recall the following result, which was already used in deriving the α-interpolating polynomial.

**Lemma 1.** ([3, p. 212, Theorem 1]). *Let* **x** ∈ Ω^{n} *and* α ∈ ^{n}. *Then V*_{n}(**x**, α) > 0.

**Lemma 2.** ([5, p. 1047, Lemma 2.2]). *Let* **x**, α ∈ Ω^{n} *and* β = ( β_{1}, β_{2}, ..., β_{n-1}), *where* β_{j} = α_{j+1} - α_{1} - 1 *for j* = 1, 2, ..., *n* - 1. *Then*

*where* **t** = (*t*_{1}, *t*_{2}, ..., *t*_{n-1}).

**Lemma 3.** *Let* **x**, α ∈ Ω^{n}. *If* Δ α_{j} ≥ 1 *for j* = 1, 2, ..., *n* - 1, *then*

*where d*_{n} = max{1, α_{n} - α_{n-1} - 1}.

**Proof. ** When *n* = 2, we may see from (7) that

If 0 ≤ α_{2} - α_{1} - 1 ≤ 1, then *d*_{2} = 1 and the function *q*(*t*) = is concave over the interval [*x*_{1}, *x*_{2}]. In view of Hadamard's inequality (see e.g. [2]) and the A-G inequality, we see that

If α_{2} - α_{1} - 1 > 1, then *d*_{2} = α_{2} - α_{1} - 1 and the function *q*(*t*) = is convex over the interval [*x*_{1}, *x*_{2}]. Again, from Hadamard's inequality and the A-G inequality,

These show that (8) is valid for *n* = 2.

We now assume by induction that (8) holds when *n* is replaced by *n* - 1 (where *n* - 1 ≥ 2). By our assumption that Δ α_{j} ≥ 1 for *j* = 1, 2, ..., *n* - 1, we see that

and

Hence,

Furthermore, if *x*_{j} < *t*_{j} < *x*_{j+1} for *j* = 1, 2, ..., *n* - 1, then

Next, if we take α = (0, 1, 2, ..., *n* - 1) in (7), we see that

and hence

With the above information at hand, we may now see that

that is,

Since **x** ∈ Ω^{n} and *d*_{n} ≥ 1, we also have

and

Thus from (14), we may further assert that

The proof is complete.

**Lemma 4.** ([5, p. 1050, Lemma 2.4]). *Let* **x** ∈ **R**^{n}. *Then*

**Lemma 5. **([5, p. 1051, Lemma 2.5]). *Let*

* Then *

* and *

**Lemma 6.** *Let* **x** ∈ Ω^{n}. *Then for u* ∈ [*x*_{1}, *x*_{n}] *and r* ∈ {1, 2, ..., *n*},

**Proof.** Since *u* ∈ [*x*_{1}, *x*_{n}], there is *s* ∈ {1, 2, ..., *n* - 1} such that *x*_{s} ≤ *u* ≤ *x*_{s+1}. Let us move the *r*-th column of the determinant *v*_{n}(**x**^{(r)}(*u*)) and 'insert' it between the *s*-th and the (*s* + 1)-th columns of *v*_{n}(**x**^{(r)}(*u*)). The resulting determinant will be denoted by *v*_{n}(**x**^{*(r)}(*u*)), where

Then *v*_{n}(**x**^{*(r)}(*u*)) ≥ 0. Furthermore, in view of the A-G inequality [6-9] and (15),

where λ_{k} is defined in Lemma 5. Next we assert that

Indeed, if *s* = *r* - 1 or *s* = *r*, then

**x**^{*(r)}(*u*) = **x**^{(r)}(*u*).

If 2 ≤ *r* ≤ *n* - 1, then from Lemma 5,

If *r* = 1, then

If *r = n*, then

Next, suppose *s* ≤ *r* - 2 or *s* ≥ *r* + 1. Let

Since 1 ≤ *r* ≤ *n* and 1 ≤ *s* ≤ *n* - 1, we have 1 ≤ *q* ≤ *n* - 2. If 2 ≤ *r* ≤ *n* - 1, then 1 ≤ *p* ≤ *n* - 1, and hence by Lemma 5,

If *r* = 1 or *r* = *n*, from Lemma 5,

The inequality (20) is thus proved.

By combining (19) and (20), we see that

The proof of (18) is complete.

**Lemma 7.** *Let* **x**, α ∈ Ω^{n} *be such that* Δ α_{j} ≥ 1 *for j* = 1, 2, ..., *n* - 1. *Then for any u* ∈ [*x*_{1}, *x*_{n}] *and j* ∈ {1, 2, ..., *n*},

**Proof.** Since *x*_{1} ≤ *u* ≤ *x*_{n}, there is *s* ∈ {1, 2, ..., *n* - 1} such that 0 ≤ *x*_{s} ≤ *u* ≤ *x*_{s+1}. Let us move the *j*-th column of the determinant *V*_{n}(**x**^{(j)}(*u*), α) and 'insert' it between the *s*-th and the (*s* + 1)-th columns of *V*_{n}(**x**^{(j)}(*u*), α). The resulting determinant will be denoted by *V*_{n}(**x**^{*(j)}(*u*), α), where

Since

by Lemma 1, we see that *V*_{n}(**x**^{*(j)}(*u*), α) ≥ 0. Furthermore, in view of the following simple fact,

we may see that

Thus, by (22), (23), (8) and (18), we have

The proof is complete.

**4 Proof of Theorem 1**

First we point out that the generalized α-polynomial in Theorem 1 satisfies

for any ∈ **R**^{m}. Indeed, the case where *n* = 2 is easy. Suppose *n* ≥ 3. Since α ∈ ^{n}, we see that

By Laplace expansion, we then have

Hence

We consider two cases:

In the former case, Δ α_{j} ≥ 1 for *j* = 1, 2, ..., *n* - 1. Therefore, by (24) and (21),

where

In the latter case, we have

Let

and

Then

where

Our Theorem is thus proved.

As an immediate consequence of our Theorem and the inequality ||**f**(*u*)|| ≤ |||| + ||**f**(*u*) - ||, we have the following.

**Corollary 1.** *Under the assumptions of Theorem *1*, we have *

We remark that Example 5.2 in [5, p. 1058] shows that the equality sign in (6) may hold.

**5 Remarks and Examples**

In Theorem 1, the number θ = min_{1 ≤ i ≤ n-1} Δ α_{i} can be an arbitrary positive number. However, in the case where the powers α_{1}, α_{2}, ..., α_{n} are nonnegative integers,

θ = min { α_{j+1} - α_{j} | *j* = 1, 2, ..., *n* - 1} ≥ 1.

In particular, if there are two consecutive powers α_{i} and α_{i+1} such that α_{i+1} - α_{i} = 1, then θ = 1. Such a choice can simplify matters since

In another direction, the number θ may be an arbitrary positive number but the sequence in Theorem 1 may be uniform in the sense that

where *h* is a constant.

In [5, p. 1047, Theorem 1.1], we obtained the following result: if **y** ∈ **C**^{n}, **x** ∈ Ω^{n} and α ∈ ^{n}, then for *u* ∈ [*x*_{1}, *x*_{n}], we have

Now we show that inequality (6) is a strengthening of inequality (26) in the case where *n* = 5 or *n* ≥ 7 and Δ*x*_{i} = *h*, *i* = 1, 2, ..., *n* - 1. To see this, take = 0 and

in (25); then

Thus it suffices to show that, when *n* = 5 or *n* ≥ 7,

Indeed, note first (from [5, p. 1053, Lemma 2.8]) that

Therefore, it suffices to show (28) by showing

when *n* = 5 or *n* ≥ 7. The cases *n* = 5, 7, 8, 9 can be proved by evaluating Φ_{5}, Φ_{7}, Φ_{8}, Φ_{9} directly ( Φ_{5} ≈ 1.0833, Φ_{7} ≈ 1.4444, Φ_{8} ≈ 1.0176, Φ_{9} ≈ 1.1859). As for the cases where *n* ≥ 10, let us first note that the number

Then, by the well-known fact that *t* > ln(1 + *t*) for *t* ∈ (-1, + ∞)\{0}, we see that

Hence

as desired.

We remark that (26) has been shown in [5] when *n* ≥ 2 and the sequence is not necessarily uniform. However, the data points **y**_{1}, **y**_{2}, ..., **y**_{n} were assumed there to be complex numbers, and the reference vector was assumed to be 0.

While it is important to estimate the size ||**f**(*u*)|| of the interpolating α-polynomial in reference to the origin, in applications it is also important to find upper bounds for ||**f**(*u*) - || where the reference point is an important landmark. For instance, given *n* vectors **y**_{1}, ..., **y**_{n} in **R**^{m}, the Fermat point relative to these vectors is the vector such that

is minimized. In view of its definition, the Fermat point is clearly of practical importance. We will illustrate our result by an example in which the Fermat point is used as our reference vector.
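The Fermat point has no closed form in general, but Weiszfeld's classical fixed-point iteration approximates it well. The following Python sketch is our own illustration with made-up points (it is not the paper's computation), and we check optimality numerically rather than against a known value.

```python
import numpy as np

def fermat_point(points, iters=200):
    """Approximate the Fermat point (geometric median) of points in R^m
    by Weiszfeld's fixed-point iteration.

    Each step replaces the current guess c by a weighted average of the
    points, with weights 1 / ||y_i - c||.  Sketch only: no special
    handling when an iterate lands exactly on a data point.
    """
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)                     # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(P - c, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)     # guard against division by 0
        c = (w[:, np.newaxis] * P).sum(axis=0) / w.sum()
    return c

# Made-up configuration in R^3 with an interior (non-vertex) median.
pts = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 4.0, 0.0), (1.0, 1.0, 5.0)]
center = fermat_point(pts)
```

Because the objective is convex, a converged iterate can be validated by checking that small coordinate perturbations do not decrease the sum of distances.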

Let *m* = 3,

*p*_{1}(*u*) = 20[ln(1 + *u*)]^{2.1},

*p*_{2}(*u*) = -20(2 tan *u* - 2 sin *u*)^{1.1},

*p*_{3}(*u*) = (*e*^{u} - 1)^{5.4},

and *p*(*u*) = *p*_{1}(*u*) + *p*_{2}(*u*) + *p*_{3}(*u*), for *u* ∈ [0,1]. Let

**g**(*u*) = (*u, p*(*u*), 3*u*^{2.1})^{†}, *u* ∈ [0,1].

Take 6 data pairs

(*x*_{1}, **g**(*x*_{1})^{†})^{†} ≈ (0, 0, 0, 0)^{†},

(*x*_{2}, **g**(*x*_{2})^{†})^{†} ≈ (0.2, 0.2, 0.46124, 0.10216)^{†},

(*x*_{3}, **g**(*x*_{3})^{†})^{†} ≈ (0.4, 0.4, 1.0338, 0.43797)^{†},

(*x*_{4}, **g**(*x*_{4})^{†})^{†} ≈ (0.6, 0.6, 0.30170, 1.02622)^{†},

(*x*_{5}, **g**(*x*_{5})^{†})^{†} ≈ (0.8, 0.8, -2.36573, 1.87763)^{†},

(*x*_{6}, **g**(*x*_{6})^{†})^{†} ≈ (1, 1, -1.821068, 3)^{†}.
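For reproducibility, the tabulated data pairs can be regenerated directly from the definition of **g**; a short Python sketch (ours, following the formulas for *p*_{1}, *p*_{2}, *p*_{3} above):

```python
import math

def p(u):
    """p(u) = p1(u) + p2(u) + p3(u) from the example, for u in [0, 1].

    On [0, 1] we have tan u >= sin u, so the base of the 1.1 power
    is nonnegative and the expression is well defined.
    """
    p1 = 20.0 * math.log(1.0 + u) ** 2.1
    p2 = -20.0 * (2.0 * math.tan(u) - 2.0 * math.sin(u)) ** 1.1
    p3 = (math.exp(u) - 1.0) ** 5.4
    return p1 + p2 + p3

def g(u):
    """The space curve g(u) = (u, p(u), 3 u^{2.1})."""
    return (u, p(u), 3.0 * u ** 2.1)

row2 = g(0.2)   # should match the tabulated (0.2, 0.46124, 0.10216)
```

Evaluating `g` at the six abscissas 0, 0.2, ..., 1 reproduces the table entries to the displayed precision.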

Since

we will try to find the interpolating function in the form

where the exponent 4.3 is chosen so that θ = 1 (see the beginning remark in this section). By (5), we see that

for *t * ∈ [0,1]. See Figure 1.

Next, the Fermat point = (*p, q, r*)^{†} relative to the vectors **g**(*x*_{1}), **g**(*x*_{2}), **g**(*x*_{3}), **g**(*x*_{4}), **g**(*x*_{5}) and **g**(*x*_{6}) is found by minimizing

By using the computing tool Mathematica,

min *H*(*p, q, r*) ≈ *H*(0.48629, 0.36004, 0.55964) ≈ 7.81586,

we see that

(*p, q, r*)^{†} ≈ (0.48629, 0.36004, 0.55964)^{†}.

Finally, the true bound is given by

and

with 4.25302 - 4.06571 = 0.18731.

The above example shows that Theorem 1 offers a closed ball, with a chosen center, that contains the approximating curve described by **f** over the interval [*x*_{1}, *x*_{n}]. With a properly chosen reference point, tracking or control of spatial flying objects may then be feasible.

We close our investigation by remarking that results similar to Theorem 1 can be established if **R**^{m} is replaced by more general linear spaces endowed with appropriate algebraic operations and compatible norms, and many methods and techniques related to the mathematical inequalities used in this article can be found in [1-3, 5-9].

**REFERENCES**

[1] P.S. Bullen, D.S. Mitrinović and P.M. Vasić, *Means and Their Inequalities.* Reidel, Dordrecht/Boston/Lancaster/Tokyo (1988).

[2] A.M. Fink and Z. Pales, *What is Hadamard's inequality?* Appl. Anal. Discrete Math., **1** (2007), 29-35. (Available at http://pefmath.etf.bg.ac.yu).

[3] J.J. Wen and Z.H. Zhang, *Vandermonde-type determinants and inequalities.* AMEN, **6** (2006), 211-218.

[4] R. Aldrovandi, *Special Matrices of Mathematical Physics: Stochastic, Circulant and Bell Matrices.* World Scientific (2001).

[5] J.J. Wen and W.L. Wang, *The inequalities involving generalized interpolation polynomial.* Computers and Mathematics with Applications, **56** (2008), 1045-1058.

[6] J.E. Pecaric, J.J. Wen, W.L. Wang and L. Tao, *A generalization of Maclaurin's inequalities and its applications.* Mathematical Inequalities and Applications, **8**(4) (2005), 583-598.

[7] J.J. Wen and Z.H. Zhang, *Jensen type inequalities involving homogeneous polynomials.* Journal of Inequalities and Applications, Volume 2010, Article ID 850215, 21 pages. doi: 10.1155/2010/850215.

[8] J.J. Wen and W.L. Wang, *The optimization for the inequalities of power means.* Journal of Inequalities and Applications, Volume 2006, Article ID 46782, pages 1-25. doi: 10.1155/JIA/2006/46782.

[9] J.J. Wen and W.L. Wang, *Chebyshev type inequalities involving permanents and their applications.* Linear Algebra and its Applications, **422**(1) (2007), 295-303.

[10] P.J. Davis, *Interpolation and Approximation.* Dover (1975).

Received: 20/VI/10.

Accepted: 08/II/11.

#CAM-226/10.

1 It is called a generalized polynomial in [5], but it is better to avoid this term since there are many generalizations of polynomials.