
## Computational & Applied Mathematics

*On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.31 no.2 São Carlos 2012

#### http://dx.doi.org/10.1590/S1807-03022012000200002

**Solutions to the recurrence relation u_{n+1} = v_{n+1} + u_{n} ⊗ v_{n} in terms of Bell polynomials**

**Christopher S. Withers^{I}; Saralees Nadarajah^{II,}^{*}**

^{I}Applied Mathematics Group, Industrial Research Limited, Lower Hutt, New Zealand. E-mail: c.withers@irl.cri.nz

^{II}School of Mathematics, University of Manchester, Manchester M13 9PL, UK. E-mail: Saralees.Nadarajah@manchester.ac.uk

**ABSTRACT**

Motivated by time series analysis, we consider the problem of solving the recurrence relation *u*_{n+1} = *v*_{n+1} + *u _{n}* ⊗ *v _{n}* for *n* ≥ 0 for *u _{n}*, given the sequence *v _{n}*. A solution is given as a Bell polynomial. When *v _{n}* can be written as a weighted sum of *n*th powers, then the solution *u _{n}* also takes this form.

**Mathematical subject classification:** 33E99.

**Key words:** autoregressive processes, Bell polynomials, convolution, maximum.

**1 Introduction and summary**

We define the convolution of sequences {*a _{n}*, *b _{n}* : *n* ≥ *m*} as

(*a* ⊗ *b*)_{n} = Σ_{k=m}^{n-m} *a _{k}b*_{n-k}

for *n* ≥ 0. We consider the recurrence equation for *u _{n}*,

*u*_{n+1} = *v*_{n+1} + *u _{n}* ⊗ *v _{n}*    (1.1)

for *n* ≥ 0, where *u*_{0} = *v*_{0}, and *v _{n}* is a given sequence.
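As a minimal numerical sketch of how (1.1) unwinds (the function names below are ours, not the paper's), the recurrence can be iterated directly once the sequence *v _{n}* is given:

```python
def conv(a, b, n):
    """Ordinary convolution (a ⊗ b)_n = sum_{k=0}^{n} a_k b_{n-k}."""
    return sum(a[k] * b[n - k] for k in range(n + 1))

def solve_recurrence(v, N):
    """Iterate u_{n+1} = v_{n+1} + (u ⊗ v)_n with u_0 = v_0; returns u_0, ..., u_N."""
    u = [v[0]]
    for n in range(N):
        u.append(v[n + 1] + conv(u, v, n))
    return u

# Illustrative case: v_n = 1 for all n gives u_n = 2^n
# (here V(t) = 1/(1 - t), so 1 + tU(t) = (1 - t)/(1 - 2t)).
u = solve_recurrence([1] * 11, 10)
print(u[:5])   # [1, 2, 4, 8, 16]
```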

The need for solutions to (1.1) arises with respect to the distribution of the maximum of first order autoregressive processes and more generally to that of autoregressive processes of any order, see Withers and Nadarajah (2010). There are many papers studying the distribution of the maximum of autoregressive processes, see, for example, Chernick and Davis (1982), McCormick and Mathew (1989), McCormick and Park (1992), Borkovec (2000), Ol'shanskii (2004) and Elek and Zempléni (2008). However, the results either give the limiting extreme value distributions or assume that the errors come from a specific class (for example, uniformly distributed errors, negative binomial errors, *ARCH*(1) errors, etc.). We are aware of no work giving the *exact* distribution of the maximum of autoregressive processes. As explained in Withers and Nadarajah (2010), solutions to (1.1) will lead to the exact distribution of the maximum of autoregressive processes of any order.

The aim of this note is to provide solutions for (1.1) that are accessible to all scientists, not just mathematicians. These solutions are given in terms of Bell polynomials. In-built routines for Bell polynomials are available in most computer algebra packages. For example, see BellB in Mathematica and IncompleteBellPoly in Matlab. So, the solutions given will be accessible to most practitioners.

In Section 2, a solution for (1.1) is given as a Bell polynomial. In order to investigate the behavior of *u _{n}* for large *n*, an assumption needs to be made on the behavior of *v _{n}* for large *n*. In Section 3, we show that when *v _{n}* can be written as a weighted sum of *n*th powers, then the solution *u _{n}* also has this form. This assumption is extended in Section 4 to the case when the weights are not constants, but polynomials in *n*. Some conclusions and future work are noted in Section 5.

**2 The solution as a Bell polynomial**

Theorem 2.1 provides an explicit solution of the recurrence relation (1.1) in terms of the *complete ordinary Bell polynomial* B_{n}(**w**), a function of (*w*_{1}, ..., *w _{n}*) defined for any sequence **w** = (*w*_{1}, *w*_{2}, ...) by the formal generating function

(1 − *W*(*t*))^{-1} = Σ_{n=0}^{∞} B_{n}(**w**) *t ^{n}*,

where

*W*(*t*) = Σ_{n=1}^{∞} *w _{n}t ^{n}*.

The solution given by Theorem 2.1 can be computed by a single call to the in-built routine, BellB, in Mathematica or some other equivalent computer package. We believe that this is the most direct and the most efficient way to calculate a solution for the recurrence relation (1.1). However, in the absence of computer packages, the complete ordinary Bell polynomial can be calculated using the recurrence relation derived by Theorem 2.2.

**Theorem 2.1.** *The recurrence relation *(1.1)* has the solution*

*u*_{n-1} = B_{n}(**w**)    (2.1)

*for n > *0*, where w _{n} = v*_{n-1}.

**Proof. ** Multiply (1.1) by *t ^{n}* and sum from *n* = 0 to obtain

(*U*(*t*) − *V*(*t*))/*t* = *U*(*t*)*V*(*t*),

where *U*(*t*) = Σ_{n=0}^{∞} *u _{n}t ^{n}* and *V*(*t*) = Σ_{n=0}^{∞} *v _{n}t ^{n}*. So

1 + *tU*(*t*) = (1 − *tV*(*t*))^{-1} = (1 − *W*(*t*))^{-1} = Σ_{n=0}^{∞} B_{n}(**w**) *t ^{n}*,    (2.2)

since *W*(*t*) = *tV*(*t*). Taking the coefficient of *t ^{n}* in the last line gives the explicit solution (2.1).

**Theorem 2.2.** *A recurrence relation for b _{n}* = B_{n}(**w**) *is given by*

*b _{n}* = *w _{n}* ⊗ *b _{n}* = Σ_{k=0}^{n} *w _{k}b*_{n-k}

*for n* ≥ 1, *where w*_{0} = 0 *and b*_{0} = 1. *For example,*

b_{1} = w_{0}b_{1} + w_{1}b_{0} = w_{1},

b_{2} = w_{0}b_{2} + w_{1}b_{1} + w_{2}b_{0} = w_{1}b_{1} + w_{2} = w_{1}^{2} + w_{2},

b_{3} = w_{0}b_{3} + w_{1}b_{2} + w_{2}b_{1} + w_{3}b_{0} = w_{1}b_{2} + w_{2}b_{1} + w_{3} = w_{1}^{3} + 2w_{1}w_{2} + w_{3},

*and so on*.

**Proof. ** Follows by taking the coefficient of *t ^{n}* in (1 − *x*)^{-1} − 1 = *x*(1 − *x*)^{-1}, where *x* = *W*(*t*).
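The recurrence of Theorem 2.2 is straightforward to implement. The sketch below (our own illustrative code, with arbitrary numeric weights) checks the closed forms quoted above, and then checks Theorem 2.1 by comparing the Bell-polynomial solution with direct iteration of (1.1):

```python
def bell_complete(w, N):
    """b_n = B_n(w) via Theorem 2.2: b_n = (w ⊗ b)_n with w_0 = 0, b_0 = 1."""
    b = [1]
    for n in range(1, N + 1):
        b.append(sum(w[k] * b[n - k] for k in range(1, n + 1)))
    return b

# Closed forms quoted after Theorem 2.2, with arbitrary numeric weights:
w = [0, 2, 3, 5]
b = bell_complete(w, 3)
assert b[1] == w[1]
assert b[2] == w[1] ** 2 + w[2]
assert b[3] == w[1] ** 3 + 2 * w[1] * w[2] + w[3]

# Theorem 2.1: with w_n = v_{n-1}, the solution of (1.1) is u_{n-1} = B_n(w).
v = [3, 1, 4, 1, 5]
w = [0] + v[:4]                      # w_n = v_{n-1}
b = bell_complete(w, 4)
u = [v[0]]                           # direct iteration of (1.1)
for n in range(3):
    u.append(v[n + 1] + sum(u[k] * v[n - k] for k in range(n + 1)))
assert b[1:] == u
```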

An alternative to the complete ordinary Bell polynomial is the *partial ordinary Bell polynomial* B_{nj}(**w**), defined by

*W*(*t*)^{j} = Σ_{n=j}^{∞} B_{nj}(**w**) *t ^{n}*

for 0 ≤ *j* ≤ *n*. For example, B_{n0} = δ_{n0}, where δ_{00} = 1 and δ_{n0} = 0 for *n* > 0. These polynomials are tabulated on page 309 of Comtet (1974) for 1 ≤ *n* ≤ 10. Recurrence formulas for them are also given by Comtet (1974). They can be computed by a single call to the in-built routine, IncompleteBellPoly, in Matlab.

Theorem 2.3 states the relationship between the complete ordinary Bell polynomial and the partial ordinary Bell polynomial. It also provides a recurrence relation for the latter. Corollary 2.1 derives the relationship between *u _{n}* and *v _{n}*. Corollary 2.2 derives the reciprocal relationship between *v _{n}* and *u _{n}*.

**Theorem 2.3.** *We have*

B_{n}(**w**) = Σ_{j=0}^{n} B_{nj}(**w**).    (2.3)

*A recurrence relation for b _{nj}* = B_{nj}(**w**) *is*

*b*_{n, j_{1}+j_{2}} = Σ_{k} *b*_{k, j_{1}}*b*_{n-k, j_{2}}    (2.4)

*for j*_{1}, *j*_{2} > 0. *For example,* *b _{nj}* = Σ_{k} *w _{k}b*_{n-k, j-1} *for j* ≥ 1.

**Proof. ** Take the coefficient of *t ^{n}* in the expansion (1 − *W*(*t*))^{-1} = Σ_{j=0}^{∞} *W*(*t*)^{j} to obtain (2.3). The recurrence relation (2.4) follows by taking the coefficient of *t ^{n}* in *W*(*t*)^{j_{1}+j_{2}} = *W*(*t*)^{j_{1}}*W*(*t*)^{j_{2}}. The proof is complete.
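Since B_{nj}(**w**) is the coefficient of *t ^{n}* in *W*(*t*)^{j}, the partial polynomials can be computed by repeated truncated polynomial multiplication. The sketch below (our own, with arbitrary weights) verifies relation (2.3) numerically against the recurrence of Theorem 2.2:

```python
def poly_mul(p, q, N):
    """Product of polynomials (coefficient lists), truncated beyond degree N."""
    r = [0] * (N + 1)
    for i, pi in enumerate(p[: N + 1]):
        for j, qj in enumerate(q[: N + 1 - i]):
            r[i + j] += pi * qj
    return r

def bell_partial(w, N):
    """B[n][j] = coefficient of t^n in W(t)^j, i.e. the partial ordinary
    Bell polynomial B_{nj}, where W(t) = sum_{n>=1} w_n t^n."""
    W = [0] + w[1 : N + 1]
    Wj = [1] + [0] * N                       # W^0
    B = [[0] * (N + 1) for _ in range(N + 1)]
    B[0][0] = 1                              # B_{n0} = delta_{n0}
    for j in range(1, N + 1):
        Wj = poly_mul(Wj, W, N)
        for n in range(N + 1):
            B[n][j] = Wj[n]
    return B

w = [0, 2, 3, 5, 7]
B = bell_partial(w, 4)
b = [1]                                      # complete polynomials (Theorem 2.2)
for n in range(1, 5):
    b.append(sum(w[k] * b[n - k] for k in range(1, n + 1)))
# Relation (2.3): B_n = sum_j B_{nj}.
assert all(b[n] == sum(B[n][j] for j in range(n + 1)) for n in range(5))
```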

**Corollary 2.1.*** We have*

**Proof. ** Applying (2.3) to (2.1), and reading the partial polynomials from Comtet's table, we obtain the results.

**Corollary 2.2.** *We have v _{n}* = −B_{n+1}(**x**) *for n* ≥ 0, *where x _{n}* = −*u*_{n-1}.

**Proof. ** This follows from *W*(*t*) = 1 − (1 − *X*(*t*))^{-1}, where *X*(*t*) = −*tU*(*t*) = Σ_{n} *x _{n}t ^{n}*. Alternatively, it follows by writing (1.1) as *u′*_{n+1} = *v′*_{n+1} + *u′ _{n}* ⊗ *v′ _{n}*, *n* ≥ 0, *u′*_{0} = *v′*_{0}, where *u′ _{n}* = −*v _{n}*, *v′ _{n}* = −*u _{n}*, and applying (2.1).
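Corollary 2.2 can also be checked numerically. The following sketch (our own illustration, with arbitrary data) recovers *v _{n}* from *u _{n}* via the Bell recurrence of Theorem 2.2:

```python
def bell_complete(w, N):
    """b_n = B_n(w) via the recurrence b_n = (w ⊗ b)_n, w_0 = 0, b_0 = 1."""
    b = [1]
    for n in range(1, N + 1):
        b.append(sum(w[k] * b[n - k] for k in range(1, n + 1)))
    return b

v = [2, -1, 3, 0, 5]
u = [v[0]]                             # iterate (1.1) forward
for n in range(4):
    u.append(v[n + 1] + sum(u[k] * v[n - k] for k in range(n + 1)))

x = [0] + [-ui for ui in u]            # x_n = -u_{n-1}
b = bell_complete(x, 5)
# Corollary 2.2: v_n = -B_{n+1}(x).
assert all(v[n] == -b[n + 1] for n in range(5))
```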

**3 The weighted sum of powers solution**

The solution (2.1) gives no indication of the behavior of *u _{n}* for large *n*. To obtain this, we need to make an assumption on the behavior of *v _{n}* for large *n*. Then if *V*(*t*) is tractable, we can obtain *u*_{n-1}, *n* ≥ 1, as the coefficient of *t ^{n}* in (2.2). Theorems 3.1 and 3.2 illustrate this by two methods.

^{n}**Theorem 3.1.** * Suppose v _{n} is a weighted sum of powers, at least for large enough n, say *

*for n > n*

_{0},

*where*1

*∞*

__<__r__<__*, n*

_{0}

*0*

__>__*. Suppose also*{δ

*}*

_{j}*are the roots of*

*where p*_{n+1}(δ) = δ^{n+1} -*v _{n}*⊗ δ

^{n}.

*Assume that*{

*b*}

_{j}*are all non-zero and that*{ν

_{j}}

*are all non-zero and distinct. Assume also that the r roots of*(3.2)

*are all distinct. Then u*

_{n}has the form*for n > m*

_{0}

*, where J*

__<__I' = r + n_{0}

*, m*

_{0}

*=*2

*n*

_{0}

*-*1

*if n*

_{0}

*1*

__>__*and m*

_{0}

*=*0

*if n*

_{0}= 0.

**Proof. ** Having found {δ_{j}}, {γ_{j}} are the roots of (3.4) for ν = ν_{1}, ..., ν_{I}, where *A _{jn}*(ν) = δ_{j}^{n}/(δ_{j} − ν) and *q*_{n+1}(ν) = ν^{n+1} + *u _{n}* ⊗ δ^{n}. Note (3.4) can be written

**A**_{n}**γ** = **Q**_{n},    (3.5)

where (**A**_{n})_{kj} = *A _{jn}*(ν_{k}), **Q**_{n} = (*Q*_{n1}, ..., *Q _{nI}*)′ and *Q _{nk}* = *q _{n}*(ν_{k}). So, if *J* = *r*, a solution is **γ** = **A**_{n}^{-1}**Q**_{n}.

If *r* = ∞, numerical solutions can be found by truncating the infinite matrix **A**_{n} and infinite vectors (**Q**_{n}, **γ**) to an *N* × *N* matrix and *N*-vectors, then increasing *N* until the desired precision is reached. The proof, which is by substitution, assumes that {δ_{j}, ν_{j}} are all distinct. The proof relies on the fact that if Σ_{j=1}^{J} *a _{j}r _{j}*^{n} = 0 for 1 ≤ *n* ≤ *J* and *r*_{1}, ..., *r _{J}* are distinct, then *a*_{1} = ... = *a _{J}* = 0, since det(*r _{j}*^{n} : 1 ≤ *n*, *j* ≤ *J*) ≠ 0.

If *J* < *r*, a solution is given by dropping *r* − *J* rows of (3.5). If *J* > *r*, there are not enough equations for a solution by this method. If *n*_{0} ≥ 2, the values of *u _{n}* for *n* < 2*n*_{0} − 2 can be found from (1.1) or (2.1) or the extension of (2.5).

Now suppose that *n*_{0} ≥ 1 and *n* = 2*n*_{0} − 1. Then *s*_{2} = 0 so that the result remains true.

Set *f*(δ) equal to the difference of the left and right hand sides of (3.2). Then a sufficient condition for its roots to be distinct is that *f*(δ) = 0 implies *f′*(δ) ≠ 0.

The second and more general method is to compute *u _{n}* from its generating function via the generating function of *v _{n}* and (2.2). The advantages of this method are: (i) it always applies, provided that the generating functions exist near *t* = 0; (ii) we shall see that it becomes clear how to extend the method when multiple roots exist.

**Theorem 3.2.** *Suppose *(3.1)* holds. Then u _{n} has the form*

*u _{n}* = Σ_{j} *p _{j}*(*n*) *T _{j}*^{n}    (3.6)

*for some R, T _{j} and p _{j}*(*n*)*, a polynomial of some degree n _{j}*.

**Proof. ** Again we begin with (3.1). The generating function is *V*(*t*) = *V*_{0} + *V*_{1}, where *V*_{0} = Σ_{n≤n_{0}} *v _{n}t ^{n}*, *V*_{1} = Σ_{j} *b _{j}s _{j}*^{n_{0}+1}/(1 − *s _{j}*), *s _{j}* = ν_{j}*t*. Set *D* = Π_{j}(1 − *s _{j}*), *N* = *DV*_{1}. Then *D*(1 − *tV*(*t*)) = *L*, where *L* = *D*(1 − *tV*_{0}) − *tN* is a polynomial of degree *J* = *r* + *n*_{0} > *r*, assuming that *n*_{0} > 0. So, we can write *L* = Π_{k=1}^{J}(1 − *t _{k}t*), say, and

1 + *tU*(*t*) = (1 − *tV*(*t*))^{-1} = *D*/*L*.

Suppose first that {*t _{k}*} are distinct. Then by the usual partial fractions expansion,

*D*/*L* = Σ_{k=1}^{J} *q _{k}*(*t _{k}*^{-1})/(1 − *t _{k}t*),

where *q _{k}*(*t*) = *D*/*L _{k}* and *L _{k}* = *L*/(1 − *tt _{k}*). So, *u*_{n-1} = Σ_{k=1}^{J} *q _{k}*(*t _{k}*^{-1}) *t _{k}*^{n} for *n* ≥ 1. If *n*_{0} = 0 then *V*_{0} = 0, *U*(*t*) = *N*/*L* = Σ_{k} *m _{k}*(*t _{k}*^{-1})/(1 − *tt _{k}*), where *m _{k}*(*t*) = *N*/*L _{k}*, so that *u _{n}* = Σ_{k} *m _{k}*(*t _{k}*^{-1}) *t _{k}*^{n} for *n* ≥ 0.

Now suppose that {*t _{k}*} are not distinct. Then we can write *L* = Π_{j}(1 − *tT _{j}*)^{n_{j}}, where Σ_{j} *n _{j}* = *J* and {*T _{k}*} are distinct. By the general partial fraction expansion (compare Section 2.10 of Gradshteyn and Ryzhik (2007)),

*D*/*L* = Σ_{j} Σ_{k=1}^{n_{j}} *c _{j,k}*/(1 − *tT _{j}*)^{k},

where *c*_{j, n_{j}-k+1} = *Q _{j}*^{(k-1)}(*T _{j}*^{-1})/(*k* − 1)! and *Q _{j}*(*t*) = (1 − *tT _{j}*)^{n_{j}}*D*/*L*. So, (3.6) holds for *n* ≥ 1, where *p _{j}*(*n*) is read off by taking the coefficient of *t*^{n+1} in the *j*th group of terms. So, *p _{j}*(*n*) is a polynomial of degree *n _{j}*. If *n*_{0} = 0 just replace *D*/*L* by *N*/*L* as for the case of distinct roots; then *u _{n}* is equal to the right hand side of (3.6) for *n* ≥ 0.

Corollaries 3.1 to 3.3 deduce the asymptotics of *u _{n}* and *v _{n}* as *n* → ∞ when (3.1), (3.3) and (3.6) are satisfied. Further particular cases are considered by Corollaries 3.4 to 3.7.

**Corollary 3.1.** * If *(3.1)* holds then*

*as n *→ ∞,* where*

*Suppose in addition r = *2 *and n*_{0} = 1*. By *(3.1)*, V*(*t*) = Σ_{j} *b _{j}*(1 − ν_{j}*t*)^{-1} *= N*(*t*)*/D when* |ν_{j}*t*| < 1*, where N*(*t*)* = b*_{1}(1 − ν_{2}*t*)* + b*_{2}(1 − ν_{1}*t*)*, D = *Σ_{i} *d _{i}t ^{i}* *for d*_{0} = 1*, d*_{1} = −ν_{1} − ν_{2} *and d*_{2} = ν_{1}ν_{2}. *So*, 1 − *tV*(*t*) = *M*(*t*)/*D, where M*(*t*) = *D* − *tN*(*t*) = 1 − *c*_{1}*t* + *c*_{2}*t*^{2}, *c*_{1} = ν_{1} + ν_{2} + *b*_{1} + *b*_{2} *and c*_{2} = ν_{1}ν_{2} + *b*_{1}ν_{2} + *b*_{2}ν_{1}.

**Corollary 3.2.*** If *(3.3)* holds then*

*as n *→ ∞,* where*

**Corollary 3.3.*** If *(3.6)* holds, set T _{j} = r _{j}* exp(*i*ψ_{j})*, r = *max{*r _{j}*} *and N = *max{*n _{j}* : *r _{j}* = *r*}*. Then*

*for*

**Corollary 3.4.** * If c*_{2} ≠ 0* and M*(*t*)* has two distinct roots then the solution *(3.3)* holds with J = *2.

**Corollary 3.5.** *Suppose c*_{2} = 0 ≠ *c*_{1}*. Then M*(*t*)* has one root and *(3.3)* holds with J = *1*. Alternatively, since the right hand side of *(2.2)* is equal to D*(1 − *c*_{1}*t*)^{-1} − 1*, we obtain u*_{n-1} *as the coefficient of t ^{n} for n* ≥ 1.

**Corollary 3.6.** *Suppose c*_{1} = *c*_{2} = 0*. Then M*(*t*) = 1* and the right hand side of *(2.2)* is equal to D, giving u*_{0}* = d*_{1}*, u*_{1}* = d*_{2}* and u _{n} = *0 *for n* ≥ 2.

**Corollary 3.7.** *Suppose c*_{2} ≠ 0* and M*(*t*)* has two equal roots, say t = t*_{1}*. The root satisfies M*(*t*)* = M′*(*t*)* = *0*, giving t*_{1}* = c*_{1}*/*(2*c*_{2})* = *2*/c*_{1}* and *4*c*_{2} = *c*_{1}^{2}*. So, M*(*t*) = (1 *− t/t*_{1})^{2}* and the right hand side of *(2.2)* is equal to D*(1 − *t*/*t*_{1})^{-2}*, giving*

*u*_{n-1} = *t*_{1}^{-n}{*n* + 1 + *nd*_{1}*t*_{1} + (*n* − 1)*d*_{2}*t*_{1}^{2}}

*for n* ≥ 1*. So, in this case, the solution is a weighted power with the weight linear in n.*
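As a numerical check of the weighted-power case (our own example; *v _{n}* is a sum of two geometric terms for every *n* ≥ 0, the *n*_{0} = 0 situation): with *D*(*t*) = (1 − ν_{1}*t*)(1 − ν_{2}*t*) and *M*(*t*) = *D*(*t*) − *tN*(*t*), (2.2) gives 1 + *tU*(*t*) = *D*/*M*, so *u _{n}* must satisfy the order-2 linear recurrence determined by *M*:

```python
from fractions import Fraction

# r = 2: v_n = b1*nu1^n + b2*nu2^n for every n >= 0.
b1, b2 = Fraction(1), Fraction(2)
nu1, nu2 = Fraction(1, 2), Fraction(3)
N = 12
v = [b1 * nu1 ** n + b2 * nu2 ** n for n in range(N + 1)]
u = [v[0]]                             # iterate (1.1) exactly
for n in range(N):
    u.append(v[n + 1] + sum(u[k] * v[n - k] for k in range(n + 1)))

# M(t) = D - tN(t) = 1 - c1 t + c2 t^2, so (1 + tU)M = D (degree 2) forces
# u_n = c1 u_{n-1} - c2 u_{n-2} for n >= 2.
c1 = nu1 + nu2 + b1 + b2
c2 = nu1 * nu2 + b1 * nu2 + b2 * nu1
assert all(u[n] == c1 * u[n - 1] - c2 * u[n - 2] for n in range(2, N + 1))
```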

**4 An extension to polynomial weights**

The weighted sum of powers assumption for *v _{n}* arose naturally in Withers and Nadarajah (2010) assuming that a certain matrix had diagonal Jordan form, or at least that the eigenvalues of the non-diagonal Jordan blocks are zero. When this is not the case, we showed in Withers and Nadarajah (2010) that (4.1) holds for *n* ≥ 1, so that *v*_{1} = Σ_{i} *w*_{i0}. For this more general case, the method of obtaining *u _{n}* from *v _{n}* via its generating function, (2.2), still holds, as shown by Theorem 4.1.

_{n}**Theorem 4.1.** * Suppose *(4.1) *holds. Then u _{n} has the form*

*for n* __>__ 1 *for some J, c _{k} and t_{k}. Note* (4.2)

*is of the form*(3.3)

*with n*

_{0}= 1.

**Proof. ** Setting *s = *ν*t* and *D = d/ds*,

where

for |*s*| < 1. So, setting *s _{i} = *ν

*for |*

_{i}t*s*| < 1, (4.1) gives the generating function for {

_{i}*v*}

_{n}*V*(*t*) = 1 + *V*_{0}(*t*) + *V*_{1}(*t*),

where

Let {θ_{j}, *j* = 1, ..., *R*} be the distinct non-zero values of {ν_{i}, *i* = 1, ..., *r*}. Set

*n*_{0} = max{*m _{i}* : ν_{i} = 0}, *J* = *M* + 1 + *n*_{0}, *M* = Σ_{j} *M _{j}*, *M _{j}* = max{*m _{i}* : ν_{i} = θ_{j}}.

_{j}Then *V*_{0}(*t*) is a polynomial of degree *n*_{0}, *N _{j}*(

*t*) is a polynomial of degree

*M*(

_{j}, D*t*) is a polynomial of degree

*M*, and

where

say, *Q _{j}*(

*t*) is a polynomial of degree

*M - M*, and

_{j}*N*(

*t*) is a polynomial of degree

*M*. So,

say, is a polynomial of degree *J*, giving

Expanding in partial fractions (see Section 2.10 of Gradshteyn and Ryzhik (2007)) and taking the coefficient of *t ^{n}* gives

*u*

_{n}_{-1}as a weighted sum

*n*th powers of {

*t*} with the weights polynomials in

_{k}*n*. If {

*t*} are all distinct, then the partial fraction expansion has the form

_{k}so that (4.2) follows. If {*t _{k}*} are not distinct, then proceed as for the case given in Section 3.

□

Suppose that *m*_{1} = 2, ν_{1} ≠ 0, (*m _{i}*, ν_{i}) = (1, 0) for *i* > 1. Then *R* = 1, θ_{1} = ν_{1}, *M* = *M*_{1} = *m*_{1} = 2, *J* = 4, *n*_{0} = *m*_{2} = 1 and

*v _{n}* = *w*_{10}ν_{1}^{n-1} + (*n* − 1)*w*_{11}ν_{1}^{n-2}

for *n* ≥ 2. So, setting *s* = ν_{1}*t* and *c*_{0} = Σ_{i} *w*_{i0},

*V*_{0}(*t*) = *c*_{0}*t*, *V*_{1}(*t*) = *w*_{10}*t*(1 − *s*)^{-1} + *w*_{11}*t*^{2}(1 − *s*)^{-2},

*V*(*t*) = *v*_{0} + *c*_{0}*t* + *N*_{1}(*t*)(1 − *s*)^{-2},

*N*(*t*) = *N*_{1}(*t*) = *w*_{10}*t*(1 − *s*) + *w*_{11}*t*^{2}, *Q*_{1}(*t*) = 1,

*D*(*t*)(1 − *tV*(*t*)) = (1 − *s*)^{2}(1 − *tv*_{0} − *t*^{2}*c*_{0}) − *tN*(*t*) = *L*(*t*) = Π_{i=1}^{4}(1 − *t _{i}t*).

So, if {*t _{i}*} are distinct, then the partial fraction expansion of *D*/*L* gives (4.2) with *J* = 4. For analytic expressions for the roots of a quartic, see Section 3.8.3, page 17 of Abramowitz and Stegun (1964).
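To illustrate the mechanism of Theorem 4.1 (our own illustrative example, not the paper's exact (4.1)): take *v _{n}* = *a*ν^{n-1} + (*n* − 1)*b*ν^{n-2} for *n* ≥ 1, a power of ν with weight linear in *n*. Exact arithmetic shows that *D*(*t*)(1 − *tV*(*t*)) with *D* = (1 − ν*t*)^{2} truncates to a cubic polynomial *L*(*t*), so *u _{n}* inherits a fixed order-3 linear recurrence from *L*:

```python
from fractions import Fraction

# Polynomial-weight data (our illustrative choice of constants):
nu, a, b, v0 = Fraction(2), Fraction(3), Fraction(5), Fraction(1)
N = 14
v = [v0] + [a * nu ** (n - 1) + (n - 1) * b * nu ** (n - 2)
            for n in range(1, N + 1)]
u = [v[0]]                             # iterate (1.1) exactly
for n in range(N):
    u.append(v[n + 1] + sum(u[k] * v[n - k] for k in range(n + 1)))

# L(t) = D(t)(1 - t V(t)) with D(t) = (1 - nu t)^2: compute its series
# coefficients and check the tail vanishes (L is a cubic polynomial).
D = [Fraction(1), -2 * nu, nu ** 2]
one_minus_tV = [Fraction(1)] + [-v[n] for n in range(N)]
L = [sum(D[k] * one_minus_tV[m - k] for k in range(3) if m - k >= 0)
     for m in range(N + 1)]
assert all(L[m] == 0 for m in range(4, N + 1))

# Since (1 + tU(t)) L(t) = D(t) and deg D = 2, u_n obeys an order-3 recurrence.
assert all(u[n] == -(L[1] * u[n - 1] + L[2] * u[n - 2] + L[3] * u[n - 3])
           for n in range(3, N + 1))
```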

**5 Conclusions**

We have solved the recurrence relation *u*_{n+1} = *v*_{n+1} + *u _{n}* ⊗ *v _{n}*, *n* ≥ 0, for *u _{n}*, given the sequence *v _{n}*. The solution for *u _{n}* is in terms of Bell polynomials. This form is convenient since in-built routines for computing Bell polynomials are widely available. So, the solutions can be directly applied to derive the distribution of the maximum for autoregressive processes.

We have also established the behavior of *u _{n}* for large *n* by assuming some form for the behavior of *v _{n}* for large *n*. The assumed forms are (3.1), a weighted sum of powers, and (4.1). As shown in Withers and Nadarajah (2010), one of these assumptions always holds. So, the assumptions are not at all restrictive.

The work presented can be extended in several ways: 1) consider solving *u*_{n+1} = *v*_{n+1} + ω_{n+1} + *u _{n}* ⊗ *v _{n}* + *u _{n}* ⊗ ω_{n}, *n* ≥ 0, for *u _{n}*, given the sequences *v _{n}* and ω_{n}; 2) consider solving *u*_{n+1} = *v*_{n+1} + ω_{n+1} + µ_{n+1} + *u _{n}* ⊗ *v _{n}* + *u _{n}* ⊗ ω_{n} + *u _{n}* ⊗ µ_{n}, *n* ≥ 0, for *u _{n}*, given the sequences *v _{n}*, ω_{n} and µ_{n}; and 3) consider solving multivariate forms of (1.1) taking the form

**u**_{n+1} = **v**_{n+1} + **u**_{n} ⊗ **v**_{n},

where the equality and the convolution operator are interpreted element-wise. We hope to address some of these problems in a future paper.

**Acknowledgments. ** The authors would like to thank the Editor, the Associate Editor and the referee for careful reading and for their comments which greatly improved the paper.

**REFERENCES**

[1] M. Abramowitz and I.A. Stegun, *Handbook of Mathematical Functions*. U.S. Department of Commerce, National Bureau of Standards, Applied Mathematics Series, **55** (1964).

[2] M. Borkovec, *Extremal behavior of the autoregressive process with ARCH(1) errors.* Stochastic Processes and Their Applications, **85** (2000), 189-207.

[3] M.R. Chernick and R.A. Davis, *Extremes in autoregressive processes with uniform marginal distributions.* Statistics and Probability Letters, **1** (1982), 85-88.

[4] L. Comtet, *Advanced Combinatorics*. Reidel, Dordrecht (1974).

[5] P. Elek and A. Zempléni, *Tail behaviour and extremes of two-state Markov-switching autoregressive models.* Computers and Mathematics with Applications, **55** (2008), 2839-2855.

[6] I.S. Gradshteyn and I.M. Ryzhik, *Table of Integrals, Series, and Products*, seventh edition. Academic Press, New York (2007).

[7] W.P. McCormick and G. Mathew, *Asymptotic results for an extreme value estimator of the autocorrelation coefficient for a first order autoregressive sequence.* In: Extreme Value Theory (Oberwolfach, 1987), pp. 166-180, Lecture Notes in Statistics, **51** (1989), Springer, New York.

[8] W.P. McCormick and Y.S. Park, *Asymptotic analysis of extremes from autoregressive negative binomial processes.* Journal of Applied Probability, **29** (1992), 904-920.

[9] K.A. Ol'shanskii, *On the extremal index of a thinned autoregression process.* (Russian) Vestnik Moskovskogo Universiteta. Seriya I. Matematika, Mekhanika, **70** (2004), 17-23. Translation in Moscow University Mathematics Bulletin, **59** (2004), 18-24.

[10] C.S. Withers and S. Nadarajah, *The distribution of the maximum of a first order autoregressive process: the continuous case.* Metrika, doi: 10.1007/s00184-010-0301-0. (2010).

Received: 21/VIII/10.

Accepted: 02/V/11.

#CAM-249/10.

* Corresponding author.