# ABSTRACT

The main goal of this manuscript is to introduce a discrete dynamical system defined by symmetric matrices and a real parameter. By construction, we rediscover the Power Iteration Method from the Projected Gradient Method. Convergence of the discrete dynamical system solution is established. Finally, we consider two applications: the first consists in finding a solution of a nonlinear equation problem, and the other consists in verifying the optimality conditions when we solve quadratic optimization problems over linear equality constraints.

Keywords:
Projected gradient type-method; Power Iteration Method; Symmetric matrix; Discrete Dynamical System

# 1 INTRODUCTION

Discrete dynamical systems appear as a tool for understanding differential equations from a numerical point of view (for more details, see Galob (2007) and chapter 6 in Loneli & Rumbos (2003)). The classical model, in a finite dimensional space, is as follows:

${x}_{k+1}=F\left({x}_{k}\right)$ (1)

where $F:\Omega \subset {\mathrm{ℝ}}^{n}\to {\mathrm{ℝ}}^{n}$ is an operator and Ω is a subset of the domain of F (the domain of F is denoted by $dom\left(F\right)$). According to the literature, equation (1) is not exclusive to differential equations; for example, it appears in finding fixed points of contractive operators (recall that F is contractive if $||F\left(x\right)-||F\left(y\right)||\le \lambda ||x-y||$ for all $x,y\in dom\left(F\right)$, with $\lambda \in \left(0,1\right)$). For details about contractive operators, see classical books on functional analysis, general topology, or fixed point theorems, for instance Brezis (1983), Istrǎţescu (1981), or Kelley (1955). Another example is the autoregressive model (for more details see Shumway & Stoffer (2017)).
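As a concrete illustration of the iteration (1), the following Python sketch (our own example, not from the text) iterates a contractive affine map with Lipschitz constant 0.5 and converges to its unique fixed point:

```python
import numpy as np

# Hypothetical contractive map on R^2: F(x) = 0.5*x + c has Lipschitz
# constant 0.5 < 1, so the iteration x_{k+1} = F(x_k) converges to the
# unique fixed point for any starting x_0 (Banach fixed point theorem).
c = np.array([1.0, 2.0])

def F(x):
    return 0.5 * x + c

x = np.zeros(2)
for _ in range(60):           # x_{k+1} = F(x_k)
    x = F(x)

# The fixed point solves x = 0.5*x + c, i.e. x = 2c.
print(np.allclose(x, 2 * c))  # → True
```

After 60 iterations the distance to the fixed point is bounded by $0.5^{60}$ times the initial distance, far below the default tolerance.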

Given a symmetric matrix A and a real number λ such that $-{\lambda }^{-1}\notin \sigma \left(A\right)$, we consider the following operator ${T}_{\lambda }:S\to S$ defined by

${T}_{\lambda }\left(x\right)=\frac{\left(I+\lambda A\right)x}{||\left(I+\lambda A\right)x||}$ (2)

where I is the identity matrix, $S=\left\{x\in {\mathrm{ℝ}}^{n}:||x||=1\right\}$ and $\sigma \left(A\right)$ is the eigenvalue set of the matrix A. Note that, in this setting, $I+\lambda A$ is a nonsingular matrix, so the operator is well defined.

The focus of this manuscript is the operator defined by equation (2), which is very interesting because:

1. when either $\lambda =0$ or A is the null matrix, the operator is the identity. Here, every $x\in S$ is a fixed point of ${T}_{\lambda }$.

2. when $B=I+\lambda A$ has a dominant eigenvalue (i.e. there exists an eigenvalue α* such that $\left|\alpha *\right|>\left|\alpha \right|$ for every eigenvalue $\alpha \ne \alpha *$), the operator ${T}_{\lambda }$ was used in the famous Power Iteration Method introduced by R. von Mises and H. Pollaczek-Geiringer in 1929 (see Mises & Pollaczek-Geiringer (1929)).

3. when ${\left|\lambda \right|}^{-1}\in \left(na,+\infty \right)$, where $a=max\left\{\left|{a}_{i,j}\right|:A=\left[{a}_{i,j}\right]\right\}$ and n is the size of A, the matrix $B=I+\lambda A$ defines a strongly monotone operator. Moreover, each eigenvector of A belonging to S is a fixed point of ${T}_{\lambda }$ (we prove it in Section 2).
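A minimal numerical sketch of the operator in (2), checking the fixed-point property proved in Section 2; the matrix A and the value of λ below are illustrative choices of ours:

```python
import numpy as np

# Sketch of T_lambda from (2): normalize (I + lam*A)x, and check that a
# normalized eigenvector of A is a fixed point (Theorem 1 of the paper).
def T(x, A, lam):
    y = x + lam * (A @ x)         # (I + lam*A) x
    return y / np.linalg.norm(y)

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, sigma(A) = {1, 3}
lam = 0.1                                 # -1/lam = -10 is not in sigma(A)
w, U = np.linalg.eigh(A)
v = U[:, 0]                               # unit eigenvector of A (beta = 1)

# T(v) = (1 + lam*beta) v / |1 + lam*beta| = v, since 1 + 0.1*1 > 0.
print(np.allclose(T(v, A, lam), v))  # → True
```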

## 1.1 The Power Iteration Method

In order to understand the Power Iteration Method, consider the function called the "Rayleigh quotient", which is defined, for each $x\ne 0$, as follows

$r\left(x\right)=\frac{{x}^{T}Bx}{{x}^{T}x}$ (3)

If x is an eigenvector of B, then $Bx=r\left(x\right)x$ (i.e. r(x) is the eigenvalue corresponding to x). Suppose that ${\left\{{v}_{i}\right\}}_{i=1}^{n}$ is a set of eigenvectors of B which is a basis of ${\mathrm{ℝ}}^{n}$, with $B{v}_{i}={\lambda }_{i}{v}_{i}$ and $\left|{\lambda }_{1}\right|>\left|{\lambda }_{2}\right|\ge ···\ge \left|{\lambda }_{n}\right|$. Taking a vector ${v}_{0}$ with $||{v}_{0}||=1$, we have that ${v}_{0}=\sum _{i=1}^{n}{\alpha }_{i}{v}_{i}$.

Then,

$B{v}_{0}={\alpha }_{1}{\lambda }_{1}{v}_{1}+{\alpha }_{2}{\lambda }_{2}{v}_{2}+···+{\alpha }_{n}{\lambda }_{n}{v}_{n},$

and so

${B}^{k}{v}_{0}={\lambda }_{1}^{k}\left({\alpha }_{1}{v}_{1}+{\alpha }_{2}{\left(\frac{{\lambda }_{2}}{{\lambda }_{1}}\right)}^{k}{v}_{2}+···+{\alpha }_{n}{\left(\frac{{\lambda }_{n}}{{\lambda }_{1}}\right)}^{k}{v}_{n}\right).$

Here, if ${\alpha }_{1}\ne 0$, then $\frac{{B}^{k}{v}_{0}}{||{B}^{k}{v}_{0}||}$ converges to the direction of ${v}_{1}$, because ${\mathrm{lim}}_{k\to +\infty }{\left(\frac{{\lambda }_{i}}{{\lambda }_{1}}\right)}^{k}=0$ for all $i\ge 2$. The Power Iteration Method is elegant and simple, and can be stated as follows

$\left(PIM\right)\left\{\begin{array}{l}\text{pick a starting vector }{x}_{0}\text{ with }||{x}_{0}||=1\\ \text{for }k=1,2,···\\ \text{let }{x}_{k}={T}_{\lambda }\left({x}_{k-1}\right)\\ \text{where }A=\frac{1}{\lambda }\left(B-I\right)\end{array}\right.$

but, convergence is only guaranteed if the following two assumptions hold:

1. The nonsingular matrix B has an eigenvalue that is strictly greater in absolute value than all its other eigenvalues.

2. The starting vector ${x}_{0}$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue.

The reader can verify that for $B=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]$ (a nonsingular symmetric matrix), the sequences generated by the Power Iteration Method diverge for any starting point x different from an eigenvector of B, because B does not have a dominant eigenvalue; however, $\overline{B}=I+\left(1/3\right)B$ is positive definite (all its eigenvalues are strictly positive) and it has a dominant eigenvalue.
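This example can be checked numerically; the sketch below (our illustration) runs the normalized iteration with $\overline{B}=I+(1/3)B$ and recovers the dominant eigenpair:

```python
import numpy as np

# B = [[0,1],[1,0]] has eigenvalues {-1, 1}: no dominant one, so plain
# power iteration oscillates. Bbar = I + (1/3)B has eigenvalues
# {2/3, 4/3}, and 4/3 is dominant, so the iteration converges.
B = np.array([[0.0, 1.0], [1.0, 0.0]])
Bbar = np.eye(2) + (1.0 / 3.0) * B

x = np.array([1.0, 0.0])       # starting point, not an eigenvector of B
for _ in range(100):
    x = Bbar @ x
    x /= np.linalg.norm(x)

# Dominant eigenvector of Bbar (and of B) is (1,1)/sqrt(2), eigenvalue 4/3.
v = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(np.allclose(x, v), np.isclose(x @ Bbar @ x, 4.0 / 3.0))
```

The error contracts by the ratio $(2/3)/(4/3)=1/2$ per iteration, so 100 iterations are far more than enough.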

## 1.2 The Projected Gradient Method

The Projected Gradient Method was introduced by Goldstein (for more details see Goldstein (1964)) for solving the following differentiable optimization problem

$\left(P\right)\left\{\begin{array}{l}\text{maximize }f\left(x\right)\\ x\in C\end{array}\right.$ (4)

where $f:C\to \mathrm{ℝ}$ is differentiable in each point of a nonempty closed subset C of ${\mathrm{ℝ}}^{n}$.

The essence of the Projected Gradient method is based on two facts:

1. First order necessary optimality condition: If $\overline{x}$ is a solution of (P), then $\nabla f{\left(\overline{x}\right)}^{T}d\le 0$ for all $d\in T\left(C,\overline{x}\right)$. (Here $T\left(C,\overline{x}\right)$ is the tangent cone of C at $\overline{x}$; for more details see Crouzeix et al. (2011).)

2. Orthogonal projection: If u is the orthogonal projection of v onto C, then $||u-v||\le ||y-v||,\forall y\in C$ (in short, $u:={P}_{C}\left(v\right)$).

Now, given a symmetric matrix A, we know that all eigenvalues of A are real numbers and we can take n eigenvectors of A as a basis of ${\mathrm{ℝ}}^{n}$. Moreover, consider problem (P) with $f:{\mathrm{ℝ}}^{n}\to \mathrm{ℝ}$ defined by $f\left(x\right)={x}^{T}Ax$ and $C=\left\{x\in F:||x||=1\right\}$, where F is a subspace of ${\mathrm{ℝ}}^{n}$ generated by eigenvectors of the matrix A. The optimal value is an eigenvalue of A and any maximizer is a normalized eigenvector associated to the optimal value.

Here, if $\overline{x}\in C$, then $T\left(C,\overline{x}\right)$ is a hyperplane in F defined by the normal vector $\overline{x}\ne 0$ and containing the vector $\overline{x}$ (from now on $H\left(\overline{x},1\right):={T}_{C}\left(\overline{x}\right)$). So, the necessary condition reduces to $\nabla f{\left(\overline{x}\right)}^{T}\left(y-\overline{x}\right)=0$ for all $y\in H\left(\overline{x},1\right)$, which is equivalent to $\overline{x}={P}_{H\left(\overline{x},1\right)}\left(\overline{x}+\lambda \nabla f\left(\overline{x}\right)\right)$ for each fixed $\lambda \ne 0$.

So,

${x}_{k+1}=\frac{{P}_{H\left({x}_{k+1},1\right)}\left({x}_{k}+\lambda A{x}_{k}\right)}{||{P}_{H\left({x}_{k+1},1\right)}\left({x}_{k}+\lambda A{x}_{k}\right)||}\phantom{\rule{0ex}{0ex}}=\frac{{x}_{k}+\lambda A{x}_{k}}{||{x}_{k}+\lambda A{x}_{k}||}\phantom{\rule{0ex}{0ex}}={T}_{\lambda }\left({x}_{k}\right)$

If we define $B=I+\lambda A$, then both matrices A and B have the same eigenvectors. Moreover, δ is an eigenvalue of A with associated eigenvector u ($Au=\delta u$) if and only if $\delta u=Au={\lambda }^{-1}\left(B-I\right)u$, if and only if $Bu=\left(1+\delta \lambda \right)u$, if and only if $\left(1+\delta \lambda \right)$ is an eigenvalue of B with associated eigenvector u. In the next section we introduce an easy result which establishes that, for each symmetric matrix A and each $\lambda \ne 0$ such that $1+\lambda \delta >0$ for every eigenvalue δ of A, the matrix $B=I+\lambda A$ is Symmetric Positive Definite (SPD).

In Section 2, we introduce a discrete dynamical system defined by a symmetric matrix and a real parameter λ. We show that, under some conditions on the parameter λ, any sequence generated by the discrete dynamical system converges to a fixed point of the operator which defines it. Moreover, there is an equivalence between the fixed points of the operator and the normalized eigenvectors of the symmetric matrix.

In Section 3 we consider two applications: the first consists in finding a solution of a nonlinear equation problem, and the second consists in verifying the optimality conditions when we solve quadratic optimization problems over linear equality constraints.

# 2 A DISCRETE DYNAMICAL SYSTEM

We start this section with two elementary results.

Lemma 1. Let A be a non null symmetric matrix of size n and $a=\mathrm{max}\left\{\left|{a}_{ij}\right|:A=\left[{a}_{ij}\right]\right\}$. If $\mathrm{\lambda }\in \mathrm{ℝ}$ with ${\left|\mathrm{\lambda }\right|}^{-1}>na$, then $\left(1+\lambda \beta \right)>0$ for all $\beta \in \sigma \left(A\right)$.

Proof. Take $\beta \in \sigma \left(A\right)$ and consider an associated eigenvector x with $||x||=1$, so that $\left|\beta \right|=|{x}^{T}Ax|$. Then,

$\left|\beta \right|=\left|\sum _{i,j}{a}_{i,j}{x}_{i}{x}_{j}\right|\le \sum _{i,j}\left|{a}_{i,j}\right|\left|{x}_{i}\right|\left|{x}_{j}\right|$

So, we have that

$\left|\beta \right|\le a\sum _{i,j}\left|{x}_{i}\right|\left|{x}_{j}\right|=a{\left(\sum _{i}\left|{x}_{i}\right|\right)}^{2}$

But $\mathrm{max}\left\{\sum _{i}\left|{x}_{i}\right|:\sum _{i}{\left|{x}_{i}\right|}^{2}=1\right\}={n}^{1/2}$ (this follows directly from the optimality conditions). It implies that $\left|\beta \right|\le na<{\left|\lambda \right|}^{-1}$, and so the statement follows. □

Note that the eigenvalue set σ(A) of A exists, but its elements are not explicitly known in the previous lemma.
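A quick numerical sanity check of Lemma 1 (our illustration, with a random symmetric matrix):

```python
import numpy as np

# Check of Lemma 1: for a symmetric A of size n with entries bounded in
# absolute value by a, every eigenvalue beta satisfies |beta| <= n*a,
# hence 1 + lam*beta > 0 for lam = 1/(n*a + 1).
rng = np.random.default_rng(0)
n = 6
M = rng.uniform(-1.0, 1.0, (n, n))
A = (M + M.T) / 2.0                       # make it symmetric
a = np.max(np.abs(A))
lam = 1.0 / (n * a + 1.0)

betas = np.linalg.eigvalsh(A)
print(np.all(np.abs(betas) <= n * a), np.all(1.0 + lam * betas > 0.0))
```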

Lemma 2. If A is a symmetric matrix and $\overline{\beta }=\mathrm{max}\left\{\left|\beta \right|:\beta \in \sigma \left(A\right)\right\}$, then for all $\mathrm{\lambda }\in \mathrm{ℝ}$ with ${\left|\mathrm{\lambda }\right|}^{-1}>\overline{\beta }$, the matrix $B=I+\lambda A$ is an SPD matrix. Moreover, for each $\delta \in \sigma \left(B\right)$ with eigenvector u we have that $Bu=\delta u$ and $Au={\lambda }^{-1}\left(\delta -1\right)u$ (i.e. $\sigma \left(A\right)={\lambda }^{-1}\left(\sigma \left(B\right)-1\right)$).

Proof. If $\beta \in \sigma \left(A\right)$ with associated eigenvector x, then

$Bx=\left(I+\lambda A\right)x=x+\lambda Ax=\left(1+\lambda \beta \right)x$

But, ${\left|\lambda \right|}^{-1}>\overline{\beta }\ge \left|\beta \right|$. So, $1+\lambda \beta >0\forall \beta \in \sigma \left(A\right)$. Then, the statement follows. □

From now on, for each non null symmetric matrix A, define the following operator ${T}_{\lambda }:S\to S$ (with $S=\left\{x\in {\mathrm{ℝ}}^{n}:||x||=1\right\}$) by

${T}_{\lambda }\left(x\right)=\frac{\left(I+\lambda A\right)x}{||\left(I+\lambda A\right)x||}=\frac{Bx}{||Bx||}$ (5)

where $\lambda \in \mathrm{ℝ}\setminus \left\{0\right\}$ is such that $-{\lambda }^{-1}\notin \sigma \left(A\right)$ and $B=I+\lambda A$.

Theorem 1. Let A be a non null symmetric matrix and $\lambda \in \mathrm{ℝ}$ such that $-{\lambda }^{-1}\notin \sigma \left(A\right)$. A vector x* is a fixed point of ${T}_{\lambda }$ if and only if there exists $\delta \in \sigma \left(A\right)$ such that $Ax*=\delta x*$ and $x*\in S$.

Proof. If x* is a fixed point of ${T}_{\lambda }$, then $x*={T}_{\lambda }\left(x*\right)=\frac{Bx*}{||Bx*||}$. So, $||Bx*||\ne 0$ and $||x*||=1$. Let $\sigma \left(B\right)=\left\{{\lambda }_{1},···,{\lambda }_{n}\right\}$ and let $\left\{{u}_{1},···,{u}_{n}\right\}\subset S$ be an eigenvector set of B such that $B{u}_{i}={\lambda }_{i}{u}_{i}$ for all $i\in \left\{1,...,n\right\}$. Here, $\left\{{u}_{1},···,{u}_{n}\right\}$ is a basis of ${\mathrm{ℝ}}^{n}$, so $x*=\sum _{i=1}^{n}{\alpha }_{i}{u}_{i}=\sum _{i\in I}{\alpha }_{i}{u}_{i}$, where $I=\left\{i\in \left\{1,···,n\right\}:{\alpha }_{i}\ne 0\right\}$. Note that $I\ne \varnothing$, because $||x*||\ne 0$. On the other hand, $\sum _{i=1}^{n}{\alpha }_{i}{u}_{i}=x*={T}_{\lambda }\left(x*\right)=\sum _{i=1}^{n}\frac{{\alpha }_{i}{\lambda }_{i}}{||Bx*||}{u}_{i}$. Since $\left\{{u}_{1},···,{u}_{n}\right\}$ is a basis of ${\mathrm{ℝ}}^{n}$, we get ${\alpha }_{i}=\frac{{\alpha }_{i}{\lambda }_{i}}{||Bx*||}$ for all $i\in \left\{1,···,n\right\}$. It implies that, for all $i\in I$, $1=\frac{{\lambda }_{i}}{||Bx*||}$. So, $||Bx*||={\lambda }_{i}>0$ for all $i\in I$. Finally,

$Bx*=B\left(\sum _{i\in I}{\alpha }_{i}{u}_{i}\right)=\sum _{i\in I}{\alpha }_{i}B{u}_{i}=\sum _{i\in I}{\alpha }_{i}{\lambda }_{i}{u}_{i}=||Bx*||\sum _{i\in I}{\alpha }_{i}{u}_{i}=||Bx*||x*$

The statement follows from Lemma 2, taking $\delta ={\lambda }^{-1}\left(||Bx*||-1\right)$.

If there exists $\delta \in \sigma \left(A\right)$ such that $Ax*=\delta x*$ and $x*\in S$, then $||Bx*||=1+\lambda \delta$. The statement follows because ${T}_{\lambda }\left(x*\right)=\frac{Bx*}{||Bx*||}=\frac{\left(1+\lambda \delta \right)x*}{1+\lambda \delta }=x*$. □

Now, we are able to state the discrete dynamical system, for each non null symmetric matrix A, with $\lambda =\frac{1}{na+1}$, where $n=size\left(A\right)$ and $a=\mathrm{max}\left\{\left|{a}_{i,j}\right|:A=\left[{a}_{i,j}\right]\right\}$. Note that ${\lambda }^{-1}=na+1>na$, so Lemma 1 applies.

Initial step Given a non null symmetric matrix A.

$n=size\left(A\right).$

$a=\mathrm{max}\left\{\left|{a}_{i,j}\right|:A=\left[{a}_{i,j}\right]\right\}.$

$\lambda =\frac{1}{na+1}.$

$k=0.$

${x}_{k}\in S.$

Iterative step Calculate:

${x}_{k+1}={T}_{\lambda }\left({x}_{k}\right)=\frac{\left(I+\lambda A\right){x}_{k}}{||\left(I+\lambda A\right){x}_{k}||}=\frac{B{x}_{k}}{||B{x}_{k}||}=\frac{{B}^{k}{x}_{0}}{||{B}^{k}{x}_{0}||}\phantom{\rule{0ex}{0ex}}k=k+1.$
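The scheme above can be sketched in a few lines of Python (a minimal illustration of ours, not the author's SciLab code):

```python
import numpy as np

# Sketch of the scheme: lam = 1/(n*a + 1) guarantees (Lemma 1) that
# B = I + lam*A is SPD, and Theorem 2 guarantees convergence of the
# normalized iteration to an eigenvector of A for any x0 in S.
def eigenvector_scheme(A, x0, iters=500):
    n = A.shape[0]
    a = np.max(np.abs(A))
    lam = 1.0 / (n * a + 1.0)
    B = np.eye(n) + lam * A
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = B @ x
        x /= np.linalg.norm(x)
    return x, x @ A @ x       # eigenvector estimate and Rayleigh quotient

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # eigenvalues {-1, 1}
x, beta = eigenvector_scheme(A, np.array([1.0, 0.0]))
print(np.allclose(A @ x, beta * x))       # → True
```

Note that this A is exactly the matrix for which plain power iteration fails in Section 1.1, while the shifted scheme converges.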

The following result establishes that the sequence generated by the discrete dynamical system (the solution of the discrete dynamical system) is asymptotically stable for any starting point ${x}_{0}\in S$.

Theorem 2. Let A be a non null symmetric matrix. For each ${x}_{0}\in S$, the sequence $\left\{{x}_{k}\right\}$ generated by the discrete scheme converges to an eigenvector of A belonging to S, and the sequence $\left\{{x}_{k}^{T}A{x}_{k}\right\}$ converges to its respective eigenvalue.

Proof. From Lemma 1, B is an SPD matrix. It implies that $B{x}_{k}\ne 0$ for all $k\in \mathrm{ℕ}$, and so ${x}_{k+1}=\frac{B{x}_{k}}{||B{x}_{k}||}$ is well defined for all $k\in \mathrm{ℕ}$. Let $\left\{{\lambda }_{1},···,{\lambda }_{n}\right\}$ be the eigenvalue set of B and $\left\{{u}_{1},···,{u}_{n}\right\}\subset S$ a respective eigenvector set. Without loss of generality consider $0<{\lambda }_{i}\le {\lambda }_{i+1}$ for all $i\in \left\{1,···,n-1\right\}$. Since $\left\{{u}_{1},···,{u}_{n}\right\}$ is an orthonormal basis of ${\mathrm{ℝ}}^{n}$, we can write ${x}_{0}=\sum _{i=1}^{n}{\epsilon }_{i}{u}_{i}$. Here, $||{B}^{k}{x}_{0}||=||\sum _{i=1}^{n}{\epsilon }_{i}{\lambda }_{i}^{k}{u}_{i}||={\left(\sum _{i\in I}{\epsilon }_{i}^{2}{\lambda }_{i}^{2k}\right)}^{1/2}$, where $I=\left\{i\in \left\{1,···,n\right\}:{\epsilon }_{i}\ne 0\right\}$. Taking $j=\mathrm{max}\left\{i:i\in I\right\}$ and $I\left(j\right)=\left\{l\in I:{\lambda }_{l}={\lambda }_{j}\right\}$, we obtain ${x}_{k+1}=\sum _{i\in I}\frac{{\epsilon }_{i}{\lambda }_{i}^{k}}{{\left(\sum _{l\in I}{\epsilon }_{l}^{2}{\lambda }_{l}^{2k}\right)}^{1/2}}{u}_{i}=\sum _{i\in I}\frac{{\epsilon }_{i}{\left(\frac{{\lambda }_{i}}{{\lambda }_{j}}\right)}^{k}}{{\left(\sum _{l\in I}{\epsilon }_{l}^{2}{\left(\frac{{\lambda }_{l}}{{\lambda }_{j}}\right)}^{2k}\right)}^{1/2}}{u}_{i}$. Note that for any $i\in I\setminus I\left(j\right)$ we have $0<\frac{{\lambda }_{i}}{{\lambda }_{j}}<1$. It implies that the sequence $\left\{{x}_{k}\right\}$ converges to $\sum _{i\in I\left(j\right)}\frac{{\epsilon }_{i}}{{\left(\sum _{l\in I\left(j\right)}{\epsilon }_{l}^{2}\right)}^{1/2}}{u}_{i}$. It is easy to verify that this limit is a normalized eigenvector of B associated to the eigenvalue ${\lambda }_{j}$. Since A and B have the same eigenvector set, the statement follows. □

# 3 APPLICATIONS

In this section we consider two applications.

## 3.1 The Non Linear Equation Problem

This problem consists in finding a point satisfying a nonlinear equation defined by a function $f:{\mathrm{ℝ}}^{n}\to \mathrm{ℝ}$ (here, f is twice differentiable on ${\mathrm{ℝ}}^{n}$) and $\lambda \in \mathrm{ℝ}$. The mathematical model is:

$\left(NLEP\right)\left\{\text{Find }x\text{ such that }f\left(x\right)=\lambda \right.$ (6)

Take $x\in {\mathrm{ℝ}}^{n}$; the Taylor representation of f around x is

$f\left(y\right)=f\left(x\right)+\nabla f{\left(x\right)}^{T}\left(y-x\right)+\frac{1}{2}{\left(y-x\right)}^{T}{\nabla }^{2}f\left(x\right)\left(y-x\right)+o\left(||y-x|{|}^{2}\right)$

Taking $y=x+td$ with $||d||=1$, we have that

$f\left(x+td\right)=f\left(x\right)+t\nabla f{\left(x\right)}^{T}d+\frac{{t}^{2}}{2}{d}^{T}{\nabla }^{2}f\left(x\right)d+o\left({t}^{2}\right)$

Using this approach, the problem consists in finding a direction d such that the function $h:\mathrm{ℝ}\to \mathrm{ℝ}$ defined by $h\left(t\right)=f\left(x+td\right)$ attains the value λ for at least one real t.

Definition 1. Let $f:{\mathrm{ℝ}}^{n}\to \mathrm{ℝ}$ be a twice differentiable function and let $\left(x,\lambda \right)\in {\mathrm{ℝ}}^{n+1}$. A vector $d\in S$ is called a feasible direction for the problem (NLEP) if the equation $h\left(t\right)=f\left(x+td\right)=\lambda$ has at least one real root.

We point out the case where f is a linear function, i.e. $f\left(x\right)={a}^{T}x$ for some $a\in {\mathrm{ℝ}}^{n}\setminus \left\{0\right\}$. In this case, $h\left(t\right)={a}^{T}x+t{a}^{T}d$. Note that $d=\frac{a}{||a||}$ is a feasible direction. Moreover, $x+\overline{t}d$ is the orthogonal projection of x over the hyperplane $H=\left\{y\in {\mathrm{ℝ}}^{n}:{a}^{T}y=\lambda \right\}$, where $\overline{t}=\frac{\lambda -{a}^{T}x}{||a||}$.

Lemma 3. Let $f:{\mathrm{ℝ}}^{n}\to \mathrm{ℝ}$ be a quadratic function and $\left(x,\lambda \right)\in {\mathrm{ℝ}}^{n+1}$. The following statements hold:

1. The matrix $D=\nabla f\left(x\right)\nabla f{\left(x\right)}^{T}+2\left(\lambda -f\left(x\right)\right){\nabla }^{2}f\left(x\right)$ is symmetric.

2. If $\sigma \left(D\right)\subset \left(-\infty ,0\right)$ , then the problem (NLEP) has no solution.

3. If $\sigma \left(D\right)\subset \left(0,+\infty \right)$, then the problem (NLEP) has at least one solution. Moreover, any eigenvector associated to a positive eigenvalue is a feasible direction.

4. If d is an eigenvector associated to a null eigenvalue and ${d}^{T}{\nabla }^{2}f\left(x\right)d=0$, then d is not a feasible direction when $f\left(x\right)\ne \lambda$.

5. If d is an eigenvector associated to a null eigenvalue and ${d}^{T}{\nabla }^{2}f\left(x\right)d\ne 0$, then d is a feasible direction.

Proof. If f is a quadratic function, then

$f\left(x+td\right)=f\left(x\right)+t\nabla f{\left(x\right)}^{T}d+\frac{{t}^{2}}{2}{d}^{T}{\nabla }^{2}f\left(x\right)d$

So, the equation $f\left(x+td\right)=\lambda$ has a real solution in t if the discriminant

${\left(\nabla f{\left(x\right)}^{T}d\right)}^{2}+2\left(\lambda -f\left(x\right)\right){d}^{T}{\nabla }^{2}f\left(x\right)d={d}^{T}Dd$

is nonnegative. All items follow because the discriminant needs to be nonnegative in order to find real roots of the quadratic equation. □
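A small numerical illustration of Lemma 3, using a hypothetical quadratic $f(x)=x^{T}x$ (so the gradient is $2x$ and the Hessian is $2I$): the discriminant of $f(x+td)=\lambda$ equals $d^{T}Dd$, and a nonnegative value yields a real root.

```python
import numpy as np

# Illustrative quadratic f(x) = x^T x; solve f(x + t d) = lam_target.
x = np.array([1.0, 0.0])
lam_target = 4.0                          # f(x) = 1, target value 4
grad = 2.0 * x                            # gradient of f at x
hess = 2.0 * np.eye(2)                    # Hessian of f
fx = x @ x

# The matrix D of Lemma 3.
D = np.outer(grad, grad) + 2.0 * (lam_target - fx) * hess

d = np.array([0.0, 1.0])                  # candidate direction, ||d|| = 1
disc = d @ D @ d                          # discriminant of a*t^2 + b*t + c
a_, b_, c_ = 0.5 * d @ hess @ d, grad @ d, fx - lam_target
t = (-b_ + np.sqrt(disc)) / (2.0 * a_)    # a real root, since disc >= 0
print(np.isclose((x + t * d) @ (x + t * d), lam_target))  # → True
```

Here $d^{T}Dd=12=b^{2}-4ac$, so d is a feasible direction and $t=\sqrt{3}$ solves the equation.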

## 3.2 Linearly Constrained Quadratic Programming Problems

This problem can be formulated as follows:

$\left(LCQPP\right)\left\{\begin{array}{l}\text{minimize }f\left(x\right)=\frac{1}{2}{x}^{T}Qx-{a}^{T}x\\ Px=b\end{array}\right.$ (7)

where Q is an $n×n$ non null symmetric matrix (without loss of generality), P is an $m×n$ non null matrix, $a\in {\mathrm{ℝ}}^{n}$ and $b\in {\mathrm{ℝ}}^{m}$.

For the next result, we use the following notation: $\left\{{\lambda }_{1},···,{\lambda }_{n}\right\}$ is the eigenvalue set of the matrix ${P}^{T}P$, $\left\{{u}_{1},···,{u}_{n}\right\}\subset S$ an eigenvector set (i.e. ${P}^{T}P{u}_{i}={\lambda }_{i}{u}_{i}$ for all i), $I\left(0\right)=\left\{i:{\lambda }_{i}=0\right\}$, and $span\left\{{u}_{i}:i\in I\left(0\right)\right\}$ is the subspace generated by $\left\{{u}_{i}:i\in I\left(0\right)\right\}$. By convention, $span\left(\varnothing \right)=\left\{0\right\}$.

Lemma 4. If P is a non null matrix of size $m×n$, then

$Ker\left(P\right)=Ker\left({P}^{T}P\right)=span\left\{{u}_{i}:i\in I\left(0\right)\right\}.$

Proof. If $Ker\left(P\right)=\left\{0\right\}$, then ${P}^{T}P$ is nonsingular, so $I\left(0\right)=\varnothing$ and $span\left\{{u}_{i}:i\in I\left(0\right)\right\}=\left\{0\right\}$. If not, take $h\in Ker\left(P\right)\setminus \left\{0\right\}$; then $Ph=0$ and so ${P}^{T}Ph=0$. It implies that h is an eigenvector of ${P}^{T}P$ associated to the null eigenvalue, and so $h\in span\left\{{u}_{i}:i\in I\left(0\right)\right\}$. Conversely, if $h\in span\left\{{u}_{i}:i\in I\left(0\right)\right\}$, then $h=\sum _{i\in I\left(0\right)}{\alpha }_{i}{u}_{i}$. Hence, ${P}^{T}Ph=\sum _{i\in I\left(0\right)}{\alpha }_{i}{P}^{T}P{u}_{i}=0$. Then ${h}^{T}{P}^{T}Ph=||Ph|{|}^{2}=0$, and so $Ph=0$. □

Now, consider the matrix $V:{\mathrm{ℝ}}^{n}\to {\mathrm{ℝ}}^{n}$ of the orthogonal projection onto $Ker\left(P\right)=Ker\left({P}^{T}P\right)$. The matrix V can be calculated as follows: apply our scheme to obtain $\sigma \left({P}^{T}P\right)=\left\{{\lambda }_{1},···,{\lambda }_{n}\right\}$ and $\left\{{u}_{1},···,{u}_{n}\right\}$ such that ${P}^{T}P{u}_{i}={\lambda }_{i}{u}_{i}$. Then $V=\prod _{i\in I}\left(I-{u}_{i}{u}_{i}^{T}\right)$, where $I=\left\{i:{\lambda }_{i}\ne 0\right\}$. The following result is important because we can verify the condition ${h}^{T}Qh\ge 0$ for all $h\in Ker\left(P\right)$ by verifying that VQV is positive semidefinite (i.e. all eigenvalues of VQV are nonnegative real values).
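The construction of V can be sketched as follows (our illustration, using NumPy's eigendecomposition in place of the author's scheme; P is the matrix from the experiments in Section 4):

```python
import numpy as np

# Build V = prod_{lambda_i != 0} (I - u_i u_i^T) from the orthonormal
# eigenvectors of P^T P, as described in the text.
P = np.array([[2.0, 0.0, -1.0, 2.0], [0.0, 1.0, 2.0, 1.0]])
w, U = np.linalg.eigh(P.T @ P)            # orthonormal eigenvectors

V = np.eye(4)
for i in range(4):
    if w[i] > 1e-8:                       # eigenvectors with lambda_i != 0
        u = U[:, i]
        V = V @ (np.eye(4) - np.outer(u, u))

# V is the orthogonal projector onto Ker(P): P V = 0 and V^2 = V = V^T.
print(np.allclose(P @ V, 0), np.allclose(V @ V, V), np.allclose(V, V.T))
```

Because the ${u}_{i}$ are orthonormal, the product collapses to $I-\sum _{i\in I}{u}_{i}{u}_{i}^{T}$, which is exactly the projector onto the span of the null eigenvectors.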

Corollary 1. ${h}^{T}Qh\ge 0$ for all $h\in Ker\left(P\right)$ if and only if VQV is symmetric positive semidefinite.

For the next result, consider $L=\left\{x\in {\mathrm{ℝ}}^{n}:Px=b\right\}$ and $f:{\mathrm{ℝ}}^{n}\to \mathrm{ℝ}$ defined by $f\left(x\right)=\frac{1}{2}{x}^{T}Qx-{a}^{T}x$.

Theorem 3. The problem (LCQPP) has a solution if and only if

$E=\left\{z\in {\mathrm{ℝ}}^{n+m}:Cz=c\right\}\ne \varnothing \text{ and }{h}^{T}Qh\ge 0\text{ for all }h\in Ker\left(P\right)$ (8)

where $C=\left[\begin{array}{cc}Q& {P}^{T}\\ P& 0\end{array}\right]$ and $c=\left(\begin{array}{c}a\\ b\end{array}\right)$. Moreover, if $\overline{z}=\left(\overline{x},\overline{y}\right)\in E$, then $\overline{x}$ is a solution of (LCQPP).

Proof. If $\overline{x}$ is a solution of (LCQPP), then the KKT optimality conditions imply that $\nabla f\left(\overline{x}\right)+{P}^{T}\overline{y}=Q\overline{x}-a+{P}^{T}\overline{y}=0$ and $P\overline{x}=b$ for some $\overline{y}$. So $C\overline{z}=c$ for $\overline{z}=\left(\overline{x},\overline{y}\right)$, and $E\ne \varnothing$. The first order necessary optimality condition tells us that ${\left(Q\overline{x}-a\right)}^{T}h=0$ for all $h\in Ker\left(P\right)$ (because the tangent cone of L at the point $\overline{x}$ is equal to the kernel of P, denoted by $Ker\left(P\right)$). So, for any $h\in Ker\left(P\right)$, $x=\overline{x}+h\in L$ and $f\left(x\right)=f\left(\overline{x}\right)+\frac{1}{2}{h}^{T}Qh\ge f\left(\overline{x}\right)$, and so ${h}^{T}Qh\ge 0$.

Now, if $E=\left\{z\in {\mathrm{ℝ}}^{n+m}:Cz=c\right\}\ne \varnothing$ and ${h}^{T}Qh\ge 0$ for all $h\in Ker\left(P\right)$, then taking $\overline{z}=\left(\overline{x},\overline{y}\right)\in E$ we claim that $\overline{x}$ is a solution of (LCQPP). Indeed, $C\overline{z}=c$ implies that $P\overline{x}=b$ and $Q\overline{x}-a=-{P}^{T}\overline{y}$. Recalling that $f\left(x\right)=\left(1/2\right){x}^{T}Qx-{a}^{T}x$, we need to show that $f\left(x\right)\ge f\left(\overline{x}\right)$ for all x such that $Px=b$.

Note that for $h=x-\overline{x}$ we have $Ph=0$. So ${h}^{T}\nabla f\left(\overline{x}\right)={h}^{T}\left(Q\overline{x}-a\right)=-{h}^{T}{P}^{T}\overline{y}=0$. It implies that $f\left(x\right)=f\left(\overline{x}+h\right)=f\left(\overline{x}\right)+\left(1/2\right){h}^{T}Qh\ge f\left(\overline{x}\right)$ and the claim follows. □
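A minimal sketch of Theorem 3 in practice (with illustrative data of our own): assemble C and c, solve the linear system, and check the KKT conditions.

```python
import numpy as np

# Illustrative data: Q SPD (so h^T Q h >= 0 holds trivially on Ker(P)),
# P of full row rank, so the KKT matrix C is nonsingular.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
P = np.array([[1.0, 1.0]])
a = np.array([1.0, 0.0])
b = np.array([1.0])

# C = [[Q, P^T], [P, 0]] and c = (a, b), as in Theorem 3.
C = np.block([[Q, P.T], [P, np.zeros((1, 1))]])
c = np.concatenate([a, b])

z = np.linalg.solve(C, c)
x, y = z[:2], z[2:]
# KKT conditions: Q x - a + P^T y = 0 and P x = b.
print(np.allclose(Q @ x - a + P.T @ y, 0), np.allclose(P @ x, b))
```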

# 4 NUMERICAL EXPERIMENTS

Here, we show numerical experiments using program code written in the SciLab software. Of course, this program code is very simple and was developed by an amateur in Computer Science (Wilfredo Sosa).

The following numerical experiment concerns verifying the optimality conditions when we solve linearly constrained quadratic programming problems. Let

$A=\left[\begin{array}{cccc}0& 2& 7& -17\\ 2& 8& -6& -6\\ 7& -6& -6& 0\\ -17& -6& 0& -2\end{array}\right],\phantom{\rule{1em}{0ex}}a=\left[\begin{array}{c}1\\ 2\\ 3\\ 4\end{array}\right],\phantom{\rule{1em}{0ex}}b=\left[\begin{array}{c}3\\ 2\end{array}\right]$ and $B=\left[\begin{array}{cccc}2& 0& -1& 2\\ 0& 1& 2& 1\end{array}\right].$

Running our program code in SciLab, we obtain the spectral set $\sigma \left({B}^{T}B\right)=\left\{9,6,0,0\right\}$ and the respective eigenvector set is

$\left\{\left[\begin{array}{c}-0.6666667\\ 1.351\left({10}^{-08}\right)\\ 0.3333334\\ -0.6666667\end{array}\right]\left[\begin{array}{c}2.371\left({10}^{-08}\right)\\ 0.4082483\\ 0.8164966\\ 0.4082483\end{array}\right]\left[\begin{array}{c}-7.366\left({10}^{-10}\right)\\ 0.9128709\\ -0.3651484\\ -0.1825742\end{array}\right]\left[\begin{array}{c}0.7453560\\ 0\\ 0.2981424\\ -0.5962848\end{array}\right]\right\}$

By definition

$V=\left(eye\left(4,4\right)-CP\left(:,1\right)\ast CP{\left(:,1\right)}^{\text{'}}\right)\ast \left(eye\left(4,4\right)-CP\left(:,2\right)\ast CP{\left(:,2\right)}^{\text{'}}\right)$

Again applying our program code to $V\ast A\ast V$ we have that

$\sigma \left(V\ast A\ast V\right)=\left\{19.018037,9.7597408,0,0\right\}$

Since all these eigenvalues are nonnegative, the matrix $V\ast A\ast V$ is positive semidefinite, so the optimality condition holds. Moreover, the vector

$\overline{z}=\left(0.6662676,0.4182381,0.2992118,0.9833383,7.3928963,5.0168612\right)$

is a solution of $Cz=c$, and so $\overline{x}$ is a solution of (LCQPP) (here $\overline{z}=\left(\overline{x},\overline{y}\right)$).
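Part of this experiment is easy to reproduce (our sketch, using NumPy instead of the SciLab code):

```python
import numpy as np

# The nonzero eigenvalues of B^T B equal those of B B^T, which for this
# B is exactly diag(9, 6), matching the spectral set reported above.
B = np.array([[2.0, 0.0, -1.0, 2.0], [0.0, 1.0, 2.0, 1.0]])
w = np.sort(np.linalg.eigvalsh(B.T @ B))[::-1]   # descending order
print(np.allclose(w, [9.0, 6.0, 0.0, 0.0]))      # → True
```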

We also applied our scheme to find eigenvalues and eigenvectors of symmetric matrices. We simulate symmetric matrices and then calculate their eigenvalues and eigenvectors using our program code. We build symmetric matrices as follows: given two matrices (data), the first one D a diagonal matrix and the second one V an orthogonal matrix (i.e. ${V}^{T}V=V{V}^{T}=I$), we define $A:=VD{V}^{T}$; here the diagonal entries of D are the eigenvalues of A and the column vectors of V are eigenvectors of A. We generate V using the Gram-Schmidt process.

1. The first matrix was built with 10 eigenvalues equal to -30; 10 eigenvalues equal to zero; and 10 eigenvalues equal to 30.

2. The second matrix was built with 20 eigenvalues equal to -2000; 20 eigenvalues equal to zero; and 20 eigenvalues equal to 2000.

3. The next matrix was built with eigenvalues following the rule ${\lambda }_{i+1}={\lambda }_{i}+1$, starting with ${\lambda }_{1}=-29$ until ${\lambda }_{59}=29$, and ${\lambda }_{60}=0$.

4. The next matrix was built with 20 eigenvalues equal to -3000; 10 eigenvalues equal to -3; 20 eigenvalues equal to zero; 10 eigenvalues equal to 3; and 20 eigenvalues equal to 3000.

5. The next matrix was built with 50 eigenvalues equal to -100 and 50 eigenvalues equal to 100.

Of course, our scheme finds all the eigenvalues and an eigenvector set. Unfortunately, the Power Iteration Method does not run for the generated matrices, because the first four matrices have null eigenvalues and the last one has all eigenvalues with absolute value equal to 100 (it has no dominant eigenvalue).

# ACKNOWLEDGEMENTS

Wilfredo Sosa was partially supported by Fundação de Apoio à Pesquisa do Distrito Federal (FAP-DF) [grants 0193.001695/2017 and 00193.00002100/2018-51] and CNPq [grants 302074/2012-0 and 471168/2013-0]. Part of this research was carried out during visits to IMPA.

# References

1. BREZIS H. 1983. Analyse fonctionnelle - Théorie et applications. Masson, Paris.
2. CROUZEIX J, KERAGEL A & SOSA W. 2011. Programación Matemática diferenciable. Universidad Nacional de Ingeniería.
3. GALOB O. 2007. Discrete Dynamical Systems. Springer Verlag, Berlin Heidelberg.
4. GOLDSTEIN AA. 1964. Convex programming in Hilbert space. Bulletin of The American Mathematical Society, 70: 709-710.
5. ISTRǍŢESCU VI. 1981. Fixed point theory. D. Reidel Publishing Co., Dordrecht.
6. KELLEY JL. 1955. General Topology. Van Nostrand Co., Princeton.
7. LONELI OHE & RUMBOS PIB. 2003. Métodos dinámicos en Economía. Otra búsqueda del tiempo perdido. International Thompson Editores, Mexico.
8. MISES RV & POLLACZEK-GEIRINGER H. 1929. Praktische Verfahren der Gleichungsauflösung. ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik, 9: 152-164.
9. SHUMWAY RH & STOFFER DS. 2017. Time series analysis and its applications. Springer International Publishing AG.
Mathematics Subject Classification (2000): 15A18, 90C52, 90C20

# APPENDIX

In this section we present the program code of our scheme written in the SciLab software. Of course, this program code is very simple and we do not use numerical strategies in order to reduce the compilation time or the accumulation of errors. For this reason, we do not compare our program code with others in the literature, because that is not our subject. The criteria for the program code are the following:

1. Try to find a large eigenvalue in absolute value; to do it, we find j such that $abs\left(A\left(j,j\right)\right)\ge abs\left(A\left(i,i\right)\right)$ for all $i\ne j$.

2. To build the matrix B, we consider $\alpha =10$ and $L={\left(max\left(abs\left(AA\right)\right)\ast n\right)}^{1/2}+1$.

3. If $abs\left(A\right)<{10}^{-8}$ (entrywise), then we consider the matrix A as a null matrix.

4. The error in finding an eigenvalue will be less than ${10}^{-16}$.

The following function calculates an eigenvector of a symmetric matrix A.
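The SciLab listing did not survive extraction here; as a placeholder, the following is a minimal Python sketch of such a function following the criteria above (it is not the author's original code, and the deflation needed to recover the full spectrum is omitted):

```python
import numpy as np

def eigenvector(A, tol=1e-16, max_iter=10_000):
    """Return (x, beta): a unit eigenvector of the symmetric matrix A
    and its Rayleigh quotient, via the scheme of Section 2 (sketch)."""
    n = A.shape[0]
    a = np.max(np.abs(A))
    if a < 1e-8:                     # criterion 3: treat A as the null matrix
        return np.eye(n)[:, 0], 0.0
    lam = 1.0 / (n * a + 1.0)        # Lemma 1: B = I + lam*A is SPD
    B = np.eye(n) + lam * A
    x = np.ones(n) / np.sqrt(n)      # starting point in S
    for _ in range(max_iter):
        y = B @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:   # criterion 4: stopping error
            x = y
            break
        x = y
    return x, x @ A @ x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
x, beta = eigenvector(A)
print(np.allclose(A @ x, beta * x, atol=1e-6))  # → True
```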

The main part of the code is the following. Of course, it is necessary to first read a matrix DC.

# Publication Dates

• Publication in this collection
2 Dec 2019
• Date of issue
Sep-Dec 2019