Computational & Applied Mathematics
Online version ISSN 1807-0302
Comput. Appl. Math. vol.28 no.1 São Carlos 2009
http://dx.doi.org/10.1590/S0101-82052009000100003
Numerical resolution of cone-constrained eigenvalue problems
A. Pinto da Costa^{I}; Alberto Seeger^{II}
^{I}Universidade Técnica de Lisboa, Instituto Superior Técnico, Departamento de Engenharia Civil e Arquitectura and ICIST, Avenida Rovisco Pais, 1049-001 Lisboa, Portugal. E-mail: apcosta@civil.ist.utl.pt
^{II}University of Avignon, Department of Mathematics, 33 rue Louis Pasteur, 84000 Avignon, France. E-mail: alberto.seeger@univ-avignon.fr
ABSTRACT
Given a convex cone K and matrices A and B, one wishes to find a scalar λ and a nonzero vector x satisfying the complementarity system
K ∋ x ⊥ (Ax − λBx) ∈ K^{+}.
This problem arises in mechanics and in other areas of applied mathematics. Two numerical techniques for solving this kind of cone-constrained eigenvalue problem are discussed, namely, the Power Iteration Method and the Scaling and Projection Algorithm.
Mathematical subject classification: 65F15, 90C33.
Keywords: complementarity condition, generalized eigenvalue problem, power iteration method, scaling and projection algorithm.
1 Introduction
The Euclidean space ℝ^{n} is equipped with the inner product 〈y, x〉 = y^{T}x and the associated norm ‖·‖. The dimension n is greater than or equal to 2. Orthogonality with respect to 〈·,·〉 is indicated by means of the symbol ⊥. Throughout this work, one assumes that:
We are interested in solving numerically the following cone-constrained generalized eigenvalue problem. As usual, K^{+} indicates the positive dual cone of K.
Problem 1.
Find λ ∈ ℝ and a nonzero vector x ∈ ℝ^{n} such that
K ∋ x ⊥ (Ax − λBx) ∈ K^{+}.
Problem 1 arises in mechanics [9, 10] and in various areas of applied mathematics [4, 5, 6, 12, 13]. The specific meanings of A and B depend on the context. It is important to underline that, in the present work, the matrices A and B are not necessarily symmetric.
An interesting particular case of Problem 1 is the so-called Pareto eigenvalue problem, whose precise formulation reads as follows.
Problem 2.
Find λ ∈ ℝ and a nonzero vector x ∈ ℝ^{n} such that
ℝ^{n}_{+} ∋ x ⊥ (Ax − λx) ∈ ℝ^{n}_{+}.
For the general theory behind Problems 1 and 2, the reader is referred to [7, 11, 15, 16]. This note focuses only on numerical issues. To be more precise, we discuss two specific algorithms for solving cone-constrained eigenvalue problems, namely, the Scaling and Projection Algorithm (SPA) and the Power Iteration Method (PIM). Both techniques have some features in common, but they are different in spirit. A third algorithm, based on optimization techniques, is formulated and left open for future examination.
2 Numerical experience with the SPA
All computational tests are carried out for the particular choice B = I_{n}, but the formulation of the SPA is given for a general matrix B as in (1). The SPA is an iterative method designed for solving Problem 1 written in the following equivalent form:
Problem 3.
The normalization condition (2) prevents x from being equal to 0. Adding such a condition does not change the solution set of the original cone-constrained generalized eigenvalue problem. As an example of the function ø, one may think of
but other options are also possible. Our favorite choice is the linear function (3) because in such a case the set
K_{ø} = { x ∈ K : ø(x) = 1 }
is not just compact, but also convex. In the parlance of the theory of convex cones, the set K_{ø} corresponds to a "base" of K. Unless explicitly stated otherwise, we assume that the choice (3) is in force.
The SPA generates a sequence {x^{t}}_{t > 0} in K_{ø}, a sequence {λ_{t}}_{t > 0} of real numbers, and a sequence {y^{t}}_{t > 0} of vectors in ℝ^{n}. The detailed description of the algorithm is as follows:

Initialization: Fix a positive scaling factor s. Pick u ∈ K\{0} and let x^{0} = u/ø(u).

Iteration: One has a current point x^{t} in K_{ø}. Compute the Rayleigh-Ritz ratio
λ_{t} = 〈x^{t}, Ax^{t}〉/〈x^{t}, Bx^{t}〉,
the vector y^{t} = Ax^{t} − λ_{t}Bx^{t}, and the projection
v^{t} = Π_{K}[x^{t} − sy^{t}].
The projection v^{t} is necessarily in K\{0}. Proceed now to a ø-normalization:
x^{t+1} = v^{t}/ø(v^{t}).
One could perfectly well consider a scaling factor that is updated at each iteration, but in this section we stick to the case of a fixed scaling factor. The case of a variable scaling factor will be commented on in Section 4. The next convergence result is taken from [11].
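For the Paretian case treated below (B = I_{n}, K = ℝ^{n}_{+}, ø(x) = sum of the entries of x), the iteration just described can be sketched in a few lines of NumPy. This is only a sketch: the function name, default tolerances, and stopping rule are ours, not the paper's. The projection onto the Pareto cone reduces to a componentwise maximum with 0.

```python
import numpy as np

def spa_pareto(A, u, s=0.2, tol=1e-10, max_iter=2000):
    """One possible implementation of the SPA for the Pareto case:
    B = I_n, K = nonnegative orthant, phi(x) = sum of the entries of x.
    Returns (lambda, x) on convergence, or None otherwise."""
    x = u / u.sum()                        # phi-normalized initial point
    for _ in range(max_iter):
        lam = x @ A @ x / (x @ x)          # Rayleigh-Ritz ratio
        y = A @ x - lam * x                # y^t = A x^t - lambda_t x^t
        v = np.maximum(x - s * y, 0.0)     # projection onto the Pareto cone
        x_new = v / v.sum()                # phi-normalization
        if np.linalg.norm(x_new - x) < tol:
            return lam, x_new
        x = x_new
    return None                            # no convergence within max_iter
```

Which solution such a loop detects depends on the initial point u and on the scaling factor s, exactly as observed in the experiments reported below.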
Theorem 1. If {x^{t}}_{t > 0} converges to a certain vector x̄, then
(a) {λ_{t}}_{t > 0} converges to λ̄ = 〈x̄, Ax̄〉/〈x̄, Bx̄〉.
(b) {y^{t}}_{t > 0} converges to ȳ = Ax̄ − λ̄Bx̄.
(c) (x̄, λ̄, ȳ) solves Problem 3.
2.1 Applying the SPA to the Pareto eigenvalue problem
The Pareto eigenvalue problem already exhibits many of the mathematical difficulties arising in the context of a general convex cone K. While dealing with the Pareto cone one chooses
as normalizing function. This corresponds to the linear choice (3) with e = (1, 1, ..., 1)^{T} belonging to the interior of ℝ^{n}_{+}.
A few words on terminology are appropriate at this point. One usually refers to
σ_{pareto}(A) = {λ ∈ ℝ : (x, λ) solves Problem 2}
as the Pareto spectrum (or set of Pareto eigenvalues) of A. The map σ_{pareto} has some similarities with the classical spectral map, but the differences are even more numerous. It deserves to be stressed that the cardinality of σ_{pareto}(A)
is finite, but it can grow exponentially with respect to the dimension n of the underlying Euclidean space. Thus, a matrix of relatively small size could have a very large number of Pareto eigenvalues. In view of this observation, it is simply not reasonable to try to compute all the elements of σ_{pareto}(A). A more realistic goal is to find just one or a handful of them.
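For small dimensions, the whole Pareto spectrum can nevertheless be enumerated by inspecting the faces of the Pareto cone: λ is a Pareto eigenvalue exactly when some principal submatrix A_{JJ} has a real eigenpair (λ, x_{J}) with x_{J} > 0 whose complementarity residual is nonnegative on the remaining indices. The sketch below (function name and tolerances ours) implements this known characterization; its cost is exponential in n, which illustrates the remark above.

```python
import itertools
import numpy as np

def pareto_spectrum(A, tol=1e-10):
    """Brute-force enumeration of the Pareto spectrum of a small matrix,
    face by face: lambda is a Pareto eigenvalue iff some principal
    submatrix A_JJ has a real eigenpair (lambda, x_J) with x_J > 0 and
    (A x)_i - lambda x_i >= 0 for every index i outside J.
    Exponential in n -- only sensible for small sizes."""
    n = A.shape[0]
    spectrum = set()
    for k in range(1, n + 1):
        for J in itertools.combinations(range(n), k):
            J = list(J)
            vals, vecs = np.linalg.eig(A[np.ix_(J, J)])
            for lam, v in zip(vals, vecs.T):
                if abs(lam.imag) > tol:      # keep real eigenvalues only
                    continue
                v = v.real
                if v.max() < 0:              # eigenvectors are sign-free
                    v = -v
                if v.min() <= tol:           # need a strictly positive x_J
                    continue
                x = np.zeros(n)
                x[J] = v
                y = A @ x - lam.real * x
                if y.min() >= -tol:          # residual in the dual cone
                    spectrum.add(round(lam.real, 8))
    return sorted(spectrum)
```

For instance, a symmetric 2 × 2 matrix with off-diagonal coupling already has Pareto eigenvalues coming from both the boundary faces and the interior of the cone.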
2.1.1 Testing on a 2 × 2 matrix with 3 Pareto eigenvalues
We start by considering the 2 × 2 matrix
whose Pareto spectrum is displayed in Table 1. For ease of visualization, the Pareto eigenvalues are arranged in increasing order. The corresponding solutions are named starting from S_{1} up to S_{3}.
For this small-size matrix, the SPA was able to detect all the solutions. Table 2 shows which solution was detected as a function of the scaling parameter s, varied in steps of 0.1 units.
A first conclusion that can be drawn from this example is that the detected solution need not lie in the same face of the Pareto cone as the initial vector u. Which solution is detected depends on the initial point u and, perhaps more significantly, on the size of the scaling factor. This and subsequent tests show that no changes in the detected solution or in the number of iterations occur when the parameter s is taken beyond a certain threshold value.
Observe that the solutions S_{1} and S_{3} were found by several of the values assumed by s, whereas the solution S_{2} was detected only with the specific choice s = 1. In short, some Pareto eigenvalues are more likely to be found than others.
Remark 1. For a given u, the detected solution and the number of iterations required by the SPA to achieve convergence may be very sensitive with respect to the parameter s. In Table 2 one observes substantial changes while passing from s = 0.9 to s = 1.1.
2.1.2 Testing on a 3 × 3 matrix with 9 Pareto eigenvalues
We now test the SPA on the matrix
whose Pareto spectrum is displayed in Table 3. This specific matrix has 9 Pareto eigenvalues and, probably, this is the largest possible cardinality for the Pareto spectrum of a 3 × 3 matrix. We know for sure that a 3 × 3 matrix cannot have 11 or more Pareto eigenvalues. We have not yet found a 3 × 3 matrix with 10 Pareto eigenvalues and seriously doubt that such a matrix exists.
The challenge that (6) offers to the SPA is not the "large" number of Pareto eigenvalues, but the fact that some of them are hard to detect numerically. In Table 4 and in the sequel, the notation S_{j}/m indicates that the solution S_{j} was found within m iterations and the symbol ∞ indicates that convergence did not occur within 2000 iterations. Some of the conclusions that can be drawn from Table 4 are:
i) Given an initial point u, it is possible to recover several solutions to the Pareto eigenvalue problem by letting the scaling factor s assume different values. One should not expect, however, to recover all the solutions (unless, of course, one also changes the initial point).
ii) Given an initial point u, there are values of s for which the SPA does not converge. It is very difficult to identify the convergence region
S(u) ≡ {values of s for which the SPA converges when initialized at u}
for the scaling factor, and to see what kind of structure such a region possesses. Table 4 shows that S(u) is not necessarily an interval.
2.1.3 Testing on random matrices and random initial points
Table 5 has been produced as follows. For each dimension n ∈ {2,3,...,9}, one generates a collection of 1000 stochastically independent random matrices with uniform distribution on the hypercube [−1, 1]^{n×n}. For each one of these 1000 random matrices, one generates a collection of 100 stochastically independent random initial vectors with uniform distribution on the hypercube [0, 1]^{n}. One applies the SPA to each one of the 10^{5} = 1000 × 100 random pairs (A, u) and counts the number of times in which convergence occurs within 2000 iterations.
As one can see from Table 5, the best performances of the SPA are obtained when the scaling factor s lies in the interval [0.1, 0.4]. When s is taken in such a range, the percentages of convergence are all above 90 per cent. This good news speaks favorably of the SPA. The same experiment as in Table 5 is carried out for n ∈ {10, 20, 30, 100}, but with a different set of values for the scaling factor s. Table 6 displays the obtained results. One sees that the overall performance of the SPA diminishes as the dimension n gets larger. For n = 100 the best score of convergence is 78%. This is still acceptable.
Table 7 reports on the case n ∈ {200, 300, 400, 500, 1000}. In this range of dimensions, the performance of the SPA is a bit less satisfactory. For instance, when n = 1000 the best score of convergence is 62%.
2.2 Applying the SPA to a circular eigenvalue problem
A circular eigenvalue problem is a prototype of eigenvalue problem with constraints described by a nonpolyhedral cone:
Problem 4.
Here, L_{θ} stands for the n-dimensional circular cone whose half-aperture angle is θ ∈ ]0, π/2[, and whose revolution axis is the ray generated by the canonical vector e_{n} = (0, ..., 0, 1)^{T}. In short,
A circular eigenvalue problem differs substantially from a Pareto eigenvalue problem, the main difference being that a circular spectrum
σ_{θ}(A) = {λ ∈ ℝ : (x, λ) solves Problem 4}
does not need to be finite. As shown in [16], the set σ_{θ} (A) could perfectly well contain one or more intervals of positive length.
As seen in Figure 1, the dual cone of L_{θ} is again a circular cone. In fact, as reported in [3], one has the general relation L_{θ}^{+} = L_{(π/2)−θ}. The projection of z ∈ ℝ^{n} onto L_{θ} can be evaluated by using the explicit formula
with
The above expression of v can be found in [1] or obtained simply by working out the Karush-Kuhn-Tucker optimality conditions for the convex minimization problem
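Writing z = (z̃, z_{n}) and L_{θ} = {x : ‖x̃‖ ≤ x_{n} tan θ}, the explicit formula amounts to the standard projection onto a second-order cone of aperture θ. A sketch (function name ours), with t = tan θ:

```python
import numpy as np

def project_circular_cone(z, theta):
    """Projection onto L_theta = {x : ||x_tilde|| <= x_n tan(theta)},
    the circular cone with revolution axis e_n and half-aperture
    theta in (0, pi/2).  Standard second-order-cone projection."""
    t = np.tan(theta)
    zt, zn = z[:-1], z[-1]
    r = np.linalg.norm(zt)
    if r <= t * zn:                    # z already lies in L_theta
        return z.copy()
    if t * r <= -zn:                   # z lies in the polar cone -L_theta^+
        return np.zeros_like(z)
    alpha = (t * r + zn) / (1.0 + t * t)
    v = np.empty_like(z)
    v[:-1] = alpha * t * zt / r        # shrink the transverse part
    v[-1] = alpha                      # and land on the boundary of L_theta
    return v
```

For θ = π/4 (the ice-cream cone), projecting (1, 0) gives (1/2, 1/2), the midpoint construction familiar from second-order-cone programming.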
For dealing with the circular eigenvalue problem, one uses
as normalization function. This corresponds to the linear choice (3) with vector e = e_{n}. Note that e_{n} belongs to the interior of the dual cone L_{θ}^{+}.
A convex cone whose half-aperture angle is 90 degrees is no longer pointed, so the SPA implemented with the linear normalizing function (9) could run into numerical trouble if θ increases beyond a certain threshold.
We are interested in examining the performance of the SPA as a function of the half-aperture angle θ of the circular cone. The numerical experiment reported in Table 8 concerns a sample of 10^{5} random pairs (A, u) generated in the same way as in Tables 5, 6, and 7. The SPA is tested with three different values of s and with values of θ incremented in steps of 10 degrees.
Some of the conclusions that can be drawn from Table 8 are:
i) For each tested scaling factor s and dimension n, one observes that the rate of success of the SPA decreases as the half-aperture angle increases. The best results are obtained for circular cones with moderate half-aperture angle. As predicted by common sense, dealing with circular cones with half-aperture angle near 90 degrees is problematic.
ii) As we already saw in previous numerical experiments, a right choice of scaling factor is crucial for obtaining a good performance of the SPA. As a general rule, when dealing with large-dimensional problems one should reduce the size of the parameter s. As one can see in the middle table and in the right table, the choice s = 1 is appropriate only if the half-aperture angle does not exceed 25 degrees. Beyond this angular threshold, the performance of the SPA deteriorates very quickly. By contrast, if one uses a smaller scaling factor, say s = 0.1, then the SPA works perfectly well in a wider range of half-aperture angles.
3 Theory and numerical experience with the PIM
The power iteration method is a well-known algorithm for finding the eigenvalue of largest modulus of a given matrix. To fix the ideas, consider the case of an n × n real matrix M with real eigenvalues λ_{1}, ..., λ_{n} such that
|λ_{1}| > |λ_{2}| ≥ ... ≥ |λ_{n}|.
The classical power iteration method consists in generating a sequence {x^{t}}_{t > 0} by selecting an initial point x^{0} and applying the recursive formula
where ø is typically the Euclidean norm ‖·‖ or a certain linear function. One expects the sequence {x^{t}}_{t > 0} to converge to an eigenvector u_{1} associated with the eigenvalue λ_{1}. See, for instance, Section 13.4 in Schatzman's book [14] for a precise formulation of the convergence result. The limit u_{1} necessarily satisfies the normalization condition ø(u_{1}) = 1. The eigenvalue λ_{1} is obtained by using either the relation λ_{1} = ø(Mu_{1}) or the Rayleigh-Ritz ratio λ_{1} = 〈u_{1}, Mu_{1}〉/‖u_{1}‖².
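As a point of comparison with the cone-constrained variant developed below, the classical scheme (10) with ø = ‖·‖ can be sketched as follows (function name and tolerances ours):

```python
import numpy as np

def power_iteration(M, x0, tol=1e-12, max_iter=1000):
    """Classical power iteration, scheme (10), with the Euclidean
    norm as the normalizing function phi."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        v = M @ x
        x_new = v / np.linalg.norm(v)
        # the direction alternates in sign for a negative dominant
        # eigenvalue, so compare x_new against both +x and -x
        if min(np.linalg.norm(x_new - x), np.linalg.norm(x_new + x)) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ M @ x / (x @ x)          # Rayleigh-Ritz ratio
    return lam, x
```

The convergence rate is governed by the ratio |λ_{2}|/|λ_{1}|, which is why the dominance assumption above is needed.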
Remark 2. If one wishes to find a different eigenvalue, then one chooses a convenient shifting parameter β ∈ ℝ and applies the iteration scheme (10) to the shifted matrix M − βI_{n}. By the way, the so-called inverse power iteration method consists in applying (10) to the inverse of the matrix M − βI_{n}.
It is not difficult to adjust the power iteration method to the case of a generalized eigenvalue problem involving a pair (A, B) of matrices. However, incorporating a cone K into the picture is a matter that requires more careful thinking. For the sake of clarity in the discussion, we leave aside the matrix B and concentrate on K. The problem at hand reads as follows.
Problem 5.
Find λ ∈ ℝ and a nonzero vector x ∈ ℝ^{n} such that
K ∋ x ⊥ (Ax − λx) ∈ K^{+}.
In principle, the usual spectrum of A has very little to do with
σ_{K}(A) = {λ ∈ ℝ : (x, λ) solves Problem 5}
the latter set being called the K spectrum (or set of K eigenvalues) of A. However, regardless of the choice of K, one always has the inclusion
with λ_{min} (A_{sym}) and λ_{max} (A_{sym}) denoting, respectively, the smallest and the largest eigenvalue of the symmetric part
of the matrix A. The computation of the extremal eigenvalues of a symmetric matrix presents no difficulty.
Lemma 1. Consider any β ∈ ℝ such that
Then, the following conditions are equivalent:
(a) x is a K eigenvector of A.
(b) x and Π_{K}[βx − Ax] are nonzero vectors pointing in the same direction.
Proof. For all λ ∈ ℝ and x ∈ ℝ^{n}, one has the general identity
βx − Ax = (β − λ)x − (Ax − λx).
This relation is at the core of the proof. Let x be a K eigenvector of A and let λ be the associated K eigenvalue. In particular, x is a nonzero vector in K. Furthermore, the combination of (11) and (12) shows that γ = β − λ is a positive scalar. By applying Moreau's theorem [8] to the orthogonal decomposition
one obtains
This confirms that x and Π_{K} [βx  Ax] are nonzero vectors pointing in the same direction. Conversely, suppose that x ≠ 0 and that (13) holds for some positive scalar γ. Then, one can draw the following conclusions. Firstly, x ∈ K because γ x ∈ K and γ > 0. Secondly, in view of Moreau's theorem, the equality (13) yields
γx − (βx − Ax) ∈ K^{+},
〈γx, γx − (βx − Ax)〉 = 0.
We have proven in this way that x is a nonzero vector satisfying the complementarity system
K ∋ x ⊥ (Ax − (β − γ)x) ∈ K^{+}.
In other words, x is a K eigenvector of A and λ = β − γ is the associated K eigenvalue.
Condition (b) in Lemma 1 provides the inspiration for the formulation of the PIM in the context of a coneconstrained eigenvalue problem. The detailed description of this algorithm is as follows:

Initialization: Select a shifting parameter β as in (12). Pick a nonzero vector u ∈ K and let x^{0} = u/ø(u).

Iteration: One has a current point x^{t} in K_{ø}. Compute first the projection
and then the ønormalized vector
As was the case with the SPA, a sequence generated by the PIM remains in the compact set K_{ø}. The emphasis of the PIM is on the computation of K eigenvectors. If one wishes, at each iteration one can also evaluate the Rayleigh-Ritz ratio
λ_{t} = 〈x^{t}, Ax^{t}〉/‖x^{t}‖²,
but this is not really necessary.
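For the Paretian case, the two-step iteration (project, then ø-normalize) is only a few lines of code. The sketch below (names and defaults ours) takes β strictly above λ_{max}(A_{sym}), as in assumption (12):

```python
import numpy as np

def pim_pareto(A, u, beta=None, tol=1e-10, max_iter=2000):
    """One possible implementation of the PIM for the Pareto case:
    K = nonnegative orthant, phi(x) = sum of the entries of x."""
    if beta is None:
        # shift strictly above lambda_max(A_sym), as in assumption (12)
        beta = np.linalg.eigvalsh((A + A.T) / 2.0).max() + 1.0
    x = u / u.sum()                             # phi-normalized initial point
    for _ in range(max_iter):
        v = np.maximum(beta * x - A @ x, 0.0)   # projection onto K
        x_new = v / v.sum()                     # phi-normalization
        if np.linalg.norm(x_new - x) < tol:
            lam = x_new @ A @ x_new / (x_new @ x_new)
            return lam, x_new
        x = x_new
    return None                                 # no convergence
```

Lemma 1 guarantees that a fixed point of this map is a K eigenvector, and assumption (12) guarantees that the projection never vanishes along the way.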
Theorem 2. Let {x^{t}}_{t > 0} be generated by the PIM.
(a) The general term x^{t} is well defined, i.e., the projection v^{t} is different from 0.
(b) If {x^{t}}_{t > 0} converges to a certain x̄, then x̄ is a K eigenvector of A.
(c) The K eigenvalue of A associated to x̄ is given by
Proof. Suppose, on the contrary, that v^{t} = 0 at some iteration t. From the usual characterization of a projection, one has
〈βx^{t} − Ax^{t}, w〉 ≤ 0 ∀ w ∈ K.
In particular, 〈βx^{t} − Ax^{t}, x^{t}〉 ≤ 0. Since x^{t} ≠ 0, one obtains
a contradiction with assumption (12). This takes care of (a). Suppose now that {x^{t}}_{t > 0} converges to x̄. By passing to the limit in (14) one sees that {v^{t}}_{t > 0} converges to
v̄ = Π_{K}[βx̄ − Ax̄].
As done before with each term v^{t}, one can also prove that v̄ is different from 0. By passing now to the limit in (15), one gets
x̄ = v̄/ø(v̄).
Since ø(v̄) is a positive scalar, the nonzero vectors x̄ and v̄ point in the same direction. In view of Lemma 1, one concludes that x̄ is a K eigenvector of A. The first equality in (16) is obvious, while the second one is implicit in the proof of Lemma 1.
The inequality (12) has been helpful at several stages. Lemma 1 and Theorem 2 remain true if one works under the weaker assumption
but the evaluation of the above supremum could be a bothersome task. From a purely theoretical point of view, one could even work under the assumption
In practice, it makes no sense to rely on (17) because σ_{K}(A) is unknown a priori.
3.1 Applying the PIM to the Pareto eigenvalue problem
3.1.1 Testing on a small size matrix
We start by testing the PIM on the matrix (6). Recall that (6) is a nonsymmetric 3 × 3 matrix with 9 Pareto eigenvalues. All of them are positive and the largest one is σ = 10. A straightforward computation shows that
The symbol ∞ in Table 9 means, as usual, that convergence does not occur within 2000 iterations. Observe that the number of iterations required by the PIM to achieve convergence increases as β goes further and further away from the threshold value (17). So, it is not a good idea to take β excessively large.
On the other hand, the symbol E in Table 9 indicates that an error occurs while performing the first iteration of the PIM. To be more precise, the PIM must stop because projecting onto K produces the zero vector. This unfavorable situation does not contradict the statement (a) of Theorem 2. What happens is simply that the value of β under consideration does not satisfy the assumption (12), not even the weaker condition (17). In connection with this issue, it is worthwhile mentioning that if one uses a parameter β smaller than the righthand side of (17), then the PIM may still work in the sense that each projection v^{t} may be different from 0 and {x^{t}} may converge to a K eigenvector of A. Logically speaking, (17) is a sufficient condition for the PIM to work properly, but such condition is not necessary.
In Table 10 one applies the PIM to the matrix (6), but one considers a sample of 10^{8} random initial points. The question that is being addressed is the following one: to which Pareto eigenvalue of (6) is the PIM most likely to converge?
3.1.2 Testing on random matrices and random initial points
The same kind of numerical experiment as in Section 2.1.3 is now carried out with the PIM. To avoid excessive repetitions, we consider only the case of a dimension n ranging over {3,6,12}.
Since one works with a sample of random pairs (A, u), it is not entirely clear how to choose the parameter β in order to optimize the performance of the PIM. In Table 11 we explore what happens with the choices
Note that β_{1} is smaller than λ_{max}(A_{sym}). The heuristic selection rule (18) may look strange at first sight, but it is based on statistical information concerning the distribution of the random variable λ_{max}(A_{sym}) when n ∈ {3, 6, 12}.
The figures in Table 11 refer to percentages of success. The word "success" means, of course, that the PIM has worked properly, failure corresponding to a lack of convergence or to obtaining a projection equal to 0. As one can see, the percentages of success of the PIM do not vary much when β ranges over (18).
In Figure 2 we enlarge the range of values taken by the parameter β. The idea is to identify the largest interval of β values leading to a favorable and stable rate of success.
One observes a very interesting plateau phenomenon: for each dimension n, there is a corresponding range of values for β in which the PIM performs at its top level. The height of each plateau depends on the dimension: the height decreases as n increases. Note also that the interval is almost identical for the three tested values of n.
4 SPA versus PIM
From a conceptual point of view, the SPA and the PIM are different methods. However, they share at least two important steps: a projection onto the cone K and a ø-normalization procedure.
Are the SPA and the PIM two different algorithms after all? The answer is yes if the scaling factor s and the shifting parameter β are not allowed to change from one iteration to the next. If one is more flexible and considers instead a sequence {s_{t}}_{t > 0} of scaling factors and a sequence {β_{t}}_{t > 0} of shifting parameters, then one can reformulate the SPA as a PIM, and vice versa.
Proposition 1. Let B = I_{n}. The following statements are true:
(a) Implementing the t-th iteration of the PIM with β_{t} > 〈x^{t}, Ax^{t}〉/‖x^{t}‖² produces the same outcome as implementing the t-th iteration of the SPA with
(b) Conversely, implementing the t-th iteration of the SPA with s_{t} > 0 produces the same outcome as implementing the t-th iteration of the PIM with
Proof. The projection map Π_{K} and the normalizing function ø are both positively homogeneous. Hence, ø-normalizing the projected vector Π_{K}[x^{t} − s_{t}y^{t}] produces the same result as ø-normalizing the projected vector Π_{K}[(1/s_{t})x^{t} − y^{t}]. It suffices then to observe that
with λ_{t} = 〈x^{t}, Ax^{t}〉/‖x^{t}‖². Of course, one rewrites the equality (21) in the form (19) for proving part (a), and in the form (20) for proving part (b).
□
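Proposition 1(a) is easy to check numerically: one PIM step with shift β_{t} coincides with one SPA step with scaling factor s_{t} = 1/(β_{t} − λ_{t}). A sketch for the Paretian case (B = I_{n}; the random data and the value β_{t} = λ_{t} + 2 are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(-1.0, 1.0, (5, 5))       # random matrix, as in the experiments
x = rng.uniform(0.1, 1.0, 5)
x = x / x.sum()                          # current iterate in K_phi, phi(x) = sum

lam = x @ A @ x / (x @ x)                # Rayleigh-Ritz ratio lambda_t
beta = lam + 2.0                         # any beta_t > lambda_t will do
s = 1.0 / (beta - lam)                   # the matching scaling factor

# one PIM step ...
v_pim = np.maximum(beta * x - A @ x, 0.0)
x_pim = v_pim / v_pim.sum()

# ... and one SPA step with s_t = 1/(beta_t - lambda_t)
y = A @ x - lam * x
v_spa = np.maximum(x - s * y, 0.0)
x_spa = v_spa / v_spa.sum()

assert np.allclose(x_pim, x_spa)         # identical outcome, as claimed
```

The key point is the positive homogeneity used in the proof: x − s·y is just s·(βx − Ax), and a positive rescaling survives both the projection and the ø-normalization unchanged.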
4.1 Numerical experience with a sequence {s_{t}}_{t > 0}
We have tested the SPA again on random matrices and random initial points, but this time we have used a variable scaling factor s_{t}. The attention is focused on dimensions not exceeding n = 30. Inspired by the relation (19) and the experimental data reported in Figure 2, we choose a scaling factor s_{t} obeying the feedback rule
The word "feedback" is used here to indicate that s_{t} is updated by taking into account the current value x^{t} of the state vector x.
Table 12 reports the outcome of this experiment. The most important lesson of this experiment is the following: the SPA implemented with the feedback rule performs as well as the SPA implemented with the constant scaling factor s that is optimal for the prescribed dimension. For instance, when n = 9, the feedback rule leads to a rate of success of 93%, a percentage that matches the rate of success of the SPA implemented with the optimal choice s = 0.1 (cf. Table 5).
5 By way of conclusion
Our main comments on the SPA and the PIM are as follows:
1. The SPA generates a bounded sequence {(x^{t}, λ_{t}, y^{t})}_{t > 0} that depends on a certain initial point u and a certain scaling parameter s > 0. If the sequence converges, then its limit (x̄, λ̄, ȳ) is a solution to Problem 3.
2. The SPA does not aim at finding all the solutions to Problem 3. This is not just because the number of solutions could be very large, but also because some solutions could be extremely hard to detect. It happens in practice that some solutions are hard to find even if the SPA is initialized at a convenient initial point. The existence of such "rare" solutions is somewhat intrinsic to the nature of the cone-constrained eigenvalue problem. An interesting open question is understanding why some solutions are so hard to detect and others are not. An appropriate theoretical analysis remains to be done.
3. The SPA was extensively tested in the Paretian case. One of the main conclusions of Section 2.1 is that the scaling factor s > 0 must be chosen with care if one wishes the SPA to produce a convergent sequence. Extensive numerical experimentation has been carried out for Pareto eigenvalue problems with random data. Specifically, the entries of the matrix A and the entries of the initial vector u were considered as independent random variables uniformly distributed over the interval [−1, 1]. For each dimension n, there is a convenient range of values for the scaling factor s. A good choice of s is one for which the SPA converges in a vast majority of cases, see Tables 5, 6, and 7.
4. The PIM has the advantage of relying on a parameter β that is not too sensitive with respect to dimensionality. Thanks to the plateau phenomenon described in Section 3.1.2, there is no need of putting too much effort in selecting the shifting parameter β. One just needs to make sure that β lies in a certain range that is easily identifiable.
We end this work by mentioning an open alternative. Observe that Problem 3 is about finding a triplet (x, λ, y) that lies in the closed convex cone Q = K × ℝ × K^{+}, and that solves the system of nonlinear equations
Ax − λBx − y = 0,
〈x, y〉 = 0,
ø(x) − 1 = 0.
In principle, one could apply any method for solving systems of nonlinear equations on closed convex cones. Do such methods exist in the literature? Are they efficient? These questions open the way to a broad discussion that deserves a detailed treatment in a separate publication.
A natural idea that comes to mind is finding a global minimum over Q of the residual term
f(x, λ, y) = µ_{1}‖Ax − λBx − y‖² + µ_{2}〈x, y〉² + [ø(x) − 1]²,
where µ_{i} are positive scalars interpreted as weight coefficients. For instance, Friedlander et al., [2] use a residual minimization technique for solving a socalled horizontal linear complementarity problem. However, in our particular context, one should not be too optimistic about residual minimization. Indeed, this technique may produce a local minimum that is not a global one.
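For the Paretian case, the residual above is straightforward to evaluate; the sketch below (weights and function name ours) merely defines f and illustrates that it vanishes exactly at the solutions of Problem 3. Actually locating a global minimizer over Q is the open issue raised above, since a local method may well get trapped at a point with positive residual.

```python
import numpy as np

def residual(x, lam, y, A, mu1=1.0, mu2=1.0):
    """Residual f(x, lambda, y) for Problem 3 with B = I_n and
    phi(x) = sum of the entries of x; the weights mu_i are
    illustrative choices, not prescribed by the text."""
    r1 = A @ x - lam * x - y           # eigenvalue equation residual
    return (mu1 * r1 @ r1              # ||Ax - lambda x - y||^2
            + mu2 * (x @ y) ** 2       # complementarity residual
            + (x.sum() - 1.0) ** 2)    # normalization residual
```

At any solution triplet the three terms vanish simultaneously, so the global minimum of f over Q is 0.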
Last but not least, our original motivation was solving a very specific cone-constrained eigenvalue problem arising in mechanics (namely, detecting directional instabilities in elastic systems with frictional contact). The problem at hand is highly structured and will be treated in full in a forthcoming work. However, we want to mention here some features that could be of interest to nonspecialists:
– Firstly, the cone defining the constraints is a Cartesian product K = K_{1 }× ... × K_{N} of a large number of closed convex cones living in small dimensional Euclidean spaces. As a consequence of this, the vector x = (x_{[1]}, ..., x_{[N]}) breaks into N blocks and Problem 1 leads to a collection of subproblems (coupled, in general).
– Secondly, some of the blocks x_{[r]} are unconstrained, i.e., K_{r} is the whole space. This means that K is unpointed, a situation that is preferable to avoid. One way to overcome this difficulty is by applying our algorithms to the constrained subproblems and by treating the unconstrained ones as in classical numerical linear algebra. Of course, all this requires some necessary adjustments.
– Thirdly, the constrained blocks x_{[r]} obey the unilateral contact law and the Coulomb friction law. Typically, K_{r} is a closed convex cone dependent on the coefficient of friction. It is also possible to accommodate the more general case of an anisotropic friction law.
Specially structured matrices are not reflected by the random data used in this paper. Our primary concern has been focusing on problems for which the matrix A has no structure whatsoever.
Remark 3. The results presented in Tables 6 and 7 were produced with the IST Cluster, an IBM Cluster 1350 computing system. It has a total of 70 computing nodes, each with 2 POWER dual-core processors at 2.3 GHz and 2 GB per core. All the nodes run the AIX 5.3 operating system. The parallel processes communicated using the MPI (Message Passing Interface) library. The programs developed during this study were written in FORTRAN.
Acknowledgments. We thank Prof. Moitinho de Almeida (Departamento de Engenharia Civil e Arquitectura, IST, Portugal) for the parallelization of one of the programs developed for this study.
REFERENCES
[1] H.H. Bauschke and S.G. Kruk, Reflection-projection method for convex feasibility problems with an obtuse cone. J. Optim. Theory Appl., 120 (2004), 503-531.
[2] A. Friedlander, J.M. Martínez and S.A. Santos, Solution of linear complementarity problems using minimization with simple bounds. J. Global Optim., 6 (1995), 253-267.
[3] J.L. Goffin, The relaxation method for solving systems of linear inequalities. Math. Oper. Res., 5 (1980), 388-414.
[4] A. Iusem and A. Seeger, On pairs of vectors achieving the maximal angle of a convex cone. Math. Program., 104 (2005), 501-523.
[5] J.J. Júdice, M. Raydan, S.S. Rosa and S.A. Santos, On the solution of the symmetric eigenvalue complementarity problem by the spectral projected gradient algorithm. Numer. Algor., 47 (2008), 391-407.
[6] J.J. Júdice, I.M. Ribeiro and H.D. Sherali, The eigenvalue complementarity problem. Comput. Optim. Appl., 37 (2007), 139-156.
[7] P. Lavilledieu and A. Seeger, Existence de valeurs propres pour les systèmes multivoques: résultats anciens et nouveaux. Annal. Sci. Math. Québec, 25 (2000), 47-70.
[8] J.J. Moreau, Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires. C.R. Acad. Sci. Paris, 255 (1962), 238-240.
[9] A. Pinto da Costa, I.N. Figueiredo, J.J. Júdice and J.A.C. Martins, A complementarity eigenproblem in the stability of finite dimensional elastic systems with frictional contact. In: Complementarity: Applications, Algorithms and Extensions, pp. 67-83. Applied Optimization Series, vol. 50, Eds. M. Ferris, O. Mangasarian and J.S. Pang, Kluwer Acad. Publ., Dordrecht (1999).
[10] A. Pinto da Costa, J.A.C. Martins, I.N. Figueiredo and J.J. Júdice, The directional instability problem in systems with frictional contacts. Comput. Methods Appl. Mech. Engrg., 193 (2004), 357-384.
[11] A. Pinto da Costa and A. Seeger, Cone-constrained eigenvalue problems: theory and algorithms. Comput. Optim. Appl., 43/44 (2009), in press.
[12] M. Queiroz, J.J. Júdice and C. Humes, The symmetric eigenvalue complementarity problem. Math. Comp., 73 (2004), 1849-1863.
[13] P. Quittner, Spectral analysis of variational inequalities. Comm. Math. Univ. Carolinae, 27 (1986), 605-629.
[14] M. Schatzman, Analyse Numérique, Dunod, Paris (2001).
[15] A. Seeger, Eigenvalue analysis of equilibrium processes defined by linear complementarity conditions. Linear Algebra Appl., 292 (1999), 1-14.
[16] A. Seeger and M. Torki, On eigenvalues induced by a cone constraint. Linear Algebra Appl., 372 (2003), 181-206.
Received: 26/V/08. Accepted: 23/IX/08.
#767/08.