Euclidean distance matrices: special subsets, systems of coordinates and multibalanced matrices

Pablo Tarazaga; Blair Sterba-Boatwright; Kithsiri Wijewardena

Department of Computing and Mathematical Sciences, Texas A&M University, Corpus Christi, TX 78412. E-mails: pablo.tarazaga@tamucc.edu / blair.sterba-boatwright@tamucc.edu / kithsiri.wijewardena@tamucc.edu

ABSTRACT

In this paper we present special subsets of positive semidefinite matrices where the linear function κ becomes a geometric similarity and its inverse can be easily computed. The images of these special subsets are characterized geometrically. We also study systems of coordinates for spherical matrices and, at the end, we introduce the class of multibalanced distance matrices.

Mathematical subject classification: 15A57.

Key words: Euclidean distance matrices.

1 Introduction and preliminaries

Although of interest for over a century, most useful results concerning Euclidean distance matrices (EDMs) have appeared during the last thirty years, motivated by applications to the multidimensional scaling problem in Statistics and to molecular conformation problems in Chemistry and Molecular Biology. These applications focus on the (re-)construction of sets of points in ℝ^n such that the distances between these points are as close as possible to a given set of inter-point distances. Recent work by Tarazaga et al. has focused on the interplay between configurations of points (coordinate matrices), the corresponding distance matrices, and the set of positive semidefinite (PSD) matrices.

We begin by introducing basic notation and definitions. The set of symmetric matrices of order n will be denoted by Sn, and by Wn we indicate the set of symmetric positive semidefinite matrices. It is important to recall that Wn is a closed convex cone. The subspace of a vector space generated by vectors v_1,…, v_k will be denoted by span{v_1,…, v_k}. The vector of all ones is denoted e, and M is the orthogonal complement of span{e} in ℝ^n. This vector e and the subspace M play a key role in the theory of EDMs. The Frobenius inner product in the space of matrices is given by ⟨A, B⟩_F = trace(A^tB).

A matrix D is called a Euclidean Distance Matrix (EDM) if there are n points x_1,…, x_n ∈ ℝ^r such that

d_ij = ||x_i − x_j||²,  i, j = 1,…, n.

Observe that the entries of D are squared inter-point distances. The set of all EDMs of order n forms a convex cone that we denote by Ln. If a matrix is symmetric, nonnegative and its diagonal entries are zero, it is called a predistance matrix. We say that D is spherical if the points that generate it lie on the surface of a sphere.

There are well-known relations between the sets Wn and Ln that we now summarize. The EDMs are the image under a linear transformation of the cone Wn (see [3] and [5]).

Given B ∈ Wn we define the linear transformation

κ(B) = be^t + eb^t − 2B,

where b is the vector whose components are the diagonal entries of B. Then D = κ(B) ∈ Ln, and κ(Wn) = Ln.
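The action of κ can be checked numerically. Below is a minimal numpy sketch (the name `kappa` is ours, not the paper's): it builds the Gram matrix B = XX^t of three points and verifies that κ(B) is exactly the matrix of squared inter-point distances.

```python
import numpy as np

def kappa(B):
    """kappa(B) = b e^t + e b^t - 2B, where b = diag(B)."""
    b = np.diag(B).reshape(-1, 1)
    e = np.ones((B.shape[0], 1))
    return b @ e.T + e @ b.T - 2 * B

# A configuration of 3 points in R^2 and its Gram matrix B = X X^t.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = X @ X.T
D = kappa(B)
# Entries of D are the squared inter-point distances.
assert np.allclose(D[0, 1], np.sum((X[0] - X[1]) ** 2))
assert np.allclose(D[1, 2], np.sum((X[1] - X[2]) ** 2))
```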

Let s ∈ ℝ^n be a vector. A maximal face of the set Wn is defined by the formula

Wn(s) = {X ∈ Wn | Xs = 0}.

Throughout this paper, we will assume without loss of generality that s^te = 1. When κ is restricted to such a maximal face, it becomes one-to-one, and the inverse transformation is given by

τ_s(D) = −½ (I − es^t) D (I − se^t).

Every face Wn(s) with s^te = 1 corresponds to a different location of the origin of coordinates (for more information, see Section 2 of [5]).

A very important particular case is when s = e/n. In that case we will denote τ_{e/n} by just τ; τ and κ are inverses of each other between Ln and Wn(e). Matrices in Wn(e) are called centered positive semidefinite matrices, and the origin of coordinates is set at the centroid of the configuration's points.
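For s = e/n, τ reduces to the classical centering formula τ(D) = −½ J D J with J = I − ee^t/n. A small numpy sketch (function names are ours) verifying that τ and κ invert each other on an EDM:

```python
import numpy as np

def kappa(B):
    b = np.diag(B).reshape(-1, 1)
    e = np.ones((B.shape[0], 1))
    return b @ e.T + e @ b.T - 2 * B

def tau(D):
    """tau(D) = -1/2 * J D J with J = I - e e^t / n (centering projector)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ D @ J

# For any EDM D, tau(D) is a centered PSD matrix and kappa(tau(D)) = D.
X = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
D = kappa(X @ X.T)
B = tau(D)
assert np.allclose(B @ np.ones(3), 0)   # centered: Be = 0
assert np.allclose(kappa(B), D)         # inverses between Ln and Wn(e)
```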

Given these preliminaries, we turn now to the paper at hand. First of all, we will show in Section 2 that the function κ, when restricted to special subsets of Wn, becomes a geometric similarity. A key example of these special subsets is the set of correlation matrices. Further, the inverse of κ when restricted to one of these subsets has a particularly simple form. Finally, we characterize the images of these special subsets and establish additional conditions for deciding whether a given distance matrix D belongs to one of these images. There is a difference between this approach and the classical approach, which looks for right inverses of the linear function κ: in the classical approach the origin of the system of coordinates is the key idea, while here the diagonal values of positive semidefinite matrices are crucial.

In Section 3, we deal with the location of the origin of coordinates, determined by a vector s such that s^te = 1. There, we characterize systems of coordinates associated with spherical matrices, and explore the set of EDMs that can be associated with a particular system of coordinates.

In the final section, we introduce the class of multibalanced matrices, a generalization of the class of balanced EDMs introduced by Hayden and Tarazaga in [4]. The geometrical structure of matrices in this class is given by points on k spheres with centers at the origin such that the centroid of the points on each sphere is also the origin of coordinates. As in the paper mentioned above, here we are able to characterize this class of distance matrices using only some spectral properties.

2 Similarities between subsets of EDMs and PSD matrices

In this section we show how the linear transformation κ becomes a geometric similarity when restricted to special subsets of positive semidefinite matrices. We characterize the images of these subsets under κ and we also find the inverse function on these subsets. We will denote by ℝ^n_+ the set of vectors in ℝ^n with positive components.

Given b ∈ ℝ^n_+, we define the set

W_n^b = {X ∈ Wn : x_ii = b_i, i = 1,…, n}.

In other words, W_n^b is the set of all positive semidefinite matrices with fixed diagonal b. Note that W_n^e is the set of correlation matrices.

Lemma 2.1. Given b ∈ ℝ^n_+, then κ restricted to W_n^b is a geometric similarity.

Proof. Given X and Y ∈ W_n^b, since X and Y share the same diagonal b, κ(X) − κ(Y) = −2(X − Y), and hence ||κ(X) − κ(Y)||_F = 2||X − Y||_F.

Corollary 2.2. The linear transformation κ is one-to-one on W_n^b for every b ∈ ℝ^n_+.

We will denote by L_n^b the image of W_n^b under κ; in other words, L_n^b = κ(W_n^b). Now we are interested in the exact form of the inverse of κ on L_n^b and in a characterization of L_n^b.

Since we are working with κ restricted to W_n^b, from the definition of κ,

D = κ(B) = eb^t + be^t − 2B.

We can solve for B, and we obtain the following expression for the inverse:

τ_b(D) = ½ (be^t + eb^t − D).

Lemma 2.3. The linear transformations κ and τ_b are inverses of each other and similarities between W_n^b and L_n^b.

Remark. Although not crucial here, it is worth noting that κ and τ_b are similarities and inverses of each other between the linear variety S_n^b = {X ∈ Sn : x_ii = b_i, i = 1,…, n} and the subspace of hollow matrices Hn = {X ∈ Sn : x_ii = 0, i = 1,…, n}. This fact is especially important when b = e, since the correlation matrices are the intersection of Wn and S_n^e.
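Lemma 2.3 is easy to confirm numerically. A minimal numpy sketch (writing `tau_b` for τ_b; names are ours): build a PSD matrix with prescribed diagonal b, apply κ, and recover the matrix with τ_b.

```python
import numpy as np

def kappa(B):
    b = np.diag(B).reshape(-1, 1)
    e = np.ones((B.shape[0], 1))
    return b @ e.T + e @ b.T - 2 * B

def tau_b(D, b):
    """tau_b(D) = (b e^t + e b^t - D)/2: the inverse of kappa on matrices
    with fixed diagonal b."""
    b = b.reshape(-1, 1)
    e = np.ones((D.shape[0], 1))
    return 0.5 * (b @ e.T + e @ b.T - D)

# A PSD matrix with diagonal b = (1, 4, 9), built from a Cholesky-like factor.
L = np.array([[1.0, 0.0, 0.0], [1.0, np.sqrt(3), 0.0], [2.0, 1.0, 2.0]])
B = L @ L.T
b = np.diag(B)
D = kappa(B)
assert np.allclose(tau_b(D, b), B)   # tau_b recovers B exactly
```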

Let us consider now the set L_n^b. A matrix D belongs to L_n^b if and only if D = κ(B) with B ∈ W_n^b, which implies the existence of a coordinate matrix X such that B = XX^t and the norm of the ith row of X is exactly √b_i.

If we add the origin of coordinates to the configuration of points, these n + 1 points generate a new distance matrix in Ln+1.

Lemma 2.4. If D ∈ L_n^b, then the bordered matrix

D̂ = [ 0   b^t ]
     [ b    D  ]

belongs to L_{n+1}.

Proof. Just note that the squared distances from the origin to the n original points are exactly the components of b.

This condition is also sufficient.

Theorem 2.5. If D ∈ Ln, then D ∈ L_n^b if and only if the bordered matrix D̂ of Lemma 2.4 belongs to L_{n+1}.

Proof. Let D̂ ∈ L_{n+1} be the bordered matrix; then take s = (1, 0,…, 0)^t ∈ ℝ^{n+1}, which places the origin of coordinates at the added point, and compute τ_s(D̂). The resulting matrix vanishes in its first row and column, and its trailing n×n block is

−½ D + ½ (be^t + eb^t).

But this says that −½ D + ½ (be^t + eb^t) = τ_b(D) is in W_n^b, which completes the proof.

Corollary 2.6. Given D ∈ Ln, then D ∈ L_n^b if and only if

τ_b(D) = ½ (be^t + eb^t − D)

belongs to Wn.

A very special case is the set L_n^e, the image of the correlation matrices. From Lemma 2.3 it is clear that

τ_e(D) = ee^t − ½ D,

and the rank of τ_e(D) is not always the embedding dimension of D. Note that because of Lemma 2.1 the set L_n^e is a stretch of the set of correlation matrices. Also note that L_n^e is formed only by spherical distance matrices with radius less than or equal to one. This set was described by Alfakih and Wolkowicz and used to give a characterization of the EDMs in [2].

Let us assume that D ∈ Ln is spherical with r(D) ≤ 1, where r(D) denotes the radius of the configuration of points that generates D. Then because of Theorem 3.4 of [6] there exists an s such that s^te = 1 and Ds = 2r²e. Now if we compute τ_s(D), we obtain

τ_s(D) = r² ee^t − ½ D.

Now the following result gives us some information on the geometry of L_n^e.

Lemma 2.7. Given a spherical D ∈ Ln with radius less than or equal to one, then

1. If r(D) = 1, then τ_s(D) = τ_e(D), and the rank of τ_e(D) gives the embedding dimension of D.

2. If r(D) < 1, then rank(τ_e(D)) = e.d.(D) + 1.

Proof. The first part is immediate since for r(D) = 1,

τ_s(D) = ee^t − ½ D = τ_e(D),

and τ_s(D) always has its rank equal to the embedding dimension of D.

In order to prove the second part just note that

τ_e(D) = τ_s(D) + (1 − r²) ee^t,

and because r < 1, then (1 − r²) > 0 and (1 − r²)ee^t is a rank one matrix. Thus rank(τ_e(D)) = rank(τ_s(D)) + 1 = e.d.(D) + 1.

Now we go back to the general class of sets L_n^b for b ∈ ℝ^n_+. Here we will introduce a very simple necessary condition for a matrix D to belong to L_n^b that can be checked using the entries of the matrix D.

Theorem 2.8. If D ∈ L_n^b, then

(√b_i − √b_j)² ≤ d_ij ≤ (√b_i + √b_j)²

for i ≠ j.

Proof. From the cosine law,

d_ij = ||x_i − x_j||² = ||x_i||² + ||x_j||² − 2||x_i|| ||x_j|| cos θ = b_i + b_j − 2√(b_i b_j) cos θ,

and because −1 ≤ cos θ ≤ 1, the result follows.
A trivial consequence of this result is that L_n^b is bounded. This necessary condition also tells us that there is a significant difference between L_n^b for an arbitrary b ∈ ℝ^n_+ and the case when b is a constant vector (a multiple of the vector e) or just the vector e. Note that for λ > 0, λD = λκ(B) = κ(λB), and because of the linearity of κ we have that

λ L_n^b = L_n^{λb}.

This allows us to normalize the vector b, taking for example e^tb = 1.

Lemma 2.9. Given D ∈ L_n^b, then λD with 0 < λ < 1 belongs to L_n^b if and only if b is a constant vector.

Proof. The condition is clearly necessary, since if b is not constant then for some i and j the lower bound (√b_i − √b_j)² of Theorem 2.8 is strictly positive, and this implies that λD with λ small enough is not in L_n^b.

Let us now prove that the condition is sufficient for a constant vector b = βe.

Since D ∈ L_n^{βe} there exists B in W_n^{βe} such that κ(B) = D. Besides this, note that D is spherical since the diagonal of B is the constant vector βe. Now, because κ is linear,

λD = λκ(B) = κ(λB) = κ((1 − λ)βee^t + λB) = κ(B̃),

where B̃ = (1 − λ)βee^t + λB. Observe that κ((1 − λ)βee^t) = 0 since ee^t is in the null space of κ. Besides this, (1 − λ)βee^t and λB are in Wn, so their sum is also in Wn. Even more, every diagonal entry of B̃ is equal to β, and then B̃ ∈ W_n^{βe} and λD = κ(B̃) ∈ L_n^{βe}.

Finally we give a sufficient condition for a matrix in L_n^b to be in the topological boundary of the set.

Lemma 2.10. Given D ∈ L_n^b with b ∈ ℝ^n_+, if one of the inequalities given in Theorem 2.8 is satisfied with equality, then D belongs to the boundary of L_n^b.

Proof. Suppose that d_ij = (√b_i + √b_j)² or d_ij = (√b_i − √b_j)². Now for D ∈ L_n^b let us compute B = τ_b(D), whose off-diagonal entries are b_ij = ½(b_i + b_j − d_ij). Since B ∈ W_n^b, the minors of every principal submatrix of B should be greater than or equal to zero. If q = {i, j} we define

M_{q×q} = [ b_i   b_ij ]
          [ b_ij  b_j  ],

and if we compute the determinant we have det(M_{q×q}) = b_i b_j − b_ij². But now clearly, if one of the inequalities from Theorem 2.8 is in fact an equality, then b_ij = ∓√(b_i b_j) and det(M_{q×q}) = 0, which implies that B = τ_b(D) is in ∂W_n^b and then D ∈ ∂L_n^b, which finishes the proof.

3 Characterization of spherical vectors and their corresponding distance matrices

As has been noted above, a matrix D is a spherical EDM if and only if there is a vector s such that s^te = 1 and Ds = 2r²e. In this case, we say that the vector s is spherical also. Let Sn denote the set of spherical vectors of dimension n. In this section of the paper, we present a simple characterization of the elements of Sn. We also investigate the sets L_n(s) = {D ∈ Ln | Ds = e} for fixed s ∈ Sn. For dimensions n ≤ 4, we can completely describe L_n(s) for a given s.

Let us consider s ∈ ℝ^n, n ≥ 2, such that s^te = 1. We say that s satisfies the Halves Condition if and only if the following hold:

(i) At least two components of s are positive; and

(ii) If s has p non-negative components, the sum of any p − 1 of them is at least ½.

Note that if s ∈ ℝ^n_+, then the Halves Condition simplifies to the condition that each component of s is at most ½. Note also that if Ds = 2r²e, then (D/(2r²))s = e, so to understand the set of spherical vectors it suffices to consider the condition Ds = e. As a final preliminary comment, if Ds = e, then any configuration of points giving rise to D lies on a hypersphere of radius 1/√2, which we may take to be centered at the origin.
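Condition (ii) only needs to be checked for the worst case: the smallest sum of p − 1 non-negative components is the total non-negative sum minus the largest one. A numpy sketch with a hypothetical helper `halves_condition` (name and tolerances are ours):

```python
import numpy as np

def halves_condition(s, tol=1e-12):
    """Halves Condition for s with s^t e = 1:
    (i) at least two components of s are positive;
    (ii) every sum of p-1 of the p non-negative components is >= 1/2."""
    s = np.asarray(s, dtype=float)
    assert abs(s.sum() - 1.0) < 1e-9
    pos = s[s > tol]
    nonneg = s[s >= -tol]
    if len(pos) < 2:
        return False
    # smallest (p-1)-sum: drop the largest non-negative component
    return bool(nonneg.sum() - nonneg.max() >= 0.5 - tol)

assert halves_condition([0.5, 0.5])                 # the n = 2 spherical vector
assert halves_condition([0.0, 0.7, -0.3, 0.6])      # vector used in Section 3
assert not halves_condition([0.9, 0.1])             # 0.1 < 1/2: not spherical
```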

Since the property of being a distance matrix is preserved by permutations applied simultaneously to rows and columns (they need to preserve symmetry), it follows that Sn is closed under permutations. Hence if s satisfies the Halves Condition, so will any permutation of s. In the proofs below we permute s without further justification.

Lemma 3.1. If s is spherical, then s satisfies the Halves Condition.

Proof. Assume that the p non-negative elements of s occur at the beginning of the vector, and let n be the dimension of s. Since D is a distance matrix, all elements of D are non-negative; since Ds = e, s must have at least one positive element. If s had only one non-negative element, then the product of the first row of D with s would be non-positive. Thus p ≥ 2.

To understand the second condition, consider the product of row p of D with s. Since the pth element of row p is 0, and the (p + 1)st through nth elements of s are negative, the product of the pth through nth elements of row p with s is non-positive. Since the product of row p with s should be 1, this means that the product of the first p − 1 elements of row p with the first p − 1 elements of s must be at least 1. However, since Ds = e, every element of the matrix D is bounded above by 2. Therefore, the sum of the first p − 1 elements of s must be at least ½. Since this argument is unaffected by reordering the non-negative elements of s, the lemma is proved.

Lemma 3.2. If s ∈ ℝ², then s is spherical if and only if s = (½, ½)^t.

Proof. From Lemma 3.1, if s is spherical, then it must satisfy the Halves Condition, which implies in turn that s = (½, ½)^t. On the other hand, if s = (½, ½)^t, then the points x_1 = −1/√2 and x_2 = 1/√2 on the real line give rise to a distance matrix D such that Ds = e.

Lemma 3.3. Suppose s ∈ ℝ^n, n ≥ 3, has a negative element, which we assume to be the first one. Then,

(a) s ∈ Sn if and only if ŝ ∈ Sn, where ŝ = (−s_1, s_2,…, s_{n−1}, s_n)^t;

(b) If s satisfies the Halves Condition, so does ŝ.

Proof. There is a spherical matrix D satisfying Ds = e if and only if there is a configuration of points {x_i} centered at the origin such that X^ts = 0 [3]. Replace point x_1 with the point −x_1, calling the resulting configuration X̂. X̂ is still spherical, and satisfies X̂^tŝ = 0. The converse of part (a) follows in the same fashion.

To prove (b), part (i) of the Halves Condition is obviously true for ŝ, so we must check part (ii). Let p represent the number of non-negative elements of s; the vector ŝ then has p + 1 non-negative elements, so we must check sums of p non-negative elements of ŝ. Without loss of generality, let the non-negative components of s be s_2,…, s_{p+1}. If a sum of p components of ŝ consists of the components s_2,…, s_{p+1}, then it is the sum of all p non-negative components of s, which is at least the sum of any p − 1 of them and hence at least ½. If, on the other hand, a sum of p components of ŝ includes −s_1 > 0 together with p − 1 components from s_2,…, s_{p+1}, then it is again at least ½, since those p − 1 components alone already sum to at least ½.
Remark. The effect of part (a) of Lemma 3.3 is that ŝ has fewer negative elements than s, while s ∈ Sn if and only if ŝ ∈ Sn. Then, if s has m negative elements, m applications of Lemma 3.3 produce a vector with no negative elements that lies in Sn if and only if s does.

Lemma 3.4. Suppose s ∈ ℝ^n, n ≥ 3, has a zero element, say, s_n = 0. Define s̄ = (s_1, s_2,…, s_{n−1})^t.

(a) If s̄ ∈ S_{n−1}, then s ∈ Sn.

(b) If s satisfies the Halves Condition, so does s̄.

Proof. Let X be a configuration of n − 1 points in ℝ^{n−2} such that D̄ = κ(XX^t) satisfies D̄s̄ = e. Then the points of X lie on a hypersphere of radius 1/√2 and topological dimension n − 3. Let this hypersphere be the ''equator'' of a hypersphere of the same radius and dimension n − 2 in ℝ^{n−1}. Augment X by adding an nth point at the ''north pole'' of this hypersphere. Since the squared distance from the north pole to every equatorial point is ½ + ½ = 1, the distance matrix of this augmented configuration is

D = [ D̄   e ]
    [ e^t  0 ].

It is easy to check that Ds = e. The proof of (b) is obvious.
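The ''north pole'' construction can be carried out numerically. A sketch starting from the antipodal pair of Lemma 3.2 (numpy; the bordered matrix is built directly from the block form):

```python
import numpy as np

# Lemma 3.4(a): if Dbar * sbar = e, the bordered matrix D = [[Dbar, e], [e^t, 0]]
# satisfies D s = e for s = (sbar, 0).
Dbar = np.array([[0.0, 2.0], [2.0, 0.0]])      # antipodal pair on radius 1/sqrt(2)
sbar = np.array([0.5, 0.5])
assert np.allclose(Dbar @ sbar, 1.0)

n = len(sbar) + 1
D = np.zeros((n, n))
D[:-1, :-1] = Dbar
D[:-1, -1] = D[-1, :-1] = 1.0                  # squared distance to the pole
s = np.append(sbar, 0.0)
assert np.allclose(D @ s, 1.0)                 # the augmented s is spherical
```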

Remark. The converse to part (a) of Lemma 3.4 will follow from Theorem 3.7 below.

Lemma 3.5. If s ∈ ℝ³, then s is spherical if and only if s satisfies the Halves Condition.

Proof. From Lemma 3.1, we need only consider the sufficiency of the Halves Condition. By Lemma 3.3, we may assume that s has no negative components.

Case 1: Suppose s has a zero component. Then the Halves Condition implies that the other two components of s are both ½, and we are done by Lemmas 3.4 and 3.2.

Case 2: Suppose s ∈ ℝ³_+. For notational convenience, let

D = [ 0  x  y ]
    [ x  0  z ]
    [ y  z  0 ]

and s = (u, v, w)^t. Solving Ds = e and using the fact that u + v + w = 1 gives the solutions

x = (1 − 2w)/(2uv),  y = (1 − 2v)/(2uw),  z = (1 − 2u)/(2vw).

Since we are assuming that s has all positive components, each of u, v, w is less than or equal to ½, so each of x, y, z is nonnegative. Similarly, to see that x = (1 − 2w)/(2uv) ≤ 2, substitute w = 1 − u − v to get (2u + 2v − 1)/(2uv) ≤ 2. Cross-multiplying, re-writing and factoring the result gives (2u − 1)(2v − 1) ≥ 0, which again is true because u and v are bounded above by ½. Therefore, we have 0 ≤ x, y, z ≤ 2, which implies that each of √x, √y, √z is potentially a distance between two points on the circle of radius 1/√2. Consult Figure 3.1.


The segments √x, √y, √z are the sides of a triangle inscribed in this circle, as shown in Figure 3.1, if and only if the equation (z − x − y + xy)² − xy(2 − x)(2 − y) = 0 holds. Substituting the values of x, y, z above into the left hand side of this equation produces 0, so the values for x, y, z derived from u, v, w do indeed describe a configuration of points on a circle of radius 1/√2.

Lemma 3.6. Suppose s ∈ ℝ^n_+, n ≥ 4, and suppose s_1 and s_2 are the smallest two components of s. Define ŝ = (s_1 + s_2, s_3,…, s_n)^t ∈ ℝ^{n−1}_+.

(a) If ŝ ∈ S_{n−1}, then s ∈ Sn.

(b) If s satisfies the Halves Condition, then so does ŝ.

Proof. If ŝ ∈ S_{n−1}, there is a spherical configuration of points x̄_1,…, x̄_{n−1} with coordinate matrix X̄ satisfying X̄^tŝ = 0. Define a new configuration X in ℝ^{n−2} by taking x_1 = x_2 = x̄_1 and x_j = x̄_{j−1}, j = 3,…, n. Then X^ts = 0, and X is a configuration of points whose distance matrix D satisfies Ds = e. The proof of (b) is obvious.
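Lemma 3.6 can be illustrated by running the construction in reverse: split the n = 2 spherical vector (½, ½) into (0.2, 0.3, 0.5) and duplicate the corresponding point. A numpy sketch (`edm` is our helper for κ(XX^t)):

```python
import numpy as np

def edm(X):
    """Squared-distance matrix of the rows of X."""
    G = X @ X.T
    g = np.diag(G)
    return g[:, None] + g[None, :] - 2 * G

# sbar = (1/2, 1/2) is spherical for the antipodal pair; duplicating the
# first point gives a configuration for s = (0.2, 0.3, 0.5).
Xbar = np.array([[np.sqrt(0.5), 0.0], [-np.sqrt(0.5), 0.0]])
X = np.vstack([Xbar[0], Xbar])            # x1 = x2 = xbar_1
s = np.array([0.2, 0.3, 0.5])             # 0.2 + 0.3 = 1/2
D = edm(X)
assert np.allclose(D @ s, 1.0)            # Ds = e, so s is spherical
```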

Theorem 3.7. If s ∈ ℝ^n, n ≥ 2, then s is spherical if and only if s satisfies the Halves Condition.

Proof. The cases n = 2, 3 are covered above, as is the case where s is spherical. Therefore let s ∈ ℝ^n, n ≥ 4, satisfy the Halves Condition. We will use induction on n. From Lemma 3.3, we may assume that s has only non-negative components. If some component of s is 0, then reduce s to s̄ as in Lemma 3.4. s̄ satisfies the Halves Condition, so by induction s̄ is spherical. Then Lemma 3.4 implies that s is spherical as well. If no component of s is 0, then reduce s to ŝ as in Lemma 3.6. Again, ŝ satisfies the Halves Condition, and thus ŝ is spherical. Therefore, by Lemma 3.6, s is spherical, and we are done.

We turn now to finding L_n(s): that is, given a vector s ∈ ℝ^n satisfying the Halves Condition, what is the set of all EDMs D such that Ds = e? We examine the resulting algebra for dimensions n = 2, 3, and 4 below.

Case n=2: Lemma 3.2 above covers the only case.

Case n = 3: Again, for convenience, we use

D = [ 0  x  y ]
    [ x  0  z ]
    [ y  z  0 ]

and s = (u, v, w)^t. Suppose first that one component of s is 0: say, the first. Then since s satisfies the Halves Condition, s = (0, ½, ½)^t. Solving Ds = e gives us that x + y = 2 and z = 2: that is, in any configuration X corresponding to D, points 2 and 3 are antipodal on a circle of radius 1/√2. Further, wherever point 1 is placed on that circle, the three points form the vertices of a right triangle, and the Pythagorean Theorem ensures that x + y = z = 2. Therefore, in this case L_3(s) is the segment of matrices with z = 2 and x + y = 2, 0 ≤ x ≤ 2.

Next suppose that no component of s is 0. Then the solutions

x = (1 − 2w)/(2uv),  y = (1 − 2v)/(2uw),  z = (1 − 2u)/(2vw)

are noted above in the proof of Lemma 3.5. The determinant of D is 2xyz. If no component of s is ½, then none of x, y, z is 0, and D is invertible and therefore the unique EDM for s.

If one component of s is ½, say u, then v + w = ½ and we get x = y = 2, z = 0. Again there is a unique D for this s.

Case n = 4: This case is substantially more difficult than the previous two, and only an outline of the argument is provided here. As with the argument for n = 3 above, different cases are used for different numbers of 0 components. For convenience, we will use

D = [ 0  x  y  z ]
    [ x  0  g  h ]
    [ y  g  0  k ]
    [ z  h  k  0 ]

and s = (t, u, v, w)^t.

(i) Two components of s are 0: say, v = w = 0. The equations Ds = e and s^te = 1 reduce to

ux = 1,  tx = 1,  ty + ug = 1,  tz + uh = 1,  t + u = 1.

This implies that t = u = ½ and x = 2. Then y + g = 2 and z + h = 2, while k is constrained only by 0 ≤ k ≤ 2. To see which configurations correspond to these distances, take any four points on a sphere of radius 1/√2 such that points 1 and 2 are antipodal. For any arbitrary point 3 on the sphere, points 1, 2, and 3 will form a right triangle, so y + g = 2 as in the case for n = 3. The same is true for points 1, 2, and 4, so z + h = 2. Therefore, choosing k to be the squared distance from point 3 to point 4, we get a distance matrix D that satisfies Ds = e. Topologically, L_4(s) is parameterized by the difference of two spheres, S² − S⁰.

(ii) One component of s is 0: say, t = 0. Then we can use the equations Ds = e and s^te = 1 to solve for z, g, h, k, and w in terms of x, y, u, and v. Thus, we can parameterize the set of possible D's by the values of x and y.

A necessary (but not sufficient) condition for D to be an EDM is for each component of D to fall between 0 and 2; that is, 0 ≤ x, y, z, g, h, k ≤ 2. Call such a matrix D 2-bounded. It can be shown, for any s satisfying the Halves Condition with t = 0, that 0 ≤ g, h, k ≤ 2. The condition 0 ≤ z ≤ 2 creates a region in the x-y plane bounded by parallel lines with non-empty intersection with the square 0 ≤ x, y ≤ 2. The resulting polygonal region in the x-y plane parameterizes the set of all 2-bounded matrices D such that Ds = e.

To see which of these 2-bounded matrices are actually EDMs, we turn to B = τ_s(D) and, in particular, to the characteristic polynomial of B. If D is an EDM such that Ds = e, then

B = τ_s(D) = ½ (ee^t − D).

One eigenvalue of B will be zero, since (−D + ee^t)s = 0. Therefore, the characteristic polynomial of B will take the form λ(λ³ − 2λ² + Qλ + L). A necessary condition for D to be an EDM is for the remaining eigenvalues of B to be non-negative [3]. In turn, this requires that L ≤ 0. The equation L = 0 defines an ellipse that is tangent to each side of the polygonal boundary of the 2-bounded matrices in the x-y plane. For the vector s = (0, 0.7, −0.3, 0.6)^t, the ellipse and surrounding polygonal region appear in Figure 3.2.


An argument similar to the one used in Case 2 of Lemma 3.5 can be used to show that if D is a matrix generated by a pair (x, y) lying on the ellipse, then D is an EDM (the proof consists of showing that each potential triangle is in fact a triangle). Any D corresponding to a value of (x, y) inside the ellipse is a convex combination of matrices generated by points (x, y) on the ellipse itself. Since the set of EDMs L4 is convex, this implies that L_4(s) is parameterized by the ellipse and its interior.

(iii) No component of s is 0. The argument in this case follows the same general outline as the argument in the previous case, using the equations Ds = e and s^te = 1 to solve for z, g, h, and k in terms of the other variables. The conditions 0 ≤ x, k ≤ 2 create a vertical strip in the x-y plane; 0 ≤ y, h ≤ 2 defines a horizontal strip; and 0 ≤ z, g ≤ 2 defines a diagonal strip. It can be shown that if s satisfies the Halves Condition, these three strips have a non-empty intersection, which parameterizes the set of 2-bounded matrices D. Again, the equation L = 0 defines a rational algebraic curve tangent to all six sides of this region with the property that each pair (x, y) on or interior to the curve L = 0 defines an EDM, and hence this set parameterizes L_4(s). This concludes the case n = 4.

4 Multibalanced Euclidean distance matrices

In this section we generalize a class of matrices introduced by Hayden and Tarazaga in [4] called balanced Euclidean distance matrices: spherical EDMs for which the centroid of the points is the center of the sphere.

We will need some notation that we introduce now. Given a positive integer n, consider a partition of the set {1, 2,…, n} into k subsets with cardinalities n_1, n_2,…, n_k. Thus Σ_{i=1}^k n_i = n. Without loss of generality we can assume that the first n_1 integers are in the first subset, and so on. For a reason that will be obvious soon, we ask that n_i ≥ 2, i = 1,…, k.

We will start by defining a multibalanced configuration of n points, and then we will describe its properties and a characterization of the corresponding distance matrices.

A configuration of n points is multibalanced if there are k ≥ 2 spheres with center at the origin such that the ith sphere contains n_i points and the centroid of these n_i points is the origin (the case k = 1 was introduced in [4]). A particular case, when n_i = 2, i = 1,…, k, was studied by A. Alfakih [1], but from a different point of view.

We need another piece of notation. For i = 1,…, k the vector e_i ∈ ℝ^n is the indicator vector of the ith subset of the partition: its jth component is one if j belongs to the ith subset and zero otherwise. Of course e stands, as always, for the vector of all ones, and e = Σ_{i=1}^k e_i. A vector is called a blocked vector (with respect to the partition introduced above) if it belongs to span{e_1,…, e_k}.

We now look at analytical properties of these multibalanced configurations. It is important to point out that these configurations are invariant under rotations and reflections, so in place of considering a coordinate matrix X we can work with the corresponding matrix B = XX^t. Remember that the null spaces of X^t and B are the same.

Lemma 4.1. The coordinate matrix X represents a multibalanced configuration of points if and only if the vectors e_i, i = 1,…, k, are in the null space of B = XX^t.

Proof. Notice that Be_i = 0 if and only if X^te_i = 0, and this happens if and only if the centroid of the points in the ith sphere is the origin (see [4]).
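A minimal multibalanced example with k = 2 spheres, checked against Lemma 4.1 (numpy; this particular configuration is ours, chosen for illustration):

```python
import numpy as np

# Two spheres centred at the origin: radius 1 holding (+-1, 0) and
# radius 2 holding (0, +-2); each pair averages to the origin.
X = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
B = X @ X.T
e1 = np.array([1.0, 1.0, 0.0, 0.0])    # indicator of the first sphere
e2 = np.array([0.0, 0.0, 1.0, 1.0])    # indicator of the second sphere
assert np.allclose(B @ e1, 0)          # Lemma 4.1: e_i in the null space of B
assert np.allclose(B @ e2, 0)
```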

Corollary 4.2. If the coordinate matrix X represents a multibalanced configuration, then e and b (the diagonal of B) are in the null space of X^t and B.

Proof. Just note that both vectors are linear combinations of the vectors e_i, i = 1,…, k: indeed e = Σ e_i and b = Σ r_i² e_i, where r_i is the radius of the ith sphere.

Corollary 4.3. When computing D = κ(B) = be^t + eb^t − 2B, the rank two perturbation be^t + eb^t is orthogonal to B.

Proof. The matrix B has spectral decomposition B = Σ_i λ_i x_i x_i^t, but only r eigenvalues are different from zero (rank(B) = r), so B = Σ_{i=1}^r λ_i x_i x_i^t. Now any eigenvector x_i, i = 1,…, r, is orthogonal to the vectors in the null space of B. If we compute now the Frobenius inner product between be^t + eb^t and B we have

⟨be^t + eb^t, B⟩_F = trace((eb^t + be^t)B) = trace(eb^tB) + trace(be^tB).

But we are computing the trace of the zero matrix, since e and b are in the null space of B.

Clearly the matrix be^t + eb^t is symmetric and has two eigenvalues different from zero. If (λ, x) is an eigenpair of be^t + eb^t with λ ≠ 0, then it is an eigenpair of D, since x ∈ span{e, b} lies in the null space of B. Moreover, because the trace of be^t + eb^t is positive, one of the eigenvalues has to be positive (by the way, D has only one positive eigenvalue) and the corresponding eigenpair has to be the Perron–Frobenius eigenpair. A somewhat lengthy but direct computation proves the following result. Let us denote by b̂ the vector in the direction of b but with the length of e, in other words b̂ = (||e||/||b||) b.

Lemma 4.4. The rank two perturbation be^t + eb^t has the following eigenpairs for its nonzero eigenvalues:

(e^tb + ||b|| ||e||, e + b̂)  and  (e^tb − ||b|| ||e||, e − b̂).

The first one is the Perron–Frobenius eigenpair of be^t + eb^t and also of D = κ(B).
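The eigenpairs in Lemma 4.4 can be confirmed numerically. A sketch for a sample diagonal b = (1, 1, 4, 4) (numpy; the choice of b is ours):

```python
import numpy as np

b = np.array([1.0, 1.0, 4.0, 4.0])
e = np.ones(4)
bhat = b * (np.linalg.norm(e) / np.linalg.norm(b))   # b rescaled to length ||e||
M = np.outer(b, e) + np.outer(e, b)                  # the rank two perturbation

lam_plus = e @ b + np.linalg.norm(b) * np.linalg.norm(e)
lam_minus = e @ b - np.linalg.norm(b) * np.linalg.norm(e)
assert np.allclose(M @ (e + bhat), lam_plus * (e + bhat))
assert np.allclose(M @ (e - bhat), lam_minus * (e - bhat))
```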

Corollary 4.5. Because e and b are blocked vectors, these two eigenvectors have to be blocked vectors; in particular, the Perron–Frobenius eigenvector has to be blocked.

In [7] Tarazaga showed that N(B) = N(D) ⊕ span{x, e}, where x solves the linear system Dx = e. Note that for a nonspherical distance matrix (like multibalanced matrices with k ≥ 2) x ∈ M, as noted in [6]. For our class of multibalanced distance matrices, x is easy to compute.

Lemma 4.6. If D is a multibalanced Euclidean distance matrix, then a multiple of the projection of b onto M solves the linear system Dx = e.

Proof. Since the projection of b onto M is given by x̄ = b − (e^tb/e^te)e, we only need to show that Dx̄ is a multiple of the vector e, as we do in the following computation. First of all,

Dx̄ = (be^t + eb^t − 2B)x̄ = be^tx̄ + eb^tx̄,

where the last equality holds because x̄ is a linear combination of e and b, both in the null space of B. Now e^tx̄ = 0, and therefore

Dx̄ = (b^tx̄)e = (b^tb − (e^tb)²/n) e.

Note that the coefficient of e is greater than or equal to zero because of the Cauchy–Schwarz inequality, and it is zero only when b is a positive multiple of e (since it is the diagonal of a positive semidefinite matrix). This case corresponds to only one sphere, as introduced in [4].
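Lemma 4.6 can be checked on a small multibalanced configuration (two origin-centered point pairs on spheres of radius 1 and 2; numpy, variable names ours):

```python
import numpy as np

X = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
B = X @ X.T
b = np.diag(B)
e = np.ones(4)
D = np.outer(b, e) + np.outer(e, b) - 2 * B    # D = kappa(B)

proj = b - (e @ b / 4) * e                     # projection of b onto M
y = D @ proj
assert np.allclose(y, y[0])                    # D*proj is a multiple of e
x = proj / y[0]
assert np.allclose(D @ x, 1.0)                 # the scaled projection solves Dx = e
```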

Let us denote by b̄ the projection of b onto M. Now it is possible to get a basis for the subspace of N(B) spanned by the e_i, i = 1,…, k (remember that e and b belong to that subspace, as does b̄), that includes e and b̄. To begin with, e ∈ N(B), b̄ ∈ N(B) and e ⊥ b̄. A Gram–Schmidt procedure can complete an orthogonal basis for that subspace. Note that since all the vectors used are blocked, the orthogonal basis is formed by blocked vectors. We will call a basis like this an MB-basis. The previous argument allows us to establish the following result.

Lemma 4.7. There is an orthogonal basis for the span of the e_i, i = 1,…, k, that includes multiples of e and b̄. Moreover, all the vectors in the basis are blocked.

Because of the mentioned relation between null spaces of B and D given in [7] we have the following result.

Corollary 4.8. The vectors in an MB-basis different from e and b̄ are null vectors of D. Moreover, they span the null space of D.

Now we are ready to establish our main result in this section.

Theorem 4.9. The following statements are equivalent:

1. X is a multibalanced configuration of points.

2. B = XX^t has the vectors e_i, i = 1,…, k, in its null space.

3. (a) κ(B) = D has a blocked Perron–Frobenius eigenvector v_1;

(b) span{v_1, e} is an invariant subspace of D;

(c) D has k − 2 blocked eigenvectors in its null space.

Proof. Lemma 4.1 shows that 1) and 2) are equivalent. On the other hand, Lemma 4.4 and Corollary 4.8 show that 2) implies 3). We will prove now that 3) implies 2).

If span{v_1, e} is invariant, then there is another blocked eigenvector v_2 such that e ∈ span{v_1, v_2}. But using the definition of τ, with B = τ(D) and b the diagonal of B, we have τ(D)e = 0 and hence

De = nb + (e^tb)e,

which implies that b ∈ span{v_1, v_2}. Now the projection of b onto M, in other words b̄ = b − (e^tb/n)e, also belongs to span{v_1, v_2}. But e and b̄ are independent (and orthogonal), and, as we mentioned, τ(D)e = τ(D)b = 0, so we have two blocked vectors in the null space of B. We also have another k − 2 independent blocked null vectors in the null space of B coming from the null space of D. Now it is clear that the blocked vectors e_i, i = 1,…, k, must be in the null space of B, which finishes the proof of the theorem.

Acknowledgments. The authors want to thank the referees for the comments and suggestions that improved the final version of this paper.

Received: 04/IV/06. Accepted: 16/IV/07.

#660/06.

  • [1] A.Y. Alfakih, On the nullspace, the rangespace and the characteristic polynomial of Euclidean distance matrices. Linear Algebra and its Applications, 416 (2006), 348-354.
  • [2] A.Y. Alfakih and H. Wolkowicz, Two theorems on Euclidean distance matrices and Gale transform. Linear Algebra and its Applications, 340 (2002), 149-154.
  • [3] J.C. Gower, Properties of Euclidean and non-Euclidean distance matrices. Linear Algebra and its Applications, 67 (1985), 81-97.
  • [4] T.L. Hayden and P. Tarazaga, Distance matrices and regular figures. Linear Algebra and its Applications, 195 (1993), 9-16.
  • [5] C.R. Johnson and P. Tarazaga, Connections between the real positive semidefinite and distance matrix completion problems. Linear Algebra and its Applications, 223-224 (1995), 375-391.
  • [6] P. Tarazaga, T.L. Hayden and J. Wells, Circum-Euclidean distance matrices and faces. Linear Algebra and its Applications, 232 (1996), 77-96.
  • [7] P. Tarazaga, Faces of the cone of Euclidean distance matrices: characterizations, structure and induced geometry. Linear Algebra and its Applications, 408 (2005), 1-13.

Publication Dates

  • Publication in this collection
    14 Nov 2007
  • Date of issue
    2007

Sociedade Brasileira de Matemática Aplicada e Computacional - SBMAC