On-line version ISSN 1807-0302

Comput. Appl. Math. vol.28 no.1 São Carlos  2009

http://dx.doi.org/10.1590/S0101-82052009000100004

Unitary invariant and residual independent matrix distributions

Arjun K. Gupta¹; Daya K. Nagar², *; Astrid M. Vélez-Carvajal²

¹Department of Mathematics and Statistics, Bowling Green State University, Bowling Green, Ohio 43403-0221, USA. E-mail: gupta@bgnet.bgsu.edu
²Departamento de Matemáticas, Universidad de Antioquia, Calle 67, No. 53-108, Medellín, Colombia. E-mail: dayaknagar@yahoo.com

ABSTRACT

Define Z13 = A^{1/2} Y (A^{1/2})^H (A and Y independent) and Z15 = B^{1/2} Y (B^{1/2})^H (B and Y independent), where Y, A and B follow inverted complex Wishart, complex beta type I and complex beta type II distributions, respectively. In this article, several properties, including expected values of scalar and matrix valued functions of Z13 and Z15, are derived.
Mathematical subject classification: Primary: 62E15; Secondary: 62H99.

Key words: beta distribution, inverted complex Wishart, complex random matrix, Gauss hypergeometric function, residual independent, unitary invariant, zonal polynomial.

1 Introduction

This paper deals with complex random quadratic forms involving complex Wishart, inverted complex Wishart, complex beta type I and complex beta type II matrices. To define these distributions we need some notation from complex matrix algebra. Let A = (aij) be an m × m matrix of complex numbers. Then, A^H denotes the conjugate transpose of A;

tr(A) = a11+···+ amm;
etr(A) = exp(tr(A));
det(A) = determinant of A;

A = A^H > 0 means that A is Hermitian positive definite; 0 < A = A^H < Im means that A and Im - A are Hermitian positive definite, and A^{1/2} denotes the square root of A = A^H > 0.
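For readers who want to experiment numerically, the notation above translates directly into numpy. The sketch below (helper names are ours, not the paper's) computes etr(A) and the Hermitian square root A^{1/2} via an eigendecomposition.

```python
import numpy as np

def etr(A):
    """etr(A) = exp(tr(A))."""
    return np.exp(np.trace(A))

def herm_sqrt(A):
    """Square root A^{1/2} of A = A^H > 0 via the spectral decomposition;
    the returned matrix S is Hermitian and satisfies S S^H = S^2 = A."""
    w, V = np.linalg.eigh(A)  # eigh is valid because A is Hermitian
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

# A 2 x 2 Hermitian positive definite example
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
S = herm_sqrt(A)
print(np.allclose(S @ S.conj().T, A))  # True: S S^H recovers A
```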

Now, we define complex Wishart, inverted complex Wishart, complex matrix variate beta type I, complex matrix variate beta type II distributions.

Definition 1.1. The m × m random Hermitian positive definite matrix X is said to have a complex Wishart distribution with parameters m, ν (> m) and Σ = Σ^H > 0, written as X ~ Wm(ν, Σ), if its p.d.f. is given by

{CΓm(ν) det(Σ)^ν}^{-1} det(X)^{ν-m} etr(-Σ^{-1} X), X = X^H > 0,

where CΓm(a) is the complex multivariate gamma function defined by

CΓm(a) = π^{m(m-1)/2} ∏_{i=1}^{m} Γ(a - i + 1), Re(a) > m - 1.

Definition 1.2. The m × m random Hermitian positive definite matrix Y is said to be distributed as inverted complex Wishart with parameters m, µ (> m) and Ψ = Ψ^H > 0, denoted by Y ~ IWm(µ, Ψ), if its p.d.f. is given by

{CΓm(µ)}^{-1} det(Ψ)^µ det(Y)^{-(µ+m)} etr(-Y^{-1} Ψ), Y = Y^H > 0.

Note that if Y ~ IWm(µ, Ψ), then Y^{-1} ~ Wm(µ, Ψ^{-1}). The inverted complex Wishart distribution was first derived by Tan [17] as the posterior distribution of Σ in a complex multivariate regression model. Later Shaman [16] studied some of its properties and applied it to spectral estimation. For m = 1, the inverted complex Wishart density reduces to an inverted gamma density given by

{Γ(µ)}^{-1} ψ^µ y^{-(µ+1)} exp(-ψ y^{-1}), y > 0, µ > 0, ψ > 0.

This distribution is designated by y ~ IG (µ, ψ).
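The relation Y^{-1} ~ Wm(µ, Ψ^{-1}) gives a direct way to simulate the inverted complex Wishart, and for m = 1 the density above is the usual inverted gamma. The Monte Carlo sketch below (a numerical illustration of ours, not part of the paper) checks this reduction against scipy's invgamma.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def complex_wishart(m, nu, rng, size):
    """Draws X = G G^H ~ Wm(nu, I_m), with G an m x nu matrix whose entries
    are standard complex normal (real and imaginary parts N(0, 1/2))."""
    G = (rng.standard_normal((size, m, nu))
         + 1j * rng.standard_normal((size, m, nu))) / np.sqrt(2.0)
    return G @ np.conj(np.swapaxes(G, -1, -2))

# For m = 1, X ~ Gamma(nu, 1), so Y = 1/X should follow IG(nu, 1),
# i.e. scipy's invgamma(nu)
nu = 5
X = complex_wishart(1, nu, rng, size=20000)[:, 0, 0].real
Y = 1.0 / X
pval = stats.kstest(Y, stats.invgamma(nu).cdf).pvalue
print(pval)
```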

Definition 1.3. The m × m random Hermitian positive definite matrix X is said to have a complex matrix variate beta type I distribution, denoted as X ~ CB^I_m(a, b), if its p.d.f. is given by

{CBm(a, b)}^{-1} det(X)^{a-m} det(Im - X)^{b-m}, 0 < X = X^H < Im,

where a > m - 1, b > m - 1 and CBm(a, b) is the complex multivariate beta function defined by

CBm(a, b) = CΓm(a) CΓm(b)/CΓm(a + b), Re(a) > m - 1, Re(b) > m - 1.

Definition 1.4. The m × m random Hermitian positive definite matrix Y is said to have a complex matrix variate beta type II distribution, denoted as Y ~ CB^II_m(a, b), if its p.d.f. is given by

{CBm(a, b)}^{-1} det(Y)^{a-m} det(Im + Y)^{-(a+b)}, Y = Y^H > 0,

where a > m - 1, and b > m - 1.

The complex matrix variate beta distributions arise in various problems in multivariate statistical analysis. Several test statistics in complex multivariate analysis of variance and covariance are functions of complex beta matrices. These distributions can be derived using complex Wishart matrices (Tan [18]). The relationship between beta type I and type II matrices can be stated as follows. If X ~ CB^I_m(a, b), then (Im - X)^{-1/2} X (Im - X)^{-1/2} ~ CB^II_m(a, b). Further, if Y ~ CB^II_m(a, b), then Y^{-1} ~ CB^II_m(b, a) and (Im + Y)^{-1/2} Y (Im + Y)^{-1/2} ~ CB^I_m(a, b).
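In the scalar case (m = 1) these relationships reduce to familiar facts about beta variables: if x ~ B^I(a, b), then x/(1 - x) ~ B^II(a, b), and its reciprocal is B^II(b, a). The quick simulation below (ours, for illustration) checks both against scipy's betaprime, which is the scalar beta type II.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b = 3.0, 4.0

x = rng.beta(a, b, size=20000)   # scalar beta type I
y = x / (1.0 - x)                # scalar analogue of (I-X)^{-1/2} X (I-X)^{-1/2}
z = 1.0 / y                      # scalar analogue of Y^{-1}, should be B^II(b, a)

p1 = stats.kstest(y, stats.betaprime(a, b).cdf).pvalue
p2 = stats.kstest(z, stats.betaprime(b, a).cdf).pvalue
print(p1, p2)
```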

It is well known in the statistical literature that the complex matrix variate beta type I and type II, complex Wishart (Σ = Im), and complex inverted Wishart (Ψ = Im) distributions are unitary invariant and residually independent (Khatri [9], Tan [18], Nagar, Bedoya and Arias [14], Bedoya, Nagar and Gupta [1], Goodman [4], Shaman [16]) and therefore belong to the class of unitary invariant and residual independent matrix (UNIARIM) distributions, denoted by Cm, defined as follows (Khatri, Khattree and Gupta [12]).

Definition 1.5. The m × m random Hermitian positive definite matrix X is said to have a UNIARIM distribution if

(i) for any m × m unitary matrix U, the distributions of X and UXU^H are identical, and

(ii) for any lower triangular factorization X = TT^H, T = (Tij), the diagonal blocks Tii (mi × mi), i = 1, ..., k, are independent, for any partition {m1, m2, ..., mk} of m.

When X has a UNIARIM distribution we will write X ∈ Cm. Next, we state several properties of UNIARIM distributions.

Theorem 1.6. Let X ∈ Cm. Partition X into blocks X11, X12, X21, X22 with X11 of order q × q. Then X11 and X22·1 = X22 - X21 X11^{-1} X12 are independent, X11 ∈ Cq, and X22·1 ∈ Cm-q.

Theorem 1.7. Let X ∈ Cm and Y ∈ Cm be independent. Then, for any square root T of Y, the distribution of Z = TXT^H belongs to Cm. Further, if T1 and T2 are two different square roots of Y, then T1 X T1^H and T2 X T2^H have identical distributions.

From the above theorem it follows that if U = (U1, U2), Ui (m × mi), i = 1, 2, m1 + m2 = m, is a random unitary matrix independent of Z ∈ Cm, then U1^H Z U1 ∈ Cm1 and U2^H Z U2 ∈ Cm2 are independent. Further, for c ∈ C^m, c ≠ 0, (i) c^H Z c/c^H c has the same distribution as z11, where Z = (zij), and (ii) c^H c/c^H Z^{-1} c has the same distribution as 1/z^11, where Z^{-1} = (z^ij). Furthermore, if E(Z), E(Z^{-1}), and E(Z^α), α an integer, exist, then E(Z) = a Im, E(Z^{-1}) = b Im and E(Z^α) = cα Im, where a = E(x11 y11), b = E(x^11) E(y^11), and the constant cα depends on moments of order less than or equal to α of X and Y.
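The invariance just stated — c^H Z c/c^H c has the same distribution as z11, whatever c ≠ 0 — can be checked by simulation for a unitarily invariant example. For a complex Wishart Z with Σ = Im, z11 is a sum of ν squared moduli of standard complex normals and so follows a G(ν) law. The sketch below (illustrative, with our own helpers) compares two choices of c against that law.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
m, nu, n = 3, 6, 20000

# complex Wishart draws Z = G G^H ~ Wm(nu, I_m)
G = (rng.standard_normal((n, m, nu))
     + 1j * rng.standard_normal((n, m, nu))) / np.sqrt(2.0)
Z = G @ np.conj(np.swapaxes(G, -1, -2))

def qform(Z, c):
    """c^H Z c / c^H c, vectorized over the sample axis (real for Hermitian Z)."""
    num = np.einsum('i,nij,j->n', np.conj(c), Z, c)
    return num.real / np.vdot(c, c).real

c1 = np.array([1.0, 0.0, 0.0])            # picks out z11
c2 = np.array([1.0 + 1.0j, -2.0, 0.5j])   # an arbitrary direction

p1 = stats.kstest(qform(Z, c1), stats.gamma(nu).cdf).pvalue
p2 = stats.kstest(qform(Z, c2), stats.gamma(nu).cdf).pvalue
print(p1, p2)  # both samples should be consistent with the Gamma(nu) law
```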

Let Z[i] = (zαβ), 1 ≤ α, β ≤ i, i ≤ m, be a submatrix of Z = (zαβ), 1 ≤ α, β ≤ m, and let Y = TT^H and X = RR^H be lower triangular factorizations. Then the ratios

det(Z[i])/det(Z[i-1]), i = 1, ..., m,

where det(Z[0]) = 1, are independent, and E[det(Z)^α] = ∏_{i=1}^{m} E{[det(Z[i])/det(Z[i-1])]^α}, provided the expectations involved exist.

In Section 2, we give several UNIARIM distributions by using Theorem 1.7. Section 3 gives a number of results on random quadratic forms A1/2 Y(A1/2)H (A and Y are independent) and B1/2Y(B1/2)H (B and Y are independent), where Y ~ IWm(µ, Im), A ~ (a, b) and B ~ (c, d) by exploiting the fact that they too belong to the class of UNIARIM distributions and using properties that are available for UNIARIM distributions. Finally, in Appendix, we give certain known results on complex matrix variate beta type I and type II, complex Wishart and inverted complex Wishart distributions, confluent hypergeometric functions and zonal polynomials of Hermitian matrix argument.

2 Generating UNIARIM Distributions

In the previous section it is stated that if X m and Y m are independent, then for any square root T of Y, the distribution of Z = T X TH belongs to m. In this section we exploit this property to generate a number of UNIARIM distributions.

Let Ai ~ CB^I_m(ai, bi), Bi ~ CB^II_m(ci, di), i = 1, 2, A ~ CB^I_m(a, b), B ~ CB^II_m(c, d), and define

Z1 = A1^{1/2} A2 (A1^{1/2})^H (A1 and A2 are independent),

Z2 = B1^{1/2} B2 (B1^{1/2})^H (B1 and B2 are independent),

Z3 = A^{1/2} B (A^{1/2})^H (A and B are independent),

and

Z4 = B^{1/2} A (B^{1/2})^H (A and B are independent).

Then, from Theorem 1.7, it follows that Zi ∈ Cm, i = 1, 2, 3, 4. From Cui, Gupta and Nagar [3], the p.d.f. of Z1 is available in closed form in terms of the Gauss hypergeometric function 2F1 of Hermitian matrix argument (James [8]). The densities of Z2 and Z3 have been derived by Gupta, Nagar and Vélez-Carvajal [7]. Note that the distribution of Z4 is the same as that of Z3.

Next, let Xi ~ Wm(νi, Im), i = 1, 2, Yi ~ IWm(µi, Im), i = 1, 2, X ~ Wm(ν, Im), and Y ~ IWm(µ, Im), where X1 and X2 are independent, Y1 and Y2 are independent, and X and Y are independent. Let

Z5 = X1^{1/2} X2 (X1^{1/2})^H,

Z6 = Y1^{1/2} Y2 (Y1^{1/2})^H,

Z7 = X^{1/2} Y (X^{1/2})^H,

and

Z8 = Y^{1/2} X (Y^{1/2})^H.

Then, the p.d.f. of Z5, derived by Gupta and Nagar [6], involves Herz's type II Bessel function of Hermitian matrix argument. Since Y1^{-1} ~ Wm(µ1, Im) and Y2^{-1} ~ Wm(µ2, Im), the p.d.f. of Z6 can be obtained from the p.d.f. of Z5. Note that Z7 and Z8 follow complex matrix variate beta type II distributions with parameters µ and ν, which follows from Tan [18] and Cui, Gupta and Nagar [3].
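In the scalar case (m = 1) this kind of Wishart/inverted Wishart quadratic form reduces to a ratio of independent gamma variables, so the beta type II law can be checked directly. The sketch below is an illustration of ours; scipy's betaprime is the scalar beta type II.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
nu, mu = 6.0, 5.0
n = 20000

x = rng.gamma(nu, size=n)          # scalar complex Wishart (m = 1): Gamma(nu)
y = 1.0 / rng.gamma(mu, size=n)    # scalar inverted complex Wishart: IG(mu, 1)
z = x * y                          # scalar analogue of the quadratic forms above

# a ratio of independent gammas Gamma(nu)/Gamma(mu) is betaprime(nu, mu)
pval = stats.kstest(z, stats.betaprime(nu, mu).cdf).pvalue
print(pval)
```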

Further, define the following complex random matrices which again belong to the class Cm:

Z9 = A^{1/2} X (A^{1/2})^H,

Z10 = X^{1/2} A (X^{1/2})^H,

Z11 = B^{1/2} X (B^{1/2})^H,

Z12 = X^{1/2} B (X^{1/2})^H,

Z13 = A^{1/2} Y (A^{1/2})^H,

Z14 = Y^{1/2} A (Y^{1/2})^H,

Z15 = B^{1/2} Y (B^{1/2})^H,

and

Z16 = Y^{1/2} B (Y^{1/2})^H.

The random matrices Z9 and Z10 have the same density, which involves the confluent hypergeometric function 1F1 of Hermitian matrix argument (Khatri, Khattree and Gupta [12]). The random matrices Z11 and Z12 likewise have a common density. Similarly, the random matrices Z13 and Z14 have the same density, derived in Theorem 3.1, and the random matrices Z15 and Z16 have the same density, obtained in Theorem 3.3.

3 Properties of UNIARIM Distributions

In the previous sections we have discussed a number of properties of UNIARIM distributions and generated them by noting that if X ∈ Cm and Y ∈ Cm are independent, then for any square root T of Y, the distribution of Z = TXT^H belongs to Cm.

Several properties, distributional results, expected values, etc. of random matrices defined in Section 2 are available in the literature. The distributional results and properties of Z1, Z2, Z3 and Z4 were studied by Gupta, Nagar and Vélez-Carvajal [7]. Results related to Z5 and Z6 were derived by Gupta and Nagar [6] and properties of Z9, Z10, Z11 and Z12 were obtained by Khatri, Khattree and Gupta [12].

In this section we derive distributions of Z13 and Z15 and study their properties. First we obtain the densities of Z13 and Z15.

Theorem 3.1. Let A and Y be independent, A ~ CB^I_m(a, b) and Y ~ IWm(µ, Im). Then, the density of Z13 = A^{1/2} Y (A^{1/2})^H is derived as

{CΓm(µ) CBm(a, b)}^{-1} CBm(a + µ, b) det(Z13)^{-(µ+m)} 1F1(a + µ; a + b + µ; -Z13^{-1}), Z13 = Z13^H > 0,

where 1F1 is the confluent hypergeometric function of Hermitian matrix argument.

Proof. The joint density of A and Y is given by

{CBm(a, b) CΓm(µ)}^{-1} det(A)^{a-m} det(Im - A)^{b-m} det(Y)^{-(µ+m)} etr(-Y^{-1}),

where 0 < A = A^H < Im and Y = Y^H > 0. Making the transformation Z13 = A^{1/2} Y (A^{1/2})^H with the Jacobian J(A, Y → A, Z13) = det(A)^{-m} and integrating out A, we obtain

{CBm(a, b) CΓm(µ)}^{-1} det(Z13)^{-(µ+m)} ∫_{0 < A < Im} det(A)^{a+µ-m} det(Im - A)^{b-m} etr(-Z13^{-1} A) dA,

where Z13 = Z13^H > 0. Now, integration using (A.1) yields the result.

The distribution of Z13 is designated by Z13 ~ CGB^I_m(µ, a, b).

Corollary 3.2. If x and y are mutually independent, x ~ B^I(a, b) and y ~ IG(µ, 1), then xy ~ CGB^I_1(µ, a, b). Further, the p.d.f. of z13 = xy is given by

{B(a, b) Γ(µ)}^{-1} B(a + µ, b) z13^{-(µ+1)} 1F1(a + µ; a + b + µ; -1/z13), z13 > 0,

where 1F1 is the confluent hypergeometric function of scalar argument (Luke [13]).
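A quick numerical sanity check of the scalar density of z13 = xy, as reconstructed here — f(z) = {B(a, b) Γ(µ)}^{-1} B(a + µ, b) z^{-(µ+1)} 1F1(a + µ; a + b + µ; -1/z) — is that it integrates to one and reproduces E(xy) = E(x)E(y) = [a/(a + b)]/(µ - 1). The sketch below, with arbitrary parameter values, uses scipy's hyp1f1.

```python
import numpy as np
from scipy import special, integrate

a, b, mu = 2.0, 3.0, 4.0  # arbitrary test values with mu > 1

# scalar density of z = x*y, x ~ B^I(a, b), y ~ IG(mu, 1) (Corollary 3.2)
const = special.beta(a + mu, b) / (special.beta(a, b) * special.gamma(mu))

def f(z):
    return const * z ** (-(mu + 1.0)) * special.hyp1f1(a + mu, a + b + mu, -1.0 / z)

total, _ = integrate.quad(f, 0.0, np.inf)
mean, _ = integrate.quad(lambda z: z * f(z), 0.0, np.inf)
print(total, mean)  # total should be close to 1, mean to a/(a+b)/(mu-1)
```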

Theorem 3.3. Let B and Y be independent, B ~ CB^II_m(c, d) and Y ~ IWm(µ, Im). Then, the density of Z15 = B^{1/2} Y (B^{1/2})^H is derived as

{CBm(c, d) CΓm(µ)}^{-1} CΓm(c + µ) det(Z15)^{-(µ+m)} Ψ(c + µ; µ - d + m; Z15^{-1}), Z15 = Z15^H > 0,

where Ψ is the confluent hypergeometric function of the second kind of Hermitian matrix argument.

Proof. The joint density of B and Y is given by

{CBm(c, d) CΓm(µ)}^{-1} det(B)^{c-m} det(Im + B)^{-(c+d)} det(Y)^{-(µ+m)} etr(-Y^{-1}),

where B = B^H > 0 and Y = Y^H > 0. Transforming Z15 = B^{1/2} Y (B^{1/2})^H with the Jacobian J(B, Y → B, Z15) = det(B)^{-m} and integrating out B, we obtain

{CBm(c, d) CΓm(µ)}^{-1} det(Z15)^{-(µ+m)} ∫_{B > 0} det(B)^{c+µ-m} det(Im + B)^{-(c+d)} etr(-Z15^{-1} B) dB,

where Z15 = Z15^H > 0. The desired result is obtained by using (A.2).

The above distribution will be denoted by Z15 ~ CGB^II_m(µ, c, d).

Corollary 3.4. If x and y are independent, x ~ B^II(c, d) and y ~ IG(µ, 1), then xy ~ CGB^II_1(µ, c, d). Further, the p.d.f. of z15 = xy is given by

{B(c, d) Γ(µ)}^{-1} Γ(c + µ) z15^{-(µ+1)} Ψ(c + µ; µ - d + 1; 1/z15), z15 > 0,

where Ψ is the confluent hypergeometric function of the second kind of scalar argument (Luke [13]).
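The scalar type II density can be validated the same way: with f(z) = {B(c, d) Γ(µ)}^{-1} Γ(c + µ) z^{-(µ+1)} Ψ(c + µ; µ - d + 1; 1/z), as reconstructed here, numerical integration should return total mass one and mean E(x)E(y) = [c/(d - 1)]/(µ - 1). A sketch with arbitrary parameters; scipy's hyperu implements the scalar Ψ.

```python
import numpy as np
from scipy import special, integrate

c, d, mu = 2.0, 3.0, 4.0  # arbitrary test values with d > 1, mu > 1

# scalar density of z = x*y, x ~ B^II(c, d), y ~ IG(mu, 1) (Corollary 3.4)
const = special.gamma(c + mu) / (special.beta(c, d) * special.gamma(mu))

def f(z):
    return const * z ** (-(mu + 1.0)) * special.hyperu(c + mu, mu - d + 1.0, 1.0 / z)

total, _ = integrate.quad(f, 0.0, np.inf)
mean, _ = integrate.quad(lambda z: z * f(z), 0.0, np.inf)
print(total, mean)  # total should be close to 1, mean to (c/(d-1))/(mu-1)
```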

Our next two results are of importance in deriving marginal distributions of certain submatrices of Z13 and Z15.

Theorem 3.5. Let A and Y be independent, A ~ CB^I_{m1+m2}(a, b) and Y ~ IW_{m1+m2}(µ, I_{m1+m2}), partitioned into blocks with A11 and Y11 of order m1 × m1. Then,

A11^{1/2} Y11 (A11^{1/2})^H ~ CGB^I_{m1}(µ - m2, a, b) and A22·1^{1/2} Y22·1 (A22·1^{1/2})^H ~ CGB^I_{m2}(µ, a - m1, b)

are independent. Further,

A22^{1/2} Y22 (A22^{1/2})^H ~ CGB^I_{m2}(µ - m1, a, b) and A11·2^{1/2} Y11·2 (A11·2^{1/2})^H ~ CGB^I_{m1}(µ, a - m2, b)

are independent, where A11·2, Y11·2, A22·1 and Y22·1 are the Schur complements of A22, Y22, A11 and Y11, respectively.

Proof. From Theorem A.8 and Theorem A.9, A11, Y11, A22·1 and Y22·1 are independent, A11 ~ CB^I_{m1}(a, b), Y11 ~ IW_{m1}(µ - m2, I_{m1}), A22·1 ~ CB^I_{m2}(a - m1, b) and Y22·1 ~ IW_{m2}(µ, I_{m2}). Now, application of Theorem 3.1 yields the desired result. The proof of the second part is similar.

Theorem 3.6. Let B and Y be independent, Y ~ IW_{m1+m2}(µ, I_{m1+m2}) and B ~ CB^II_{m1+m2}(c, d), partitioned into blocks with B11 and Y11 of order m1 × m1. Then,

B11^{1/2} Y11 (B11^{1/2})^H ~ CGB^II_{m1}(µ - m2, c, d - m2) and B22·1^{1/2} Y22·1 (B22·1^{1/2})^H ~ CGB^II_{m2}(µ, c - m1, d)

are independent. Further,

B22^{1/2} Y22 (B22^{1/2})^H ~ CGB^II_{m2}(µ - m1, c, d - m1) and B11·2^{1/2} Y11·2 (B11·2^{1/2})^H ~ CGB^II_{m1}(µ, c - m2, d)

are independent, where Y11·2, B11·2, Y22·1 and B22·1 are the Schur complements of Y22, B22, Y11 and B11, respectively.

Proof. From Theorem A.8 and Theorem A.10, Y11, B11, Y22·1 and B22·1 are independent, Y11 ~ IW_{m1}(µ - m2, I_{m1}), B11 ~ CB^II_{m1}(c, d - m2), Y22·1 ~ IW_{m2}(µ, I_{m2}) and B22·1 ~ CB^II_{m2}(c - m1, d). Now, application of Theorem 3.3 yields the desired result. The proof of the second part is similar.

3.1 Properties of Z13

In this section some properties of the random matrix Z13 are derived using the fact that Z13 belongs to the class of UNIARIM distributions.

(i) Partition Z13 into blocks Z1311, Z1312, Z1321, Z1322 with Z1311 of order m1 × m1, m1 + m2 = m. Then, using Theorem 1.6, Theorem 1.7 and Theorem 3.5, Z1311 and its Schur complement Z1322·1 are independent, Z1311 ~ CGB^I_{m1}(µ - m2, a, b) and Z1322·1 ~ CGB^I_{m2}(µ, a - m1, b). Further, Z1322 and its Schur complement Z1311·2 are independent, Z1322 ~ CGB^I_{m2}(µ - m1, a, b) and Z1311·2 ~ CGB^I_{m1}(µ, a - m2, b).

(ii) For a q × m complex non-random matrix C of rank q (≤ m),

(CC^H)^{-1/2} C Z13 C^H (CC^H)^{-1/2} ~ CGB^I_q(µ - m + q, a, b)

and

(CC^H)^{1/2} (C Z13^{-1} C^H)^{-1} (CC^H)^{1/2} ~ CGB^I_q(µ, a - m + q, b).

(iii) For c ∈ C^m, c ≠ 0,

c^H Z13 c/c^H c ~ CGB^I_1(µ - m + 1, a, b)

and

c^H c/c^H Z13^{-1} c ~ CGB^I_1(µ, a - m + 1, b).

Note that the distributions of c^H Z13 c/c^H c and c^H c/c^H Z13^{-1} c do not depend on c. Thus, if y (m × 1) is a complex random vector independent of Z13, and P(y ≠ 0) = 1, then it follows that

y^H Z13 y/y^H y ~ CGB^I_1(µ - m + 1, a, b)

and

y^H y/y^H Z13^{-1} y ~ CGB^I_1(µ, a - m + 1, b).

(iv) Let Z13 = (z13ij) and Z13^{-1} = (z13^ij). Then, z13ii ~ CGB^I_1(µ - m + 1, a, b), i = 1, ..., m, and 1/z13^ii ~ CGB^I_1(µ, a - m + 1, b), i = 1, ..., m.

(v) Let Z13[i] = (z13jk), 1 ≤ j, k ≤ i. Define

vi = det(Z13[i])/det(Z13[i-1]), i = 1, ..., m, det(Z13[0]) = 1.

Then, the random variables v1, ..., vm are mutually independent and, using Theorem A.11, Theorem A.7 and Corollary 3.2, vi ~ (µ - i + 1, a, b), i = 1, ..., m. Further, the distribution of det(Z13) is the same as that of ∏_{i=1}^{m} vi.

3.2 Moments of functions of Z13

In this section we derive several expected values of scalar and complex matrix valued functions of the complex random matrix Z13.

Using the representation Z13 = A^{1/2} Y (A^{1/2})^H and (A.10)-(A.17), one obtains expressions for the moments of Z13 and Z13^{-1}. Further, using (A.4), (A.5) and these expressions, the expected values of (tr Z13)^2 and tr(Z13^2) are evaluated in terms of complex generalized hypergeometric coefficients and zonal polynomials. Applying the results [n]_(2) = n(n + 1), [n]_(1,1) = n(n - 1), [-n + m]_(2) = (n - m)(n - m - 1), [-n + m]_(1,1) = (n - m)(n - m + 1), C_(2)(Im) = m(m + 1)/2 and C_(1,1)(Im) = m(m - 1)/2 in these expressions and simplifying, closed form results follow.

Similarly, using (A.6)-(A.8), the expected values of (tr Z13)^3 and tr(Z13^3) are obtained. Now, using the results [n]_(3) = n(n + 1)(n + 2), [n]_(2,1) = n(n + 1)(n - 1), [n]_(1,1,1) = n(n - 1)(n - 2), [-n + m]_(3) = -(n - m)(n - m - 1)(n - m - 2), [-n + m]_(2,1) = -(n - m)(n - m - 1)(n - m + 1), [-n + m]_(1,1,1) = -(n - m)(n - m + 1)(n - m + 2), C_(3)(Im) = m(m + 1)(m + 2)/6, C_(2,1)(Im) = 2m(m^2 - 1)/3 and C_(1,1,1)(Im) = m(m - 1)(m - 2)/6 in these expressions and simplifying, we obtain closed form results valid for µ > m + 1 and a > m + 2, respectively. The expected values of mixed product moments such as tr(Z13) tr(Z13^2) are obtained in the same way.

Since E(Z13^α) = cα Im, we have E[tr(Z13^α)] = cα m, so the coefficient of m in E[tr(Z13^α)] is cα. Therefore, evaluating the expectations E[tr(Z13^α)] for the required values of α using the technique described above, and computing the coefficients of m in the resulting expressions, one obtains the corresponding matrix moments E(Z13^α).

3.3 Properties of Z15

In this section we give several properties of Z15.

(i) Partition Z15 into blocks Z1511, Z1512, Z1521, Z1522 with Z1511 of order m1 × m1, m1 + m2 = m. Then, using Theorem 1.6, Theorem 1.7 and Theorem 3.6, Z1511 and Z1522·1 are independent, Z1511 ~ CGB^II_{m1}(µ - m2, c, d - m2) and Z1522·1 ~ CGB^II_{m2}(µ, c - m1, d). Further, Z1522 and Z1511·2 are independent, Z1522 ~ CGB^II_{m2}(µ - m1, c, d - m1) and Z1511·2 ~ CGB^II_{m1}(µ, c - m2, d).

(ii) For a q × m complex non-random matrix C of rank q (≤ m),

(CC^H)^{-1/2} C Z15 C^H (CC^H)^{-1/2} ~ CGB^II_q(µ - m + q, c, d - m + q)

and

(CC^H)^{1/2} (C Z15^{-1} C^H)^{-1} (CC^H)^{1/2} ~ CGB^II_q(µ, c - m + q, d).

(iii) If y (m × 1) is a non-random complex vector with y ≠ 0, or a complex random vector independent of Z15 with P(y ≠ 0) = 1, then it follows that

y^H Z15 y/y^H y ~ CGB^II_1(µ - m + 1, c, d - m + 1)

and

y^H y/y^H Z15^{-1} y ~ CGB^II_1(µ, c - m + 1, d).

(iv) Let Z15 = (z15ij) and Z15^{-1} = (z15^ij). Then, for i = 1, ..., m, z15ii ~ CGB^II_1(µ - m + 1, c, d - m + 1) and 1/z15^ii ~ CGB^II_1(µ, c - m + 1, d).

(v) Let Z15[i] = (z15jk), 1 ≤ j, k ≤ i. Define

vi = det(Z15[i])/det(Z15[i-1]), i = 1, ..., m, det(Z15[0]) = 1.

Then, the random variables v1, ..., vm are mutually independent and, using Theorem A.11, Theorem A.12 and Corollary 3.4, vi ~ (µ - i + 1, c, d - i + 1), i = 1, ..., m. Further, the distribution of det(Z15) is the same as that of ∏_{i=1}^{m} vi.

3.4 Moments of functions of Z15

This section deals with expected values of scalar and complex matrix valued functions of the complex random matrix Z15.

Using the representation Z15 = Y^{1/2} B (Y^{1/2})^H ~ CGB^II_m(µ, c, d), (A.10)-(A.13), (A.18), (A.19), and the technique of computing expected values given in Subsection 3.2, we obtain the corresponding moments of Z15 and Z15^{-1}.

Acknowledgements. The authors would like to thank the worthy referee for his comments and suggestions which improved the presentation of this article.

REFERENCES

[1] E. Bedoya, D.K. Nagar and A.K. Gupta, Moments of the complex matrix variate beta distribution. PanAmerican Mathematical Journal, 17(2) (2007), 21-32.

[2] Y. Chikuse, Partial differential equations for hypergeometric functions of complex matrices and their applications. Annals of the Institute of Statistical Mathematics, 28 (1976), 187-199.

[3] Xinping Cui, A.K. Gupta and D.K. Nagar, Wilks' factorization of the complex matrix variate Dirichlet distributions. Revista Matemática Complutense, 18(2) (2005), 315-328.

[4] N.R. Goodman, Statistical analysis based on a certain multivariate complex Gaussian distribution (an introduction). Annals of Mathematical Statistics, 34 (1963), 152-177.

[5] A.K. Gupta and D.K. Nagar, Matrix Variate Distributions. Chapman & Hall/CRC, Boca Raton (2000).

[6] A.K. Gupta and D.K. Nagar, Product of independent complex Wishart matrices. International Journal of Statistics and System, 2(1) (2007), 59-73.

[7] A.K. Gupta, D.K. Nagar and A.M. Vélez-Carvajal, Quadratic forms of independent complex beta matrices. International Journal of Applied Mathematics and Statistics, 13 (D08) (2008), 76-91.

[8] A.T. James, Distributions of matrix variate and latent roots derived from normal samples. Annals of Mathematical Statistics, 35 (1964), 475-501.

[9] C.G. Khatri, Classical statistical analysis based on a certain multivariate complex Gaussian distribution. Annals of Mathematical Statistics, 36 (1965), 98-114.

[10] C.G. Khatri, On certain distribution problems based on positive definite quadratic functions in normal vectors. Annals of Mathematical Statistics, 37(2) (1966), 468-479.

[11] C.G. Khatri, On the moments of traces of two matrices in three situations for complex multivariate normal populations. Sankhyā, A32 (1970), 65-80.

[12] C.G. Khatri, R. Khattree and R.D. Gupta, On a class of orthogonal invariant and residual independent matrix distributions. Sankhyā, B53(1) (1991), 1-10.

[13] Y.L. Luke, The Special Functions and Their Approximations. Volume I, Academic Press, New York (1969).

[14] D.K. Nagar, E. Bedoya and E.L. Arias, Non-central complex matrix-variate beta distribution. Advances and Applications in Statistics, 4(3) (2004), 287-302.

[15] D.K. Nagar and A.K. Gupta, Expectations of functions of complex Wishart matrix. Applied Mathematics and Computation (submitted for publication).

[16] P. Shaman, The inverted complex Wishart distribution and its application to spectral estimates. Journal of Multivariate Analysis, 10 (1980), 51-59.

[17] W.Y. Tan, On the complex analogue of Bayesian estimation of a multivariate regression model. Annals of the Institute of Statistical Mathematics, 25 (1973), 135-152.

[18] W.Y. Tan, Some distribution theory associated with complex Gaussian distribution. Tamkang Journal, 7 (1968), 263-302.

* Supported by the Comité para el Desarrollo de la Investigación, Universidad de Antioquia research grant no. IN515CE.

Appendix

The confluent hypergeometric function 1F1 of Hermitian matrix argument has the integral representation (James [8] and Chikuse [2]),

1F1(α; γ; X) = {CBm(α, γ - α)}^{-1} ∫_{0 < R < Im} etr(XR) det(R)^{α-m} det(Im - R)^{γ-α-m} dR,  (A.1)

valid for all Hermitian X, Re(α) > m - 1 and Re(γ - α) > m - 1. The confluent hypergeometric function Ψ of an m × m Hermitian matrix R is defined by (Chikuse [2]),

Ψ(α; γ; R) = {CΓm(α)}^{-1} ∫_{S = S^H > 0} etr(-RS) det(S)^{α-m} det(Im + S)^{γ-α-m} dS,  (A.2)

where Re(R) > 0 and Re(α) > m - 1.

Let Cκ(X) be a zonal polynomial of an m × m Hermitian matrix X corresponding to the partition κ = (k1, ..., km), k1 + ··· + km = k, k1 ≥ ··· ≥ km ≥ 0. Then, for small values of k, explicit formulas for Cκ(X) are available as (James [8], Khatri [11]),

C(1)(X) = tr X,  (A.3)

C(2)(X) = [(tr X)^2 + tr X^2]/2,  (A.4)

C(1,1)(X) = [(tr X)^2 - tr X^2]/2,  (A.5)

C(3)(X) = [(tr X)^3 + 3 tr X tr X^2 + 2 tr X^3]/6,  (A.6)

C(2,1)(X) = 2[(tr X)^3 - tr X^3]/3,  (A.7)

C(1,1,1)(X) = [(tr X)^3 - 3 tr X tr X^2 + 2 tr X^3]/6.  (A.8)

Also, substituting X = Im in (A.3)-(A.8), it is easy to see that C(1)(Im) = m, C(2)(Im) = m(m + 1)/2, C(1,1)(Im) = m(m - 1)/2, C(3)(Im) = m(m + 1)(m + 2)/6, C(2,1)(Im) = 2m(m^2 - 1)/3 and C(1,1,1)(Im) = m(m - 1)(m - 2)/6. The complex generalized hypergeometric coefficient [a]κ is defined as

[a]κ = ∏_{i=1}^{m} (a - i + 1)_{k_i},  (A.9)

with (a)r = a(a + 1)···(a + r - 1) = (a)r-1(a + r - 1) for r = 1, 2, ..., and (a)0 = 1. Further, substituting appropriately in (A.9), it is easy to observe that [a]_(2) = a(a + 1), [a]_(1,1) = a(a - 1), [a]_(3) = a(a + 1)(a + 2), [a]_(2,1) = a(a + 1)(a - 1) and [a]_(1,1,1) = a(a - 1)(a - 2).
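These identities are easy to verify numerically. The check below (ours) evaluates the complex zonal polynomials Cκ via the power sums p_r = tr(X^r), using (A.3)-(A.8) as reconstructed here, for a random Hermitian X, and confirms the normalization (tr X)^k = Σκ Cκ(X) together with the quoted values at X = Im.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4

def zonal_small(X):
    """Complex zonal polynomials C_kappa for |kappa| <= 3, written in terms of
    the power sums p_r = tr(X^r) (formulas (A.3)-(A.8) as reconstructed here)."""
    p1 = np.trace(X).real
    p2 = np.trace(X @ X).real
    p3 = np.trace(X @ X @ X).real
    return {
        (1,): p1,
        (2,): (p1**2 + p2) / 2,
        (1, 1): (p1**2 - p2) / 2,
        (3,): (p1**3 + 3 * p1 * p2 + 2 * p3) / 6,
        (2, 1): 2 * (p1**3 - p3) / 3,
        (1, 1, 1): (p1**3 - 3 * p1 * p2 + 2 * p3) / 6,
    }

G = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X = (G + G.conj().T) / 2          # a random Hermitian matrix
C = zonal_small(X)
p1 = np.trace(X).real

# normalization: (tr X)^k equals the sum of C_kappa over partitions of k
ok2 = np.isclose(C[(2,)] + C[(1, 1)], p1**2)
ok3 = np.isclose(C[(3,)] + C[(2, 1)] + C[(1, 1, 1)], p1**3)

CI = zonal_small(np.eye(m))
# values at I_m quoted in the text
okI = np.isclose(CI[(2,)], m * (m + 1) / 2) and np.isclose(CI[(2, 1)], 2 * m * (m**2 - 1) / 3)
print(ok2, ok3, okI)  # True True True
```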

If Y ~ IWm(µ, Ψ), A ~ CB^I_m(a, b) and B ~ CB^II_m(c, d), then the moment expressions (A.10)-(A.19) used in Section 3 are available from Bedoya, Nagar and Gupta [1], James [8], Khatri [10], Nagar and Gupta [15] and Shaman [16].

Theorem A.7. Let A ~ Wm(ν, Im) and A = TT^H, where T = (tij) is a complex lower triangular matrix with tii > 0 and tij = t1ij + i t2ij, j < i. Then, tij, 1 ≤ j ≤ i ≤ m, are independently distributed, tii^2 ~ G(ν - i + 1), 1 ≤ i ≤ m, and tij, 1 ≤ j < i ≤ m, are standard complex normal.

Proof. See Goodman [4].
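Theorem A.7 is also easy to exercise by simulation: generate complex Wishart matrices with Σ = Im, take the Cholesky factor, and compare t11^2 and t22^2 with the gamma laws G(ν) and G(ν - 1). The sketch below is our own illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
m, nu, n = 3, 6, 20000

# complex Wishart sample via A = G G^H, entries of G standard complex normal
G = (rng.standard_normal((n, m, nu))
     + 1j * rng.standard_normal((n, m, nu))) / np.sqrt(2.0)
A = G @ np.conj(np.swapaxes(G, -1, -2))

T = np.linalg.cholesky(A)        # lower triangular, real positive diagonal
t11_sq = T[:, 0, 0].real ** 2
t22_sq = T[:, 1, 1].real ** 2

# Theorem A.7: t_ii^2 ~ G(nu - i + 1)
p1 = stats.kstest(t11_sq, stats.gamma(nu).cdf).pvalue
p2 = stats.kstest(t22_sq, stats.gamma(nu - 1).cdf).pvalue
print(p1, p2)
```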

The univariate gamma distribution denoted by G(a) is defined by the p.d.f.

{Γ(a)}^{-1} exp(-x) x^{a-1}, x > 0, a > 0.

Theorem A.8. Let Y ~ IWm(µ, Ψ) and partition Y and Ψ into blocks Yij and Ψij, i, j = 1, 2, where Y11 and Ψ11 are q × q matrices. Then, Y11 and its Schur complement Y22·1 are independent, Y11 ~ IWq(µ - m + q, Ψ11) and Y22·1 ~ IWm-q(µ, Ψ22·1). Further, Y22 and its Schur complement Y11·2 are independent, Y22 ~ IWm-q(µ - q, Ψ22) and Y11·2 ~ IWq(µ, Ψ11·2).

Proof. See Tan [17].

Theorem A.9. Let A ~ CB^I_m(a, b) and partition A into blocks Aij, i, j = 1, 2, where A11 is q × q. Then,

(i) A11 and its Schur complement A22·1 are independent, A11 ~ CB^I_q(a, b) and A22·1 ~ CB^I_{m-q}(a - q, b), and

(ii) A22 and its Schur complement A11·2 are independent, A22 ~ CB^I_{m-q}(a, b) and A11·2 ~ CB^I_q(a - m + q, b).

Proof. See Tan [18].

Theorem A.10. Let B ~ CB^II_m(c, d) and partition B into blocks Bij, i, j = 1, 2, where B11 is q × q. Then, (i) B11 and its Schur complement B22·1 are independent, B11 ~ CB^II_q(c, d - m + q) and B22·1 ~ CB^II_{m-q}(c - q, d), and (ii) B22 and its Schur complement B11·2 are independent, B22 ~ CB^II_{m-q}(c, d - q) and B11·2 ~ CB^II_q(c - m + q, d).

Proof. See Tan [18].

Theorem A.11. Let A ~ CB^I_m(a, b) and A = TT^H, where T = (tij) is a complex lower triangular matrix with positive diagonal elements. Then, t11^2, ..., tmm^2 are independently distributed, tii^2 ~ BI(a - i + 1, b), i = 1, ..., m.

The univariate beta type I distribution, denoted by BI(a, b), is defined by the p.d.f.

{B(a, b)}^{-1} x^{a-1} (1 - x)^{b-1}, 0 < x < 1,

where B(a, b) = Γ(a)Γ(b)/Γ(a + b).

Theorem A.12. Let B ~ CB^II_m(c, d) and B = TT^H, where T = (tij) is a complex lower triangular matrix with positive diagonal elements. Then, t11^2, ..., tmm^2 are independently distributed, tii^2 ~ BII(c - i + 1, d - m + i), i = 1, ..., m.

The univariate beta type II distribution, denoted by BII(a, b), is defined by the p.d.f.

{B(a, b)}^{-1} x^{a-1} (1 + x)^{-(a+b)}, x > 0.