Advances in Linear Algebra & Matrix Theory
Vol.06 No.01(2016), Article ID:64247,10 pages
10.4236/alamt.2016.61001

Dykstra’s Algorithm for the Optimal Approximate Symmetric Positive Semidefinite Solution of a Class of Matrix Equations*

Chunmei Li#, Xuefeng Duan, Zhuling Jiang

College of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin, China

Copyright © 2016 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 4 December 2015; accepted 4 March 2016; published 7 March 2016

ABSTRACT

Dykstra’s alternating projection algorithm was proposed to treat the problem of finding the projection of a given point onto the intersection of some closed convex sets. In this paper, we first apply Dykstra’s alternating projection algorithm to compute the optimal approximate symmetric positive semidefinite solution of the matrix equations AXB = E, CXD = F. If we choose the initial iterative matrix X0 = 0, the least Frobenius norm symmetric positive semidefinite solution of these matrix equations is obtained. A numerical example shows that the new algorithm is feasible and effective.

Keywords:

Matrix Equation, Dykstra’s Alternating Projection Algorithm, Optimal Approximate Solution, Least Norm Solution

1. Introduction

Throughout this paper, we use $R^{m \times n}$ and $SR_{\geq 0}^{n \times n}$ to stand for the set of $m \times n$ real matrices and $n \times n$ symmetric positive semidefinite matrices, respectively. We denote the transpose and Moore-Penrose generalized inverse of the matrix A by $A^T$ and $A^+$, respectively. The symbol $I$ stands for the identity matrix. For $A, B \in R^{m \times n}$, $\langle A, B \rangle = \mathrm{tr}(B^T A)$ denotes the inner product of the matrices A and B. The induced norm $\|A\| = \sqrt{\langle A, A \rangle}$ is the so-called Frobenius norm; equipped with it, $R^{m \times n}$ is a real Hilbert space. In order to develop this paper, we need to give the following definition.

Definition 1.1. [1] Let M be a closed convex subset in a real Hilbert space H and u be a point in H. Then the point in M nearest to u is called the projection of u onto M and is denoted by $P_M(u)$; that is to say, $P_M(u)$ is the solution of the following minimization problem

$\min_{m \in M} \| u - m \|,$ (1.1)

i.e.

$\| u - P_M(u) \| = \min_{m \in M} \| u - m \|.$ (1.2)

In this paper, we consider the matrix equations

$AXB = E, \qquad CXD = F$ (1.3)

and their matrix nearness problem.

Problem I. Given matrices $A \in R^{p \times n}$, $B \in R^{n \times q}$, $E \in R^{p \times q}$, $C \in R^{s \times n}$, $D \in R^{n \times t}$, $F \in R^{s \times t}$ and $\tilde{X} \in R^{n \times n}$, find $\hat{X} \in S$ such that

$\| \hat{X} - \tilde{X} \| = \min_{X \in S} \| X - \tilde{X} \|,$ (1.4)

where

$S = \{ X \in SR_{\geq 0}^{n \times n} \mid AXB = E,\ CXD = F \}.$

Obviously, S is the symmetric positive semidefinite solution set of the matrix equations (1.3). It is easy to verify that S is a closed convex set, so when S is nonempty the solution of Problem I exists and is unique. In this paper, the unique solution $\hat{X}$ is called the optimal approximate symmetric positive semidefinite solution of Equation (1.3). In particular, if $\tilde{X} = 0$, then the solution of Problem I is just the least Frobenius norm symmetric positive semidefinite solution of the matrix equations (1.3).

This kind of matrix nearness problem occurs frequently in experimental design; see for instance [2] [3]. Here $\tilde{X}$ may be obtained from experiments, but it may not satisfy Equation (1.3). The nearest matrix $\hat{X}$ satisfies Equation (1.3) and is nearest to the given matrix $\tilde{X}$. Equation (1.3) and its matrix nearness problem I have been extensively studied for more than 40 years. Navarra-Odell-Young [4] and Wang [5] gave necessary and sufficient conditions for Equation (1.3) to have a solution and presented an expression for the general solution. By the projection theorem and matrix decompositions, Liao-Lei-Yuan [6] [7] gave some analytical expressions for the optimal approximate least squares symmetric solution of Equation (1.3). Sheng-Chen [8] presented an efficient iterative method to compute the optimal approximate solution of the matrix equations (1.3). Ding-Liu-Ding [9] considered the unique solution of Equation (1.3) and used a gradient-based iterative algorithm to compute it. Peng-Hu-Zhang [10] and Chen-Peng-Zhou [11] proposed some iterative methods to compute the symmetric solutions and the optimal approximate symmetric solution of Equation (1.3). The (least squares) solution and the optimal approximate (least squares) solution of Equation (1.3), constrained to be bisymmetric, reflexive, generalized reflexive, or generalized centro-symmetric, were studied in [11]-[17]. Nevertheless, to the best of our knowledge, the optimal approximate solution of Equation (1.3) constrained to be symmetric positive semidefinite (i.e., Problem I) has not been solved. The difficulty of Problem I lies in how to characterize the convex set S. In this paper, we first write the set S as the intersection of three simpler closed convex sets and then adopt alternating projections to overcome the difficulty.

Dykstra’s alternating projection algorithm was proposed by Dykstra [18] to treat the problem of finding the projection of a given point onto the intersection of some closed convex sets. It is a modification of the classical alternating projection algorithm, first proposed by Von Neumann [19] and studied later by Cheney and Goldstein [20]. For an application of Dykstra’s alternating projection algorithm to computing the nearest diagonally dominant matrix, see [21]. For a complete survey of Dykstra’s alternating projection algorithm and its applications, see Deutsch [22].

In this paper, we propose a new algorithm to compute the optimal approximate symmetric positive semidefinite solution of Equation (1.3). We state Problem I as the minimization of a convex quadratic function over the intersection of three closed convex sets in the vector space $R^{n \times n}$. From this point of view, Problem I can be solved by Dykstra’s alternating projection algorithm. If we choose the initial iterative matrix $\tilde{X} = 0$, the least Frobenius norm symmetric positive semidefinite solution of the matrix equations (1.3) is obtained. In the end, we use a numerical example to show that the new algorithm is feasible and effective.

2. Dykstra’s Algorithm for Solving Problem I

In this section, we apply Dykstra’s alternating projection algorithm to compute the optimal approximate symmetric positive semidefinite solution of Equation (1.3). We first introduce Dykstra’s alternating projection algorithm and its convergence theorem.

In order to find the projection of a given point onto the intersection of a finite number of closed convex sets $C_1, C_2, \ldots, C_p$, Dykstra [18] proposed the following alternating projection algorithm. This algorithm can also be seen in [1] [23]-[25].

Dykstra’s Algorithm 2.1

1) Given the initial value $x_p^0 = x$;

2) Set $I_i^0 = 0$, $i = 1, 2, \ldots, p$;

3) For $k = 1, 2, \ldots$

$x_0^k = x_p^{k-1}$

For $i = 1, 2, \ldots, p$

$x_i^k = P_{C_i}(x_{i-1}^k - I_i^{k-1})$,

$I_i^k = x_i^k - (x_{i-1}^k - I_i^{k-1})$

End

End
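For concreteness, the scheme above can be transcribed almost line for line into code. The following is a minimal Python/NumPy sketch, not part of the original paper: the function name dykstra, the fixed iteration cap n_iter, and the representation of each set $C_i$ by its projection map are our own choices.

```python
import numpy as np

def dykstra(x, projections, n_iter=100):
    """Dykstra's alternating projection algorithm (Algorithm 2.1).

    x           -- the point to be projected onto the intersection
    projections -- one projection map P_{C_i} per closed convex set C_i
    Returns an approximation of the projection of x onto the intersection.
    """
    # I_i^0 = 0 for every set (step 2)
    increments = [np.zeros_like(x) for _ in projections]
    for _ in range(n_iter):                     # outer loop over k (step 3)
        for i, proj in enumerate(projections):  # inner loop over i
            y = x - increments[i]               # x_{i-1}^k - I_i^{k-1}
            x = proj(y)                         # x_i^k = P_{C_i}(y)
            increments[i] = x - y               # I_i^k = x_i^k - y
    return x
```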

The utility of Dykstra’s algorithm 2.1 is based on the following theorem (see [23] - [25] and the references therein).

Lemma 2.1. ([23], Theorem 2) Let $C_1, C_2, \ldots, C_p$ be closed convex subsets of a real Hilbert space H such that $C = \bigcap_{i=1}^{p} C_i \neq \emptyset$. For any $i = 1, 2, \ldots, p$ and any $x \in H$, the sequences $\{x_i^k\}$ generated by Dykstra’s algorithm 2.1 converge to $P_C(x)$, that is, $\lim_{k \to \infty} \| x_i^k - P_C(x) \| = 0$.

Now we begin to use Dykstra’s algorithm 2.1 to solve Problem I. Firstly, we define three sets

$S_1 = \{ X \in R^{n \times n} \mid AXB = E \}, \quad S_2 = \{ X \in R^{n \times n} \mid CXD = F \}, \quad S_3 = SR_{\geq 0}^{n \times n}.$

It is easy to know that $S = S_1 \cap S_2 \cap S_3$, and if the set S is nonempty, then

$P_S(\tilde{X}) = P_{S_1 \cap S_2 \cap S_3}(\tilde{X}).$ (2.1)

On the other hand, it is easy to verify that $S_1$, $S_2$ and $S_3$ are closed convex subsets of the real Hilbert space $R^{n \times n}$.

After defining the sets $S_1$, $S_2$ and $S_3$, Problem I can be rewritten as finding $\hat{X} \in S_1 \cap S_2 \cap S_3$ such that

$\| \hat{X} - \tilde{X} \| = \min_{X \in S_1 \cap S_2 \cap S_3} \| X - \tilde{X} \|.$ (2.2)

By Definition 1.1 and noting the equalities (2.2) and (1.2), it is easy to find that

$\hat{X} = P_{S_1 \cap S_2 \cap S_3}(\tilde{X}).$ (2.3)

Therefore, Problem I can be converted equivalently into finding the projection $P_{S_1 \cap S_2 \cap S_3}(\tilde{X})$. We will use Dykstra’s algorithm 2.1 to compute this projection; by (2.3), this yields the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3).

We can see that the key to realizing Dykstra’s algorithm 2.1 is computing the projections $P_{S_1}(Z)$, $P_{S_2}(Z)$ and $P_{S_3}(Z)$ of a matrix Z onto $S_1$, $S_2$ and $S_3$, respectively. These subproblems are solved in the following theorems.

Theorem 2.1. Suppose that the set $S_1$ is nonempty. For a given matrix Z, we have

$P_{S_1}(Z) = Z - A^+ (A Z B - E) B^+.$

Proof. By Definition 1.1, we know that the projection $P_{S_1}(Z)$ is the solution of the following minimization problem

$\min_{X \in S_1} \| Z - X \|.$ (2.4)

Now we begin to solve the minimization problem (2.4). We first characterize the solution set $S_1$ and then find the $X \in S_1$ at which the minimum in (2.4) is attained. Noting that the set $S_1$ is a nonempty closed convex set, the minimization problem (2.4) has a unique solution. Suppose that the singular value decompositions of the matrices A and B are given by

$A = U \begin{pmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{pmatrix} V^T, \qquad B = P \begin{pmatrix} \Sigma_2 & 0 \\ 0 & 0 \end{pmatrix} Q^T,$ (2.5)

where $U \in R^{p \times p}$ and $V \in R^{n \times n}$ are orthogonal matrices, $\Sigma_1 = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$ with $\sigma_i > 0$ and $r = \mathrm{rank}(A)$, and $P \in R^{n \times n}$ and $Q \in R^{q \times q}$ are orthogonal matrices, $\Sigma_2 = \mathrm{diag}(\delta_1, \delta_2, \ldots, \delta_l)$ with $\delta_i > 0$ and $l = \mathrm{rank}(B)$. According to the definition of the Moore-Penrose generalized inverse of a matrix, we have

$A^+ = V \begin{pmatrix} \Sigma_1^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^T$ (2.6)

and

$B^+ = Q \begin{pmatrix} \Sigma_2^{-1} & 0 \\ 0 & 0 \end{pmatrix} P^T.$ (2.7)
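As a numerical aside (ours, not the paper’s), the expression (2.6) for $A^+$ can be checked against a library Moore-Penrose inverse; a small Python/NumPy sketch with arbitrarily chosen sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 6))  # rank 3: the zero blocks in (2.5) are nontrivial

U, s, Vt = np.linalg.svd(A)          # A = U [diag(s) 0; 0 0] V^T as in (2.5)

# Invert only the nonzero singular values, as in (2.6)
cutoff = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
Sigma_inv = np.zeros((A.shape[1], A.shape[0]))
Sigma_inv[:s.size, :s.size] = np.diag(s_inv)

A_pinv = Vt.T @ Sigma_inv @ U.T      # this is (2.6)
print(np.allclose(A_pinv, np.linalg.pinv(A)))  # expected: True
```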

Substituting (2.5) into the matrix equation $AXB = E$, we obtain

$U \begin{pmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{pmatrix} V^T X P \begin{pmatrix} \Sigma_2 & 0 \\ 0 & 0 \end{pmatrix} Q^T = E,$

which implies

$\begin{pmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{pmatrix} V^T X P \begin{pmatrix} \Sigma_2 & 0 \\ 0 & 0 \end{pmatrix} = U^T E Q.$

Let

$V^T X P = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}, \qquad U^T E Q = \begin{pmatrix} E_{11} & E_{12} \\ E_{21} & E_{22} \end{pmatrix},$

partitioned conformally with the block forms in (2.5). Then the matrix equation $AXB = E$ can be equivalently written as

$\begin{pmatrix} \Sigma_1 X_{11} \Sigma_2 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} E_{11} & E_{12} \\ E_{21} & E_{22} \end{pmatrix},$

which implies that

$\Sigma_1 X_{11} \Sigma_2 = E_{11},$ (2.8)

$E_{12} = 0,$ (2.9)

$E_{21} = 0,$ (2.10)

$E_{22} = 0.$ (2.11)

By (2.8) we have

$X_{11} = \Sigma_1^{-1} E_{11} \Sigma_2^{-1}.$

Noting that the set $S_1$ is nonempty, by (2.5) it is easy to verify that (2.9), (2.10) and (2.11) hold identically. Hence the general solution of the matrix equation $AXB = E$ can be expressed as

$X = V \begin{pmatrix} \Sigma_1^{-1} E_{11} \Sigma_2^{-1} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} P^T,$ (2.12)

where $X_{12}$, $X_{21}$ and $X_{22}$ are arbitrary, which implies that the elements of the set $S_1$ can be stated as (2.12).

Consequently, partitioning $V^T Z P = \begin{pmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{pmatrix}$ in the same way and using the orthogonal invariance of the Frobenius norm, we have

$\| Z - X \|^2 = \| Z_{11} - \Sigma_1^{-1} E_{11} \Sigma_2^{-1} \|^2 + \| Z_{12} - X_{12} \|^2 + \| Z_{21} - X_{21} \|^2 + \| Z_{22} - X_{22} \|^2.$ (2.13)

By (2.13) we know that $\| Z - X \|$ attains its minimum over $S_1$ if and only if $X_{12} = Z_{12}$, $X_{21} = Z_{21}$ and $X_{22} = Z_{22}$.

Therefore, the solution of the minimization problem (2.4) is

$P_{S_1}(Z) = V \begin{pmatrix} \Sigma_1^{-1} E_{11} \Sigma_2^{-1} & Z_{12} \\ Z_{21} & Z_{22} \end{pmatrix} P^T.$ (2.14)

Combining (2.14) with (2.5)-(2.7), we have

$P_{S_1}(Z) = Z - A^+ (A Z B - E) B^+.$

The theorem is proved.
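The closed form just derived is straightforward to implement and test. Below is a short Python/NumPy sketch (ours, not the paper’s; the name proj_S1 is hypothetical): we build a consistent right-hand side E from a known solution, so the hypothesis that $S_1$ is nonempty holds, and then check that the projected matrix indeed solves AXB = E.

```python
import numpy as np

def proj_S1(Z, A, B, E):
    """Projection of Z onto S1 = {X : A X B = E} (Theorem 2.1).
    Assumes the equation A X B = E is consistent (S1 nonempty)."""
    return Z - np.linalg.pinv(A) @ (A @ Z @ B - E) @ np.linalg.pinv(B)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 4))
X_star = rng.standard_normal((5, 5))
E = A @ X_star @ B                   # consistent by construction

Z = rng.standard_normal((5, 5))
P = proj_S1(Z, A, B, E)
print(np.allclose(A @ P @ B, E))     # expected: True, so P lies in S1
```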

Theorem 2.2. Suppose that the set $S_2$ is nonempty. For a given matrix Z, we have

$P_{S_2}(Z) = Z - C^+ (C Z D - F) D^+.$

Proof. The proof is similar to that of Theorem 2.1 and is omitted here.

For any $Z \in R^{n \times n}$, it is easy to verify that $H = \frac{1}{2}(Z + Z^T)$ is a symmetric matrix. Then the spectral decomposition of the matrix H is

$H = Q \Lambda Q^T,$

where $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$ and $Q^T Q = Q Q^T = I$. Then by Theorem 2.1 of Higham [26] and Definition 1.1, we have

Theorem 2.3. For a given matrix Z, we have

$P_{S_3}(Z) = Q \Lambda_+ Q^T,$

where

$\Lambda_+ = \mathrm{diag}(\max\{\lambda_1, 0\}, \max\{\lambda_2, 0\}, \ldots, \max\{\lambda_n, 0\}).$
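Theorem 2.3 translates directly into a few lines of code: symmetrize, diagonalize, and clip the negative eigenvalues to zero. A minimal Python/NumPy sketch (ours; the name proj_S3 is hypothetical):

```python
import numpy as np

def proj_S3(Z):
    """Nearest symmetric positive semidefinite matrix to Z in the
    Frobenius norm (Theorem 2.3, following Higham [26])."""
    H = (Z + Z.T) / 2.0              # symmetric part of Z
    lam, Q = np.linalg.eigh(H)       # spectral decomposition H = Q diag(lam) Q^T
    return Q @ np.diag(np.maximum(lam, 0.0)) @ Q.T
```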

By Dykstra’s algorithm 2.1, and using the expressions for the projections $P_{S_1}(Z)$, $P_{S_2}(Z)$ and $P_{S_3}(Z)$ in Theorems 2.1, 2.2 and 2.3, we get a new algorithm to compute the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3), which can be stated as follows.

Algorithm 2.2

1) Set the initial value $X_3^0 = \tilde{X}$;

2) Set $I_i^0 = 0$, $i = 1, 2, 3$;

3) For $k = 1, 2, \ldots$

$X_0^k = X_3^{k-1}$

For $i = 1, 2, 3$

$X_i^k = P_{S_i}(X_{i-1}^k - I_i^{k-1})$,

$I_i^k = X_i^k - (X_{i-1}^k - I_i^{k-1})$

End

End
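Putting the three projections into the Dykstra scheme gives a complete implementation of Algorithm 2.2. The sketch below is ours: the function name, the iteration cap, and the residual-based stopping test (in the spirit of the criterion used in Section 3) are assumptions, not part of the paper.

```python
import numpy as np

def algorithm_2_2(A, B, E, C, D, F, X_tilde, tol=1e-10, max_iter=10000):
    """Dykstra's method for the optimal approximate symmetric positive
    semidefinite solution of AXB = E, CXD = F (Algorithm 2.2)."""
    A_p, B_p = np.linalg.pinv(A), np.linalg.pinv(B)
    C_p, D_p = np.linalg.pinv(C), np.linalg.pinv(D)

    def proj_S1(Z):                          # Theorem 2.1
        return Z - A_p @ (A @ Z @ B - E) @ B_p

    def proj_S2(Z):                          # Theorem 2.2
        return Z - C_p @ (C @ Z @ D - F) @ D_p

    def proj_S3(Z):                          # Theorem 2.3
        H = (Z + Z.T) / 2.0
        lam, Q = np.linalg.eigh(H)
        return Q @ np.diag(np.maximum(lam, 0.0)) @ Q.T

    projections = (proj_S1, proj_S2, proj_S3)
    X = X_tilde.astype(float).copy()                 # X_3^0 = X_tilde (step 1)
    incr = [np.zeros_like(X) for _ in projections]   # I_i^0 = 0 (step 2)
    for _ in range(max_iter):                        # step 3
        for i, proj in enumerate(projections):
            Y = X - incr[i]                          # X_{i-1}^k - I_i^{k-1}
            X = proj(Y)                              # X_i^k
            incr[i] = X - Y                          # I_i^k
        # residual of the two matrix equations, used as a stopping test
        res = np.linalg.norm(A @ X @ B - E) + np.linalg.norm(C @ X @ D - F)
        if res < tol:
            break
    return X
```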

By Lemma 2.1 and (2.1), and noting that $S_1$, $S_2$ and $S_3$ are closed convex sets, we get the convergence theorem for Algorithm 2.2.

Theorem 2.4. If the set S is nonempty, then the matrix sequences $\{X_1^k\}$, $\{X_2^k\}$ and $\{X_3^k\}$ generated by Algorithm 2.2 converge to the projection $P_S(\tilde{X})$, that is,

$\lim_{k \to \infty} \| X_i^k - P_S(\tilde{X}) \| = 0, \quad i = 1, 2, 3.$

Combining Theorem 2.4 and the equalities (2.3) and (2.2), we have

Theorem 2.5. If the set S is nonempty, then the matrix sequences $\{X_1^k\}$, $\{X_2^k\}$ and $\{X_3^k\}$ generated by Algorithm 2.2 converge to the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3). Moreover, if the initial matrix $\tilde{X} = 0$, then the matrix sequences $\{X_1^k\}$, $\{X_2^k\}$ and $\{X_3^k\}$ converge to the least Frobenius norm symmetric positive semidefinite solution of the matrix equations (1.3).

3. Numerical Experiments

In this section, we give a numerical example to illustrate that the new algorithm is feasible and effective for computing the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3). All programs are written in MATLAB 7.8. We denote the residual error at the k-th iteration by

$\varepsilon_k = \| A X_3^k B - E \| + \| C X_3^k D - F \|$

and use the practical stopping criterion that $\varepsilon_k$ falls below a prescribed tolerance.

Example 3.1. Consider the matrix equations (1.3), where the coefficient matrices are built from blocks of ones and zeros; here $1_{m \times n}$ and $0_{m \times n}$ stand for the $m \times n$ matrices of ones and zeros, respectively. It is easy to verify that the matrix equations (1.3) have a symmetric positive semidefinite solution, that is to say, the set S is nonempty. Therefore we can use Algorithm 2.2 to compute the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3).

1) Let $\tilde{X}$ be the first given matrix. After 41 iterations of Algorithm 2.2, we get the optimal approximate symmetric positive semidefinite solution $\hat{X}$ together with its residual error $\varepsilon_{41}$. By concrete computations, we also obtain the distance $\| \hat{X} - \tilde{X} \|$ from $\tilde{X}$ to the solution set S.

2) Let $\tilde{X}$ be the second given matrix. After 88 iterations of Algorithm 2.2, we get the optimal approximate symmetric positive semidefinite solution $\hat{X}$ together with its residual error $\varepsilon_{88}$. By concrete computations, we also obtain the distance $\| \hat{X} - \tilde{X} \|$ from $\tilde{X}$ to the solution set S.

3) Let $\tilde{X} = 0$. After 116 iterations of Algorithm 2.2, we get the optimal approximate solution $\hat{X}$, which is also the least Frobenius norm symmetric positive semidefinite solution of the matrix equations (1.3), together with its residual error $\varepsilon_{116}$. By concrete computations, we also obtain the distance $\| \hat{X} \|$ from $\tilde{X} = 0$ to the solution set S.

Example 3.1 shows that Algorithm 2.2 is feasible and effective for computing the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3).
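Since the matrices of Example 3.1 are not reproduced above, the following driver (ours) illustrates the same kind of experiment on a small hypothetical instance using the algorithm_2_2 sketch from Section 2: E and F are built from a known symmetric positive semidefinite matrix, so the set S is guaranteed to be nonempty, and $\tilde{X} = 0$ targets the least Frobenius norm solution as in part 3).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6
A, B = rng.standard_normal((3, n)), rng.standard_normal((n, 4))
C, D = rng.standard_normal((2, n)), rng.standard_normal((n, 5))

M = rng.standard_normal((n, n))
X_true = M @ M.T                       # symmetric positive semidefinite
E, F = A @ X_true @ B, C @ X_true @ D  # consistent right-hand sides

# X_tilde = 0: by Theorem 2.5 the iterates approach the least
# Frobenius norm symmetric positive semidefinite solution.
X_hat = algorithm_2_2(A, B, E, C, D, F, X_tilde=np.zeros((n, n)))

residual = np.linalg.norm(A @ X_hat @ B - E) + np.linalg.norm(C @ X_hat @ D - F)
print(residual)                        # should be small after convergence
print(np.min(np.linalg.eigvalsh((X_hat + X_hat.T) / 2)) >= -1e-8)  # PSD check
```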

4. Conclusion

In this paper, we state Problem I as the minimization of a convex quadratic function over the intersection of three closed convex sets in the Hilbert space $R^{n \times n}$, so that Dykstra’s alternating projection algorithm can be used to compute the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3). If we choose the initial matrix $\tilde{X} = 0$, the least Frobenius norm symmetric positive semidefinite solution of the matrix equations (1.3) can be obtained. A numerical example shows that the new algorithm is feasible and effective for computing the optimal approximate symmetric positive semidefinite solution of the matrix equations (1.3).

Cite this paper

Chunmei Li, Xuefeng Duan, Zhuling Jiang (2016) Dykstra’s Algorithm for the Optimal Approximate Symmetric Positive Semidefinite Solution of a Class of Matrix Equations. Advances in Linear Algebra & Matrix Theory, 06, 1-10. doi: 10.4236/alamt.2016.61001

References

  1. Bauschke, H.H. and Borwein, J.M. (1994) Dykstra’s Alternating Projection Algorithm for Two Sets. Journal of Approximation Theory, 79, 418-443.
     http://dx.doi.org/10.1006/jath.1994.1136

  2. Dai, H. and Lancaster, P. (1996) Linear Matrix Equations from an Inverse Problem of Vibration Theory. Linear Algebra and Its Applications, 246, 31-47.
     http://dx.doi.org/10.1016/0024-3795(94)00311-4

  3. Meng, T. (2001) Experimental Design and Decision Support. In: Leondes, C., Ed., Expert Systems: The Technology of Knowledge Management and Decision Making for the 21st Century, Volume 1, Academic Press, San Diego.

  4. Navarra, A., Odell, P.L. and Young, D.M. (2001) A Representation of the General Common Solution to the Matrix Equations A1XB1=C1, A2XB2=C2 with Applications. Computers & Mathematics with Applications, 41, 929-935.
     http://dx.doi.org/10.1016/S0898-1221(00)00330-8

  5. Wang, Q.W. (2004) A System of Matrix Equations over Arbitrary Regular Rings with Identity. Linear Algebra and Its Applications, 384, 44-53.
     http://dx.doi.org/10.1016/j.laa.2003.12.039

  6. Liao, A.P., Lei, Y. and Yuan, S.F. (2006) The Matrix Nearness Problem for Symmetric Matrices Associated with the Matrix Equations [ATXA,BTXB]=[C,D]. Linear Algebra and Its Applications, 418, 939-954.
     http://dx.doi.org/10.1016/j.laa.2006.03.032

  7. Liao, A.P. and Lei, Y. (2005) Least-Squares Solution with the Minimum-Norm for the Matrix Equations [AXB,GXH]=[C,D]. Computers & Mathematics with Applications, 50, 539-549.
     http://dx.doi.org/10.1016/j.camwa.2005.02.011

  8. Sheng, X.P. and Chen, G.L. (2007) A Finite Iterative Method for Solving a Pair of Linear Matrix Equations (AXB,CXD)=(E,F). Applied Mathematics and Computation, 189, 1350-1358.
     http://dx.doi.org/10.1016/j.amc.2006.12.026

  9. Ding, J., Liu, Y.J. and Ding, F. (2010) Iterative Solutions to Matrix Equations of the Form AiXBi=Fi. Computers & Mathematics with Applications, 59, 3500-3507.
     http://dx.doi.org/10.1016/j.camwa.2010.03.041

  10. Peng, Y.X., Hu, X.Y. and Zhang, L. (2006) An Iterative Method for Symmetric Solutions and Optimal Approximation Solution of the System of Matrix Equations A1XB1=C1, A2XB2=C2. Applied Mathematics and Computation, 183, 1127-1137.
     http://dx.doi.org/10.1016/j.amc.2006.05.124

  11. Chen, Y.B., Peng, Z.Y. and Zhou, T.J. (2010) LSQR Iterative Common Symmetric Solutions to Matrix Equations AXB = E and CXD = F. Applied Mathematics and Computation, 217, 230-236.
     http://dx.doi.org/10.1016/j.amc.2010.05.053

  12. Cai, J. and Chen, G.L. (2009) An Iterative Algorithm for the Least Squares Bisymmetric Solutions of the Matrix Equations A1XB1=C1, A2XB2=C2. Mathematical and Computer Modelling, 50, 1237-1244.
     http://dx.doi.org/10.1016/j.mcm.2009.07.004

  13. Dehghan, M. and Hajarian, M. (2008) An Iterative Algorithm for Solving a Pair of Matrix Equations AXB = E, CXD = F over Generalized Centro-Symmetric Matrices. Computers & Mathematics with Applications, 56, 3246-3260.
     http://dx.doi.org/10.1016/j.camwa.2008.07.031

  14. Li, F.L., Hu, X.Y. and Zhang, L. (2008) The Generalized Reflexive Solution for a Class of Matrix Equations (AX=B, XC=D). Acta Mathematica Scientia, 28, 185-193.
     http://dx.doi.org/10.1016/S0252-9602(08)60019-3

  15. Peng, Z.H., Hu, X.Y. and Zhang, L. (2006) An Efficient Algorithm for the Least-Squares Reflexive Solution of the Matrix Equation A1XB1=C1, A2XB2=C2. Applied Mathematics and Computation, 181, 988-999.
     http://dx.doi.org/10.1016/j.amc.2006.01.071

  16. Yuan, S.F., Wang, Q.W. and Xiong, Z.P. (2013) Linear Parameterized Inverse Eigenvalue Problem of Bisymmetric Matrices. Linear Algebra and Its Applications, 439, 1990-2007.
     http://dx.doi.org/10.1016/j.laa.2013.05.026

  17. Yuan, S.F. and Liao, A.P. (2014) Least Squares Hermitian Solution of the Complex Matrix Equation AXB+CXD=E with the Least Norm. Journal of the Franklin Institute, 351, 4978-4997.
     http://dx.doi.org/10.1016/j.jfranklin.2014.08.003

  18. Dykstra, R.L. (1983) An Algorithm for Restricted Least-Squares Regression. Journal of the American Statistical Association, 78, 837-842.
     http://dx.doi.org/10.1080/01621459.1983.10477029

  19. Von Neumann, J. (1950) Functional Operators, Volume II. Princeton University Press, Princeton.

  20. Cheney, W. and Goldstein, A. (1959) Proximity Maps for Convex Sets. Proceedings of the American Mathematical Society, 10, 448-450.
     http://dx.doi.org/10.1090/S0002-9939-1959-0105008-8

  21. Mendoza, M., Raydan, M. and Tarazaga, P. (1998) Computing the Nearest Diagonally Dominant Matrix. Numerical Linear Algebra with Applications, 5, 461-474.
     http://dx.doi.org/10.1002/(SICI)1099-1506(199811/12)5:6<461::AID-NLA141>3.0.CO;2-V

  22. Deutsch, F. (2001) Best Approximation in Inner Product Spaces. Springer, New York.
     http://dx.doi.org/10.1007/978-1-4684-9298-9

  23. Boyle, J.P. and Dykstra, R.L. (1986) A Method for Finding Projections onto the Intersection of Convex Sets in Hilbert Spaces. Lecture Notes in Statistics, 37, 28-47.

  24. Escalante, R. and Raydan, M. (1996) Dykstra’s Algorithm for a Constrained Least-Squares Matrix Problem. Numerical Linear Algebra with Applications, 3, 459-471.
     http://dx.doi.org/10.1002/(SICI)1099-1506(199611/12)3:6<459::AID-NLA82>3.0.CO;2-S

  25. Morillas, P.M. (2005) Dykstra’s Algorithm with Strategies for Projecting onto Certain Polyhedral Cones. Applied Mathematics and Computation, 167, 635-649.
     http://dx.doi.org/10.1016/j.amc.2004.06.136

  26. Higham, N. (1988) Computing a Nearest Symmetric Positive Semidefinite Matrix. Linear Algebra and Its Applications, 103, 103-118.
     http://dx.doi.org/10.1016/0024-3795(88)90223-6

NOTES

*The work was supported by National Natural Science Foundation of China (No.11561015; 11261014; 11301107), Natural Science Foundation of Guangxi Province (No.2012GXNSFBA053006; 2013GXNSFBA019009).

#Corresponding author.