Advances in Pure Mathematics
Vol.07 No.01(2017), Article ID:73737,10 pages
10.4236/apm.2017.71002

Positive-Definite Sparse Precision Matrix Estimation

Lin Xia1, Xudong Huang1, Guanpeng Wang1, Tao Wu2

1School of Mathematics and Computer Science, Anhui Normal University, Wuhu, China

2School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 29, 2016; Accepted: January 19, 2017; Published: January 23, 2017

ABSTRACT

Positive-definiteness and sparsity are the most important properties of high-dimensional precision matrices. To better achieve these properties, this paper uses a lasso penalized D-trace loss under a positive-definiteness constraint to estimate high-dimensional precision matrices. This paper derives an efficient accelerated gradient method to solve the challenging optimization problem and establishes its convergence rate as $O\!\left(\frac{1}{k^2}\right)$. Numerical simulations illustrate that our method has a competitive advantage over other methods.

Keywords:

Positive-Definiteness, Sparsity, D-Trace Loss, Accelerated Gradient Method

1. Introduction

In the past twenty years, high-dimensional data has been one of the most popular directions in statistics. It not only has a wide range of applications in functional magnetic resonance imaging (fMRI), bioinformatics, Web mining, climate research, risk management, and social science, but is also a main direction of current scientific research. Both in theory and in practice, high-dimensional precision matrix estimation plays a very important role and has wide applications in many fields.

Thus, estimation of the high-dimensional precision matrix is increasingly becoming a crucial question in many fields. However, such estimation faces two difficulties: 1) sparsity of the estimator; 2) the positive-definiteness constraint. Huang et al. [1] considered using the Cholesky decomposition to estimate the precision matrix. Although the regularized Cholesky decomposition approach can achieve positive semidefiniteness, it cannot guarantee sparsity of the estimator. Meinshausen et al. [2] used a neighbourhood selection scheme in which one sequentially estimates the support of each row of the precision matrix by fitting a lasso penalized least squares regression model. Peng et al. [3] considered a joint neighbourhood estimator using the lasso penalization. Yuan [4] considered the Dantzig selector to replace the lasso penalized least squares in the neighbourhood selection scheme. Cai et al. [5] considered a constrained $\ell_1$ minimization estimator for estimating sparse precision matrices. However, the methods mentioned above do not always achieve positive-definiteness.

To overcome difficulty 2), one possible method is to use the eigen-decomposition of $\hat\Theta$ and design $\hat\Theta$ to satisfy the condition $\{\Theta \succeq 0\}$. Assume that $\hat\Theta$ has the eigen-decomposition $\hat\Theta = \sum_{i=1}^{p}\lambda_i v_i v_i^{\mathrm T}$; then a positive semidefinite estimator is obtained by setting $\hat\Theta = \sum_{i=1}^{p}\max(\lambda_i,0)\, v_i v_i^{\mathrm T}$. However, this strategy destroys the sparsity pattern of $\hat\Theta$ for sparse precision matrix estimation. Yuan et al. [6] considered the lasso penalized likelihood criterion and used the maxdet algorithm to compute the estimator. Friedman et al. [7] considered the graphical lasso algorithm for solving the lasso penalized Gaussian likelihood estimator. Witten et al. [8] optimized the graphical lasso. Through the lasso or $\ell_1$ penalized Gaussian likelihood estimator, those methods simultaneously achieve positive-definiteness and sparsity.

Recently, Zhang et al. [9] considered a constrained convex optimization framework for the high-dimensional precision matrix. They used the lasso penalized D-trace loss in place of the traditional loss function and enforced the positive-definite constraint $\{\Theta \succeq \varepsilon I\}$ for some arbitrarily small $\varepsilon > 0$. Their work focuses on solving the following problem:

$$\hat\Theta_+ = \arg\min_{\Theta \succeq \varepsilon I} \frac{1}{2}\langle \Theta^2, \hat\Sigma_n\rangle - \operatorname{tr}(\Theta) + \lambda\|\Theta\|_{1,\mathrm{off}} \quad (1)$$

It is important to note that $\varepsilon$ is not a tuning parameter like $\lambda$. We simply include $\varepsilon$ in the procedure to ensure that the smallest eigenvalue of the estimator is at least $\varepsilon$. They developed an efficient alternating direction method of multipliers (ADMM) to solve the challenging optimization problem (1) and established its convergence properties.

To obtain a better estimator of the high-dimensional precision matrix and achieve a more optimal convergence rate, this paper proposes an effective algorithm, an accelerated gradient method ([10]), with a fast global convergence rate to solve problem (1). The method is mainly based on Nesterov's method for accelerating the gradient method ([11] [12]), which shows that, by exploiting the special structure of the trace norm, the classical gradient method for smooth problems can be adapted to solve trace-norm regularized nonsmooth problems. Our problem (1) involves an $\ell_1$ norm rather than the trace norm, but the method works similarly well in our setting. Numerical results show that this method applied to problem (1) not only has significant computational advantages, but also achieves the optimal convergence rate of $O\!\left(\frac{1}{k^2}\right)$.

The paper is organized as follows: Section 2 introduces our methodology, including model establishment in Section 2.1, step size estimation in Section 2.2, the accelerated gradient method algorithm in Section 2.3, and the convergence analysis of this algorithm in Section 2.4. Section 3 presents numerical results for our method in comparison with other methods. A discussion is given in Section 4. All proofs are given in the Appendix.

2. An Accelerated Gradient Method

2.1. Model Establishment

Following the introduction, our optimization problem with the D-trace loss function is as follows:

$$\min_{\Theta \succeq \varepsilon I} F(\Theta) := \frac{1}{2}\langle \Theta^2, \hat\Sigma_n\rangle - \operatorname{tr}(\Theta) + \lambda\|\Theta\|_{1,\mathrm{off}} \quad (2)$$

where $\lambda$ is a nonnegative penalization parameter, $\hat\Sigma_n = \frac{1}{n}\sum_{i=1}^{n}X_i X_i^{\mathrm T}$ is the sample covariance matrix, and $|\Theta|_1 = \|\Theta\|_{1,\mathrm{off}} = \sum_{i\neq j}|\Theta_{ij}|$ is the $\ell_1$ off-diagonal penalty. Define $f(\Theta) = \frac{1}{2}\langle \Theta^2, \hat\Sigma_n\rangle - \operatorname{tr}(\Theta)$; then $f(\Theta)$ is a continuously differentiable function. Consider the gradient step

$$\Theta_k = \Theta_{k-1} - \frac{1}{t_k}\nabla f(\Theta_{k-1}) \quad (3)$$

where $t_k > 0$ is a step size and $\nabla f(\Theta) = \frac{1}{2}(\Theta\hat\Sigma_n + \hat\Sigma_n\Theta) - I$. The gradient step (3) can be equivalently reformulated as a proximal regularization of the linearized function $f(\Theta)$ at $\Theta_{k-1}$:

$$\Theta_k = \arg\min_{\Theta \succeq \varepsilon I}\Phi_{t_k}(\Theta,\Theta_{k-1}) \quad (4)$$

where

$$\Phi_{t_k}(\Theta,\Theta_{k-1}) = f(\Theta_{k-1}) + \langle \Theta-\Theta_{k-1}, \nabla f(\Theta_{k-1})\rangle + \frac{t_k}{2}\|\Theta-\Theta_{k-1}\|_F^2 \quad (5)$$

Based on this equivalence relationship, the optimization problem (2) is solved by the following iterative step:

$$\begin{aligned}
\Theta_k &= \arg\min_{\Theta \succeq \varepsilon I}\Psi_{t_k}(\Theta,\Theta_{k-1})
:= \arg\min_{\Theta \succeq \varepsilon I}\Phi_{t_k}(\Theta,\Theta_{k-1}) + \lambda|\Theta|_1\\
&= \arg\min_{\Theta \succeq \varepsilon I}\frac{1}{2}\langle \Theta_{k-1}^2,\hat\Sigma_n\rangle - \operatorname{tr}(\Theta_{k-1}) + \left\langle \Theta-\Theta_{k-1},\ \tfrac{1}{2}(\Theta_{k-1}\hat\Sigma_n+\hat\Sigma_n\Theta_{k-1})-I\right\rangle + \frac{t_k}{2}\|\Theta-\Theta_{k-1}\|_F^2 + \lambda|\Theta|_1\\
&= \arg\min_{\Theta \succeq \varepsilon I}\frac{t_k}{2}\left\|\Theta - \left[\Theta_{k-1} - \frac{1}{t_k}\Big(\tfrac{1}{2}(\Theta_{k-1}\hat\Sigma_n+\hat\Sigma_n\Theta_{k-1})-I\Big)\right]\right\|_F^2 + \lambda|\Theta|_1
\end{aligned} \quad (6)$$

where the equality in the last line follows by ignoring terms that do not depend on $\Theta$.
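To make the reduction in (6) concrete, the following is a minimal NumPy sketch (ours, not from the original paper) of the gradient $\nabla f$ and of the matrix inside the Frobenius norm in (6); `grad_f` and `prox_argument` are hypothetical helper names.

```python
import numpy as np

def grad_f(Theta, Sigma_n):
    """Gradient of f(Theta) = 0.5*<Theta^2, Sigma_n> - tr(Theta):
    grad f(Theta) = 0.5*(Theta @ Sigma_n + Sigma_n @ Theta) - I."""
    return 0.5 * (Theta @ Sigma_n + Sigma_n @ Theta) - np.eye(Theta.shape[0])

def prox_argument(Theta_prev, Sigma_n, t_k):
    """Matrix B = Theta_{k-1} - (1/t_k) * grad_f(Theta_{k-1}) appearing in (6).
    The update Theta_k is obtained by soft-thresholding the off-diagonal entries
    of B with level lambda/t_k and projecting onto {Theta >= eps*I} (Theorem 1)."""
    return Theta_prev - grad_f(Theta_prev, Sigma_n) / t_k
```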

Define $(C)_+$ as the projection of a matrix $C$ onto the convex cone $\{C \succeq \varepsilon I\}$. Assume that $C$ has the eigen-decomposition $C = \sum_{j=1}^{p}\lambda_j v_j v_j^{\mathrm T}$; then $(C)_+$ can be obtained as $\sum_{j=1}^{p}\max(\lambda_j,\varepsilon)\, v_j v_j^{\mathrm T}$. Define an entry-wise soft-thresholding rule for all the off-diagonal elements of a matrix $Z$ as

$S(Z,\tau) = \{s(z_{jl},\tau)\}_{1\le j,l\le p}$ with $s(z_{jl},\tau) = \operatorname{sign}(z_{jl})\max(|z_{jl}|-\tau,0)\, I\{j\neq l\} + z_{jl}\, I\{j=l\}$.

Thus, the above problem can be summarized in the following theorem:

Theorem 1: Let $B \in \mathbb{R}^{n\times n}$, and let $\Theta$ be a symmetric matrix. Then

$$S(B) = \arg\min_{\Theta \succeq \varepsilon I}\left\{\frac{1}{2}\|\Theta-B\|_F^2 + \lambda|\Theta|_1\right\} \quad (7)$$

is given by $S(B) = (S(B,\lambda))_+$, where $S(B,\lambda) = \{s(B_{jl},\lambda)\}_{1\le j,l\le p}$ with $s(B_{jl},\lambda) = \operatorname{sign}(B_{jl})\max(|B_{jl}|-\lambda,0)\, I\{j\neq l\} + B_{jl}\, I\{j=l\}$.

The proof of this theorem follows easily by applying the soft-thresholding method.
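As a reading aid, here is a small NumPy sketch of the operator in Theorem 1, assuming the composition stated there (soft-threshold the off-diagonal entries, then project the eigenvalues onto $[\varepsilon,\infty)$); the function names are ours, not the authors' implementation.

```python
import numpy as np

def soft_threshold_offdiag(B, lam):
    """Entry-wise soft-thresholding S(B, lam): off-diagonal entries are shrunk
    towards zero by lam, diagonal entries are left untouched."""
    S = np.sign(B) * np.maximum(np.abs(B) - lam, 0.0)
    np.fill_diagonal(S, np.diag(B))
    return S

def project_psd(C, eps):
    """Projection (C)_+ onto the cone {Theta >= eps*I}: eigenvalues below eps
    are raised to eps (the input is symmetrized first for numerical safety)."""
    C = 0.5 * (C + C.T)
    vals, vecs = np.linalg.eigh(C)
    return (vecs * np.maximum(vals, eps)) @ vecs.T

def S_operator(B, lam, eps):
    """S(B) = (S(B, lam))_+ as stated in Theorem 1."""
    return project_psd(soft_threshold_offdiag(B, lam), eps)
```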

2.2. Step Size Estimation

To guarantee the convergence rate of the resulting iterative sequence, we first give the relationship between our proximal function $\Psi_{t_k}$ and the objective function $F$ at a certain point.

Lemma 1: Let

$$T_\mu(\tilde\Theta) = \arg\min_{\Theta \succeq \varepsilon I}\Psi_\mu(\Theta,\tilde\Theta) \quad (8)$$

where $\Psi$ is defined in Equation (6). Assume that the following inequality holds:

$$F(T_\mu(\tilde\Theta)) \le \Psi_\mu(T_\mu(\tilde\Theta),\tilde\Theta) \quad (9)$$

Then for any $\Theta \in \mathbb{R}^{n\times n}$:

$$F(\Theta) - F(T_\mu(\tilde\Theta)) \ge \frac{\mu}{2}\|T_\mu(\tilde\Theta)-\tilde\Theta\|_F^2 + \mu\langle \tilde\Theta-\Theta,\ T_\mu(\tilde\Theta)-\tilde\Theta\rangle \quad (10)$$

This lemma is proved in the Appendix.

At each iterative step of the algorithm, an appropriate step size $\mu$ is needed to satisfy $\Theta_k = T_\mu(\Theta_{k-1})$ and

$$F(\Theta_k) \le \Psi_\mu(\Theta_k,\Theta_{k-1}) \quad (11)$$

Since the gradient of $f(\cdot)$ is Lipschitz continuous, according to the work of Nesterov [11], we have the following lemma.

Lemma 2: Suppose that $f(X)$ is a convex function and that its gradient $\nabla f(X)$ is Lipschitz continuous with constant $L$. Then:

$$f(X) \le f(Y) + \langle X-Y, \nabla f(Y)\rangle + \frac{L}{2}\|X-Y\|_F^2, \quad \forall\, X, Y \in \mathbb{R}^{n\times n} \quad (12)$$

Applying this with $X = T_L(\tilde\Theta)$ and $Y = \tilde\Theta$, we have

$$f(T_L(\tilde\Theta)) \le f(\tilde\Theta) + \langle T_L(\tilde\Theta)-\tilde\Theta, \nabla f(\tilde\Theta)\rangle + \frac{L}{2}\|T_L(\tilde\Theta)-\tilde\Theta\|_F^2 \quad (13)$$

Hence, when $\mu \ge L$, we have:

$$F(T_\mu(\tilde\Theta)) \le \Phi_\mu(T_\mu(\tilde\Theta),\tilde\Theta) + \lambda|T_\mu(\tilde\Theta)|_1 = \Psi_\mu(T_\mu(\tilde\Theta),\tilde\Theta) \quad (14)$$

The above results show that the condition in Equation (11) is always satisfied with the update rule

$$\Theta_k = T_L(\Theta_{k-1}) \quad (15)$$
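The backtracking test in (11) only requires evaluating the objective $F$ and the surrogate $\Psi_\mu$. A small self-contained sketch (ours; `F_obj` and `Psi` are hypothetical names) could look like this:

```python
import numpy as np

def grad_f(Theta, Sigma_n):
    """grad f(Theta) = 0.5*(Theta Sigma_n + Sigma_n Theta) - I."""
    return 0.5 * (Theta @ Sigma_n + Sigma_n @ Theta) - np.eye(Theta.shape[0])

def l1_offdiag(Theta):
    """Off-diagonal l1 penalty |Theta|_1 = sum_{i != j} |Theta_ij|."""
    return np.abs(Theta).sum() - np.abs(np.diag(Theta)).sum()

def F_obj(Theta, Sigma_n, lam):
    """Objective F(Theta) in (2): 0.5*tr(Theta Sigma_n Theta) - tr(Theta) + lam*|Theta|_1."""
    return 0.5 * np.trace(Theta @ Sigma_n @ Theta) - np.trace(Theta) + lam * l1_offdiag(Theta)

def Psi(Theta, Theta_tilde, Sigma_n, lam, mu):
    """Surrogate Psi_mu(Theta, Theta_tilde) from (6); a step size mu is accepted
    by rule (11) once F_obj(Theta) <= Psi(Theta, Theta_tilde)."""
    D = Theta - Theta_tilde
    f_tilde = 0.5 * np.trace(Theta_tilde @ Sigma_n @ Theta_tilde) - np.trace(Theta_tilde)
    return (f_tilde + np.sum(D * grad_f(Theta_tilde, Sigma_n))
            + 0.5 * mu * np.linalg.norm(D, 'fro') ** 2 + lam * l1_offdiag(Theta))
```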

2.3. An Accelerated Gradient Method Algorithm

In practice, $L$ may be unknown or expensive to compute. The following step size estimation method is therefore used: give an initial estimate $L_0$ of $L$ and repeatedly increase this estimate by a multiplicative factor $\gamma > 1$ until the condition in Equation (11) is satisfied. It is well known ([11] [12]) that if the objective function is smooth, then the accelerated gradient method can achieve the optimal convergence rate of $O\!\left(\frac{1}{k^2}\right)$. Recently, other similar methods have been applied to problems consisting of a smooth part and a non-smooth part ([10] [13] [14] [15]). We now give the accelerated gradient algorithm for solving the optimization problem in Equation (2).

Algorithm 1: An accelerated gradient method algorithm for high-dimensional precision matrix estimation

1) Initialize: $L_0 > 0$, $\gamma > 1$, $\tilde\Theta_1 = \Theta_0 \in \mathbb{R}^{n\times n}$, $\alpha_1 = 1$

2) Iterate for $k = 1, 2, \ldots$:

3) Set $\bar L = L_{k-1}$

4) While $F(T_{\bar L}(\tilde\Theta_k)) > \Psi_{\bar L}(T_{\bar L}(\tilde\Theta_k), \tilde\Theta_k)$, set $\bar L := \gamma\bar L$

5) Set $L_k = \bar L$ and update

$\Theta_k = T_{L_k}(\tilde\Theta_k)$

$\alpha_{k+1} = \dfrac{1+\sqrt{1+4\alpha_k^2}}{2}$

$\tilde\Theta_{k+1} = \Theta_k + \left(\dfrac{\alpha_k-1}{\alpha_{k+1}}\right)(\Theta_k - \Theta_{k-1})$
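For illustration, the following is a self-contained NumPy sketch of Algorithm 1 under a few practical assumptions of ours (identity initialization $\Theta_0 = I$, a Frobenius-norm stopping rule, and default values of $L_0$, $\gamma$, $\varepsilon$); it is a sketch, not the authors' reference implementation.

```python
import numpy as np

def accelerated_gradient_dtrace(Sigma_n, lam, eps=1e-4, L0=1.0, gamma=2.0,
                                max_iter=500, tol=1e-6):
    """Sketch of Algorithm 1: accelerated gradient for the positive-definite
    lasso penalized D-trace problem (2)."""
    p = Sigma_n.shape[0]
    I = np.eye(p)

    grad = lambda T: 0.5 * (T @ Sigma_n + Sigma_n @ T) - I               # grad f(Theta)
    f = lambda T: 0.5 * np.trace(T @ Sigma_n @ T) - np.trace(T)          # smooth part
    pen = lambda T: lam * (np.abs(T).sum() - np.abs(np.diag(T)).sum())   # lam*|Theta|_1
    F = lambda T: f(T) + pen(T)                                          # objective (2)

    def T_mu(T_tilde, mu):
        """One proximal step (6): gradient step, off-diagonal soft-thresholding
        with level lam/mu, then eigenvalue projection onto {Theta >= eps*I}."""
        B = T_tilde - grad(T_tilde) / mu
        S = np.sign(B) * np.maximum(np.abs(B) - lam / mu, 0.0)
        np.fill_diagonal(S, np.diag(B))
        S = 0.5 * (S + S.T)
        vals, vecs = np.linalg.eigh(S)
        return (vecs * np.maximum(vals, eps)) @ vecs.T

    def Psi(T, T_tilde, mu):
        """Surrogate Psi_mu(T, T_tilde) used in the backtracking rule (11)."""
        D = T - T_tilde
        return (f(T_tilde) + np.sum(D * grad(T_tilde))
                + 0.5 * mu * np.linalg.norm(D, 'fro') ** 2 + pen(T))

    Theta_prev = I.copy()          # Theta_0 (assumed initialization)
    Theta_tilde = I.copy()         # tilde Theta_1 = Theta_0
    alpha, L = 1.0, L0
    for _ in range(max_iter):
        while True:                # step 4: backtracking search for L_k
            Theta = T_mu(Theta_tilde, L)
            if F(Theta) <= Psi(Theta, Theta_tilde, L):
                break
            L *= gamma
        alpha_next = (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0       # step 5
        Theta_tilde = Theta + ((alpha - 1.0) / alpha_next) * (Theta - Theta_prev)
        if np.linalg.norm(Theta - Theta_prev, 'fro') <= tol:             # practical stopping rule
            break
        Theta_prev, alpha = Theta, alpha_next
    return Theta
```

On the simulated models of Section 3, one would call, for example, `Theta_hat = accelerated_gradient_dtrace(Sigma_hat, lam=0.2)` for some choice of the tuning parameter $\lambda$.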

2.4. Convergence Analysis

In our method, two sequences $\Theta_k$ and $\tilde\Theta_k$ are updated recursively. In particular, $\Theta_k$ is the approximate solution at the $k$th step and $\tilde\Theta_k$ is called the search point ([11] [12]), which is constructed as a linear combination of the two latest approximate solutions $\Theta_{k-1}$ and $\Theta_{k-2}$. In this section, the convergence rate of the method is shown to be $O\!\left(\frac{1}{k^2}\right)$. This result is summarized in the following theorem.

Theorem 2: Let $\{\Theta_k\}$ and $\{\tilde\Theta_k\}$ be the matrix sequences generated by Algorithm 1. Then for any $k \ge 1$, we have

$$F(\Theta_k) - F(\Theta^*) \le \frac{2\gamma L\|\Theta_0 - \Theta^*\|_F^2}{(k+1)^2} \quad (16)$$

where $\Theta^* = \arg\min_{\Theta \succeq \varepsilon I} F(\Theta)$.

3. Simulation

In this section, we provide numerical results for our algorithm, which show its advantages on three models. In the simulation study, data were generated from $N(0, \Sigma^0)$, where $\Theta^0 = (\Sigma^0)^{-1}$. The sample size was taken to be $n = 400$ in all models, with $p = 500$ in Models 1 and 2 and $p = 484$ in Model 3, similar to Zhang et al. [9]. The three models are as follows:

Model 1: $\Theta^0_{i,i} = 1$, $\Theta^0_{i,j} = 0.2$ for $1 \le |i-j| \le 2$, and $\Theta^0_{i,j} = 0$ otherwise.

Model 2: $\Theta^0_{i,i} = 1$, $\Theta^0_{i,j} = 0.2$ for $1 \le |i-j| \le 4$, and $\Theta^0_{i,j} = 0$ otherwise.

Model 3: $\Theta^0_{i,i} = 1$, $\Theta^0_{i,i+1} = 0.2$ for $\operatorname{mod}(i, p^{1/2}) \neq 0$, $\Theta^0_{i,i+p^{1/2}} = 0.2$, and $\Theta^0_{i,j} = 0$ otherwise; this is the grid model in Ravikumar et al. [16] and requires $p^{1/2}$ to be an integer. (A small construction sketch follows the model list.)
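The following is our sketch of how the three true precision matrices and the corresponding data could be generated; the helper names and the random seed handling are ours.

```python
import numpy as np

def make_theta0(model, p):
    """True precision matrix Theta^0 for Models 1-3 (Model 3 needs p = m*m)."""
    Theta0 = np.eye(p)
    if model in (1, 2):
        band = 2 if model == 1 else 4          # 1 <= |i-j| <= 2 (Model 1) or 4 (Model 2)
        for k in range(1, band + 1):
            idx = np.arange(p - k)
            Theta0[idx, idx + k] = Theta0[idx + k, idx] = 0.2
    elif model == 3:
        m = int(round(np.sqrt(p)))
        assert m * m == p, "Model 3 requires p^(1/2) to be an integer"
        for i in range(p - 1):                  # horizontal grid edges, mod(i, p^(1/2)) != 0
            if (i + 1) % m != 0:
                Theta0[i, i + 1] = Theta0[i + 1, i] = 0.2
        for i in range(p - m):                  # vertical grid edges, Theta_{i, i+p^(1/2)} = 0.2
            Theta0[i, i + m] = Theta0[i + m, i] = 0.2
    return Theta0

def simulate(Theta0, n, seed=0):
    """Draw X_1,...,X_n ~ N(0, Sigma^0) with Sigma^0 = (Theta^0)^{-1} and return
    the sample covariance matrix Sigma_hat = (1/n) * sum_i X_i X_i^T."""
    rng = np.random.default_rng(seed)
    Sigma0 = np.linalg.inv(Theta0)
    X = rng.multivariate_normal(np.zeros(Theta0.shape[0]), Sigma0, size=n)
    return X.T @ X / n
```

For example, Model 3 with $p = 484$ corresponds to `make_theta0(3, 484)`, and `simulate(Theta0, n=400)` returns the sample covariance matrix fed to the algorithm.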

Simulation results based on 100 independent replications are shown in Table 1. We compare the three methods in terms of four quantities: the operator norm risk $E(\|\hat\Theta - \Theta^0\|_2)$, the matrix $\ell_{1,\infty}$ risk $E(\|\hat\Theta - \Theta^0\|_{1,\infty})$, and the percentages of correctly estimated nonzeros and zeros (TP and TN), where the $\ell_{1,\infty}$ norm $\max_i(\sum_j|X_{i,j}|)$ is written as $\|X\|_{1,\infty}$. In the first two columns smaller numbers are better; in the last two columns larger numbers are better. In general, Table 1 shows that our estimator performs better than Zhang et al.'s estimator and the lasso penalized Gaussian likelihood estimator.

Table 1. Comparison of our method with Zhang et al.'s method and the graphical lasso.
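The four reported quantities can be computed per replication as follows (and then averaged over the 100 replications); this is our sketch of the metrics described above, with a hypothetical numerical tolerance for declaring an estimated entry nonzero.

```python
import numpy as np

def metrics(Theta_hat, Theta0, tol=1e-8):
    """Operator norm loss, l_{1,inf} (maximum absolute row sum) loss, and the
    off-diagonal true positive / true negative rates (TP, TN)."""
    op_loss = np.linalg.norm(Theta_hat - Theta0, 2)              # spectral norm
    l1_inf_loss = np.abs(Theta_hat - Theta0).sum(axis=1).max()   # max_i sum_j |.|
    off = ~np.eye(Theta0.shape[0], dtype=bool)                   # off-diagonal mask
    true_nz = (Theta0 != 0) & off
    true_z = (Theta0 == 0) & off
    est_nz = np.abs(Theta_hat) > tol
    tp = (est_nz & true_nz).sum() / true_nz.sum()                # correctly found nonzeros
    tn = (~est_nz & true_z).sum() / true_z.sum()                 # correctly found zeros
    return op_loss, l1_inf_loss, tp, tn
```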

4. Conclusion

This paper estimates a positive-definite sparse precision matrix via the lasso penalized D-trace loss with an efficient accelerated gradient method. Positive-definiteness and sparsity are the most important properties of large precision matrices; our method not only achieves both efficiently, but also exhibits a better convergence rate. Numerical results show that our estimator also performs better than Zhang et al.'s method and the graphical lasso method.

Funding

This project was supported by National Natural Science Foundation of China (71601003) and the National Statistical Scientific Research Projects (2015LZ54).

Cite this paper

Xia, L., Huang, X.D., Wang, G.P. and Wu, T. (2017) Positive-Definite Sparse Precision Matrix Estimation. Advances in Pure Mathematics, 7, 21-30. http://dx.doi.org/10.4236/apm.2017.71002

References

1. Huang, J., Liu, N., Pourahmadi, M. and Liu, L. (2006) Covariance Matrix Selection and Estimation via Penalised Normal Likelihood. Biometrika, 93, 85-98. https://doi.org/10.1093/biomet/93.1.85

2. Meinshausen, N. and Bühlmann, P. (2006) High-Dimensional Graphs and Variable Selection with the Lasso. Annals of Statistics, 34, 1436-1462. https://doi.org/10.1214/009053606000000281

3. Peng, J., Wang, P., Zhou, N. and Zhu, J. (2009) Partial Correlation Estimation by Joint Sparse Regression Models. Journal of the American Statistical Association, 104, 735-746. https://doi.org/10.1198/jasa.2009.0126

4. Yuan, M. (2010) High Dimensional Inverse Covariance Matrix Estimation via Linear Programming. Journal of Machine Learning Research, 11, 2261-2286.

5. Cai, T., Liu, W. and Luo, X. (2011) A Constrained $\ell_1$ Minimization Approach to Sparse Precision Matrix Estimation. Journal of the American Statistical Association, 106, 594-607. https://doi.org/10.1198/jasa.2011.tm10155

6. Yuan, M. and Lin, Y. (2007) Model Selection and Estimation in the Gaussian Graphical Model. Biometrika, 94, 19-35. https://doi.org/10.1093/biomet/asm018

7. Friedman, J.H., Hastie, T.J. and Tibshirani, R.J. (2008) Sparse Inverse Covariance Estimation with the Graphical Lasso. Biostatistics, 9, 432-441. https://doi.org/10.1093/biostatistics/kxm045

8. Witten, D., Friedman, J.H. and Simon, N. (2011) New Insights and Faster Computations for the Graphical Lasso. Journal of Computational and Graphical Statistics, 20, 892-900. https://doi.org/10.1198/jcgs.2011.11051a

9. Zhang, T. and Zou, H. (2014) Sparse Precision Matrix Estimation via Lasso Penalized D-Trace Loss. Biometrika, 101, 103-120. https://doi.org/10.1093/biomet/ast059

10. Ji, S. and Ye, J. (2009) An Accelerated Gradient Method for Trace Norm Minimization. Proceedings of the 26th International Conference on Machine Learning, 457-464. https://doi.org/10.1145/1553374.1553434

11. Nesterov, Y. (1983) A Method for Solving a Convex Programming Problem with Convergence Rate O(1/k^2). Soviet Mathematics Doklady, 27, 372-376.

12. Nesterov, Y. (2003) Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers.

13. Toh, K.C. and Yun, S. (2010) An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Linear Least Squares Problems. Pacific Journal of Optimization, 6, 615-640.

14. Tseng, P. (2008) On Accelerated Proximal Gradient Methods for Convex-Concave Optimization. Submitted to SIAM Journal on Optimization.

15. Beck, A. and Teboulle, M. (2009) A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2, 183-202. https://doi.org/10.1137/080716542

16. Ravikumar, P., Wainwright, M., Raskutti, G. and Yu, B. (2011) High-Dimensional Covariance Estimation by Minimizing l1-Penalized Log-Determinant Divergence. Electronic Journal of Statistics, 5, 935-980. https://doi.org/10.1214/11-EJS631

Appendix: Proofs of Theorems and Lemmas

Proof of Lemma 1

Since both the trace function and the $\ell_1$ norm are convex functions, we have

$$\frac{1}{2}\langle \Theta^2,\hat\Sigma_n\rangle - \operatorname{tr}(\Theta) = \frac{1}{2}\operatorname{tr}(\Theta^2\hat\Sigma_n) - \operatorname{tr}(\Theta) \ge \frac{1}{2}\operatorname{tr}(\tilde\Theta^2\hat\Sigma_n) - \operatorname{tr}(\tilde\Theta) + \left\langle \Theta-\tilde\Theta,\ \tfrac{1}{2}(\tilde\Theta\hat\Sigma_n + \hat\Sigma_n\tilde\Theta) - I\right\rangle \quad (17)$$

$$\lambda|\Theta|_1 \ge \lambda|T_L(\tilde\Theta)|_1 + \lambda\left\langle \Theta - T_L(\tilde\Theta),\ g(T_L(\tilde\Theta))\right\rangle \quad (18)$$

where $g(T_L(\tilde\Theta)) \in \partial|T_L(\tilde\Theta)|_1$ is a subgradient of the $\ell_1$ norm at the point $T_L(\tilde\Theta)$.

Since $F(T_\mu(\tilde\Theta)) \le \Psi_\mu(T_\mu(\tilde\Theta),\tilde\Theta)$, combining this with Equations (17) and (18) gives

$$\begin{aligned}
F(\Theta) - F(T_L(\tilde\Theta)) &\ge F(\Theta) - \Psi_L(T_L(\tilde\Theta),\tilde\Theta)\\
&\ge \left\langle \Theta-\tilde\Theta,\ \tfrac{1}{2}(\tilde\Theta\hat\Sigma_n+\hat\Sigma_n\tilde\Theta)-I\right\rangle + \lambda\left\langle \Theta-T_L(\tilde\Theta),\ g(T_L(\tilde\Theta))\right\rangle\\
&\quad - \left\langle T_L(\tilde\Theta)-\tilde\Theta,\ \tfrac{1}{2}(\tilde\Theta\hat\Sigma_n+\hat\Sigma_n\tilde\Theta)-I\right\rangle - \tfrac{L}{2}\left\|T_L(\tilde\Theta)-\tilde\Theta\right\|_F^2\\
&= \left\langle \Theta-T_L(\tilde\Theta),\ \tfrac{1}{2}(\tilde\Theta\hat\Sigma_n+\hat\Sigma_n\tilde\Theta)-I + \lambda g(T_L(\tilde\Theta))\right\rangle - \tfrac{L}{2}\left\|T_L(\tilde\Theta)-\tilde\Theta\right\|_F^2
\end{aligned} \quad (19)$$

Since $T_L(\tilde\Theta)$ is a minimizer of $\Psi_L(\Theta,\tilde\Theta)$, we have

$$\frac{1}{2}(\tilde\Theta\hat\Sigma_n + \hat\Sigma_n\tilde\Theta) - I + L\big(T_L(\tilde\Theta)-\tilde\Theta\big) + \lambda g(T_L(\tilde\Theta)) = 0 \quad (20)$$

So Equation (19) can be simplified as:

$$\begin{aligned}
F(\Theta) - F(T_L(\tilde\Theta)) &\ge \left\langle \Theta-T_L(\tilde\Theta),\ \tfrac{1}{2}(\tilde\Theta\hat\Sigma_n+\hat\Sigma_n\tilde\Theta)-I + \lambda g(T_L(\tilde\Theta))\right\rangle - \tfrac{L}{2}\left\|T_L(\tilde\Theta)-\tilde\Theta\right\|_F^2\\
&= \left\langle \Theta-T_L(\tilde\Theta),\ -L\big(T_L(\tilde\Theta)-\tilde\Theta\big)\right\rangle - \tfrac{L}{2}\left\|T_L(\tilde\Theta)-\tilde\Theta\right\|_F^2\\
&= L\left\langle \tilde\Theta-\Theta,\ T_L(\tilde\Theta)-\tilde\Theta\right\rangle + \tfrac{L}{2}\left\|T_L(\tilde\Theta)-\tilde\Theta\right\|_F^2
\end{aligned} \quad (21)$$

Proof of Theorem 2

Define $U_k = \alpha_k\Theta_k - (\alpha_k-1)\Theta_{k-1} - \Theta^*$ and $V_k = F(\Theta_k) - F(\Theta^*)$. It is easy to obtain

$$\frac{2}{L_{k+1}}\big(\alpha_k^2 V_k - \alpha_{k+1}^2 V_{k+1}\big) \ge \|U_{k+1}\|_F^2 - \|U_k\|_F^2 \quad (22)$$

Since $L_{k+1} \ge L_k$, we have

$$\frac{2}{L_k}\alpha_k^2 V_k - \frac{2}{L_{k+1}}\alpha_{k+1}^2 V_{k+1} \ge \|U_{k+1}\|_F^2 - \|U_k\|_F^2 \quad (23)$$

By applying Lemma 1, we easily obtain:

$$F(\Theta^*) - F(\Theta_1) = F(\Theta^*) - F(T_{L_1}(\tilde\Theta_1)) \ge \frac{L_1}{2}\|\Theta_1-\Theta^*\|_F^2 - \frac{L_1}{2}\|\tilde\Theta_1-\Theta^*\|_F^2 \quad (24)$$
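For completeness, the following short derivation (ours, not in the original) shows how (24) follows from Lemma 1 applied with $\mu = L_1$, $\tilde\Theta = \tilde\Theta_1$, $\Theta = \Theta^*$, using $\Theta_1 = T_{L_1}(\tilde\Theta_1)$ and a completing-the-square identity:

$$\begin{aligned}
F(\Theta^*) - F(T_{L_1}(\tilde\Theta_1))
&\ge \frac{L_1}{2}\left\|T_{L_1}(\tilde\Theta_1)-\tilde\Theta_1\right\|_F^2
   + L_1\left\langle \tilde\Theta_1-\Theta^*,\ T_{L_1}(\tilde\Theta_1)-\tilde\Theta_1\right\rangle \\
&= \frac{L_1}{2}\left(\left\|\big(T_{L_1}(\tilde\Theta_1)-\tilde\Theta_1\big)+\big(\tilde\Theta_1-\Theta^*\big)\right\|_F^2
   - \left\|\tilde\Theta_1-\Theta^*\right\|_F^2\right) \\
&= \frac{L_1}{2}\left\|\Theta_1-\Theta^*\right\|_F^2 - \frac{L_1}{2}\left\|\tilde\Theta_1-\Theta^*\right\|_F^2 .
\end{aligned}$$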

Rearranging (24) gives:

$$\frac{2V_1}{L_1} \le \|\tilde\Theta_1-\Theta^*\|_F^2 - \|\Theta_1-\Theta^*\|_F^2 \quad (25)$$

Applying (23) and (25), we then have:

$$V_{k+1} \le \frac{L_{k+1}}{2\alpha_{k+1}^2}\|\tilde\Theta_1-\Theta^*\|_F^2 \quad (26)$$

Combining Equation (26) and the relation $\alpha_k^2 \ge \frac{(k+1)^2}{4}$, we easily obtain:

$$F(\Theta_k) - F(\Theta^*) \le \frac{2L_k\|\Theta_0-\Theta^*\|_F^2}{(k+1)^2} \le \frac{2\gamma L\|\Theta_0-\Theta^*\|_F^2}{(k+1)^2} \quad (27)$$
