Advances in Pure Mathematics
Vol.07 No.01(2017), Article ID:73737,10 pages
10.4236/apm.2017.71002
Positive-Definite Sparse Precision Matrix Estimation
Lin Xia1, Xudong Huang1, Guanpeng Wang1, Tao Wu2
1School of Mathematics and Computer Science, Anhui Normal University, Wuhu, China
2School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, China
Copyright © 2017 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Received: December 29, 2016; Accepted: January 19, 2017; Published: January 23, 2017
ABSTRACT
Positive-definiteness and sparsity are the most important properties of high-dimensional precision matrices. To better achieve them, this paper uses a lasso-penalized D-trace loss under a positive-definiteness constraint to estimate high-dimensional precision matrices. We derive an efficient accelerated gradient method to solve the resulting challenging optimization problem and establish its convergence rate as $O(1/k^2)$. Numerical simulations illustrate that our method is competitive with other methods.
Keywords:
Positive-Definiteness, Sparsity, D-Trace Loss, Accelerated Gradient Method
1. Introduction
Over the past twenty years, high-dimensional data has been one of the most active directions in statistics, with wide applications in functional magnetic resonance imaging (fMRI), bioinformatics, Web mining, climate research, risk management and social science. Both in theory and in practice, estimation of the high-dimensional precision matrix plays a very important role in many of these fields.
Thus, estimating a high-dimensional precision matrix is an increasingly crucial question in many fields. It presents two difficulties: 1) sparsity of the estimator; 2) the positive-definiteness constraint. Huang et al. [1] considered using the Cholesky decomposition to estimate the precision matrix. Although the regularized Cholesky decomposition approach can achieve positive-semidefiniteness, it cannot guarantee sparsity of the estimator. Meinshausen et al. [2] used a neighbourhood selection scheme in which one sequentially estimates the support of each row of the precision matrix by fitting a lasso-penalized least-squares regression model. Peng et al. [3] considered a joint neighbourhood estimator using the lasso penalty. Yuan [4] replaced the lasso-penalized least squares in the neighbourhood selection scheme with the Dantzig selector. Cai et al. [5] considered a constrained $\ell_1$ minimization estimator for estimating sparse precision matrices. However, the methods mentioned above do not always achieve positive-definiteness.
To overcome difficulty 2), one possible method is to use the eigen-decomposition of an estimator $\hat{\Theta}$ and design $\tilde{\Theta}$ to satisfy $\tilde{\Theta} \succeq 0$. Assume that $\hat{\Theta}$ has the eigen-decomposition $\hat{\Theta} = \sum_{i=1}^{p} \lambda_i v_i v_i^{\mathrm{T}}$; a positive semidefinite estimator is then obtained by setting $\tilde{\Theta} = \sum_{i=1}^{p} \max(\lambda_i, 0)\, v_i v_i^{\mathrm{T}}$. However, this strategy destroys the sparsity pattern of $\hat{\Theta}$ for sparse precision matrix estimation. Yuan et al. [6] considered the lasso-penalized likelihood criterion and used the maxdet algorithm to compute the estimator. Friedman et al. [7] proposed the graphical lasso algorithm for solving the lasso-penalized Gaussian likelihood estimator. Witten et al. [8] optimized the graphical lasso. Via the lasso-penalized Gaussian likelihood, these methods simultaneously achieve positive-definiteness and sparsity.
Recently, Zhang et al. [9] considered a constrained convex optimization framework for high-dimensional precision matrices. They replaced the traditional lasso likelihood with a lasso-penalized D-trace loss and enforced the positive-definiteness constraint $\Theta \succeq \epsilon I$ for some arbitrarily small $\epsilon > 0$. Their work focuses on solving the following problem:

$$\hat{\Theta} = \arg\min_{\Theta \succeq \epsilon I} \; \frac{1}{2}\langle \Theta^{2}, \hat{\Sigma} \rangle - \mathrm{tr}(\Theta) + \lambda \|\Theta\|_{1,\mathrm{off}} \qquad (1)$$
It is important to note that $\epsilon$ is not a tuning parameter like $\lambda$. We simply include it in the procedure to ensure that the smallest eigenvalue of the estimator is at least $\epsilon$. Zhang et al. developed an efficient alternating direction method of multipliers (ADMM) to solve the challenging optimization problem (1) and established its convergence properties.
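For concreteness, the D-trace loss and its gradient, which drive everything that follows, can be sketched in Python; the function and variable names below are ours, for illustration only:

```python
import numpy as np

def dtrace_loss(theta, sigma):
    """D-trace loss: (1/2) <Theta^2, Sigma> - tr(Theta)."""
    return 0.5 * np.trace(theta @ theta @ sigma) - np.trace(theta)

def dtrace_grad(theta, sigma):
    """Gradient of the D-trace loss: (Theta Sigma + Sigma Theta)/2 - I."""
    return 0.5 * (theta @ sigma + sigma @ theta) - np.eye(theta.shape[0])
```

Without the penalty term, this loss is minimized exactly at $\Theta = \Sigma^{-1}$, which is why it targets the precision matrix.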
To obtain a better estimator for the high-dimensional precision matrix and achieve a more optimal convergence rate, this paper proposes an effective algorithm, an accelerated gradient method ([10]), with a fast global convergence rate for solving problem (1). The method is mainly based on Nesterov's technique for accelerating the gradient method ([11] [12]), which shows that by exploiting the special structure of the regularizer, the classical gradient method for smooth problems can be adapted to regularized nonsmooth problems. Although our problem (1) involves an $\ell_1$-type penalty rather than the trace norm considered in [10], the method works similarly and efficiently here. Numerical results show that for problem (1) the method not only has significant computational advantages, but also achieves the optimal convergence rate $O(1/k^2)$.
The paper is organized as follows: Section 2 introduces our methodology, including the model in Section 2.1, step size estimation in Section 2.2, the accelerated gradient method algorithm in Section 2.3, and the convergence analysis of the algorithm in Section 2.4. Section 3 presents numerical results comparing our method with other methods, and a discussion is given in Section 4. All proofs are given in the Appendix.
2. An Accelerated Gradient Method
2.1. Model Establishing
Following the introduction, our optimization problem with the D-trace loss is as follows:

$$\hat{\Theta} = \arg\min_{\Theta \succeq \epsilon I} \; \frac{1}{2}\langle \Theta^{2}, \hat{\Sigma} \rangle - \mathrm{tr}(\Theta) + \lambda \|\Theta\|_{1,\mathrm{off}} \qquad (2)$$

where $\lambda$ is a nonnegative penalization parameter and $\hat{\Sigma}$ is the sample covariance matrix. Define $f(\Theta) = \frac{1}{2}\langle \Theta^{2}, \hat{\Sigma} \rangle - \mathrm{tr}(\Theta)$ and let $\|\Theta\|_{1,\mathrm{off}} = \sum_{i \neq j} |\theta_{ij}|$ be the off-diagonal penalty; $f$ is a continuously differentiable function. Consider the gradient step

$$\Theta_k = \Theta_{k-1} - t_k \nabla f(\Theta_{k-1}) \qquad (3)$$

where $t_k$ is a stepsize. The smooth gradient step (3) can be reformulated equivalently as a proximal regularization of the linearized function $f$ at $\Theta_{k-1}$:

$$\Theta_k = \arg\min_{\Theta} \; f(\Theta_{k-1}) + \langle \Theta - \Theta_{k-1}, \nabla f(\Theta_{k-1}) \rangle + \frac{1}{2 t_k}\|\Theta - \Theta_{k-1}\|_F^2 \qquad (4)$$

where

$$\nabla f(\Theta) = \frac{1}{2}\big( \Theta \hat{\Sigma} + \hat{\Sigma} \Theta \big) - I. \qquad (5)$$
Based on this equivalence, the optimization problem (2) is solved by the following iterative step:

$$\begin{aligned} \Theta_k &= \arg\min_{\Theta \succeq \epsilon I} \; f(\Theta_{k-1}) + \langle \Theta - \Theta_{k-1}, \nabla f(\Theta_{k-1}) \rangle + \frac{1}{2 t_k}\|\Theta - \Theta_{k-1}\|_F^2 + \lambda \|\Theta\|_{1,\mathrm{off}} \\ &= \arg\min_{\Theta \succeq \epsilon I} \; \frac{1}{2 t_k}\big\| \Theta - \big( \Theta_{k-1} - t_k \nabla f(\Theta_{k-1}) \big) \big\|_F^2 + \lambda \|\Theta\|_{1,\mathrm{off}} \qquad (6) \end{aligned}$$

with equality in the last line obtained by ignoring terms that do not depend on $\Theta$.
Define $(A)_+$ as the projection of a matrix $A$ onto the convex cone $\{\Theta \succeq \epsilon I\}$. Assuming that $A$ has the eigen-decomposition $\sum_{i=1}^{p} \lambda_i v_i v_i^{\mathrm{T}}$, then $(A)_+$ can be obtained as $\sum_{i=1}^{p} \max(\lambda_i, \epsilon)\, v_i v_i^{\mathrm{T}}$. Define an entry-wise soft-thresholding rule for all the off-diagonal elements of a matrix $A$ as $\mathcal{S}(A, \lambda) = \{ s(a_{ij}, \lambda) \}_{p \times p}$ with $s(a_{ij}, \lambda) = \mathrm{sign}(a_{ij}) \max(|a_{ij}| - \lambda, 0)$ for $i \neq j$ and $s(a_{ii}, \lambda) = a_{ii}$. The above problem can then be summarized in the following theorem:
Theorem 1: Let $B = \Theta_{k-1} - t_k \nabla f(\Theta_{k-1})$, where $B$ is a symmetric matrix. Then the solution of

$$\min_{\Theta} \; \frac{1}{2 t_k}\|\Theta - B\|_F^2 + \lambda \|\Theta\|_{1,\mathrm{off}} \qquad (7)$$

is given by $\mathcal{S}(B, t_k \lambda)$, where $\mathcal{S}$ is the entry-wise soft-thresholding rule defined above with threshold $t_k \lambda$ on the off-diagonal entries.

The proof of this theorem follows easily by applying the soft-thresholding method.
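The two operators just defined, the off-diagonal soft-thresholding rule and the projection $(A)_+$ onto the cone, can be written compactly. A minimal Python sketch (function names are ours):

```python
import numpy as np

def soft_threshold_offdiag(A, lam):
    """Entry-wise soft-thresholding of the off-diagonal entries:
    s(a_ij, lam) = sign(a_ij) * max(|a_ij| - lam, 0); diagonal untouched."""
    S = np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)
    np.fill_diagonal(S, np.diag(A))
    return S

def project_cone(A, eps):
    """Projection (A)_+ onto {Theta >= eps * I}: raise every eigenvalue
    below eps up to eps, keeping the eigenvectors."""
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, eps)) @ V.T
```

The projection costs one eigen-decomposition per call, which dominates the per-iteration cost of the resulting algorithm.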
2.2. Step Size Estimation
To guarantee the convergence rate of the resulting iterative sequence, we first give the relationship between our proximal function and the objective function $F(\Theta) = f(\Theta) + \lambda\|\Theta\|_{1,\mathrm{off}}$ at a certain point.

Lemma 1: Let

$$Q_t(\Theta, \tilde{\Theta}) = f(\tilde{\Theta}) + \langle \Theta - \tilde{\Theta}, \nabla f(\tilde{\Theta}) \rangle + \frac{1}{2t}\|\Theta - \tilde{\Theta}\|_F^2 + \lambda \|\Theta\|_{1,\mathrm{off}} \qquad (8)$$

and let $q_t(\tilde{\Theta})$ denote its minimizer, as defined in Equation (6). Assume that the following inequality holds:

$$F\big(q_t(\tilde{\Theta})\big) \le Q_t\big(q_t(\tilde{\Theta}), \tilde{\Theta}\big). \qquad (9)$$

Then for any $\Theta$:

$$F(\Theta) - F\big(q_t(\tilde{\Theta})\big) \ge \frac{1}{2t}\big\| q_t(\tilde{\Theta}) - \tilde{\Theta} \big\|_F^2 + \frac{1}{t}\big\langle \tilde{\Theta} - \Theta, \; q_t(\tilde{\Theta}) - \tilde{\Theta} \big\rangle. \qquad (10)$$
This lemma is proved in the Appendix.
At each iterative step of the algorithm, an appropriate step size $t_k$ is needed, satisfying $\Theta_k = q_{t_k}(\Theta_{k-1})$ and

$$F(\Theta_k) \le Q_{t_k}(\Theta_k, \Theta_{k-1}). \qquad (11)$$
Since the gradient of $f(\cdot)$ is Lipschitz continuous, by the work of Nesterov et al. [11] we have the following lemma.

Lemma 2: Suppose that $f$ is a convex function whose gradient $\nabla f$ is Lipschitz continuous with constant $L$. Then:

$$f(\Theta) \le f(\tilde{\Theta}) + \langle \Theta - \tilde{\Theta}, \nabla f(\tilde{\Theta}) \rangle + \frac{L}{2}\|\Theta - \tilde{\Theta}\|_F^2 \qquad (12)$$

so we have

$$F(\Theta) \le Q_t(\Theta, \tilde{\Theta}) + \frac{1}{2}\Big( L - \frac{1}{t} \Big)\|\Theta - \tilde{\Theta}\|_F^2. \qquad (13)$$

Hence, when $t \le 1/L$:

$$F\big(q_t(\tilde{\Theta})\big) \le Q_t\big(q_t(\tilde{\Theta}), \tilde{\Theta}\big). \qquad (14)$$
The above results show that the condition in Equation (11) is always satisfied under the update rule

$$t_k = 1 / L_k, \qquad L_k = \gamma^{i_k} L_{k-1} \qquad (15)$$

where $\gamma > 1$ and $i_k$ is the smallest nonnegative integer such that Equation (11) holds.
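For the D-trace loss, $\nabla f(\Theta) = (\Theta\hat{\Sigma} + \hat{\Sigma}\Theta)/2 - I$, so a valid Lipschitz constant is $L = \lambda_{\max}(\hat{\Sigma})$, since $\|(\Delta\hat{\Sigma} + \hat{\Sigma}\Delta)/2\|_F \le \|\hat{\Sigma}\|_2 \|\Delta\|_F$. A quick numerical check of this bound (all names ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
sigma = A @ A.T / 6                  # plays the role of a sample covariance
L = np.linalg.eigvalsh(sigma).max()  # claimed Lipschitz constant of grad f

def grad_f(theta):
    """Gradient of the D-trace loss: (Theta Sigma + Sigma Theta)/2 - I."""
    return 0.5 * (theta @ sigma + sigma @ theta) - np.eye(6)

def lipschitz_ratio(trials=200):
    """Largest ||grad f(X) - grad f(Y)||_F / ||X - Y||_F seen over
    random symmetric pairs; should never exceed L."""
    worst = 0.0
    for _ in range(trials):
        X = rng.standard_normal((6, 6)); X = (X + X.T) / 2
        Y = rng.standard_normal((6, 6)); Y = (Y + Y.T) / 2
        worst = max(worst, np.linalg.norm(grad_f(X) - grad_f(Y))
                    / np.linalg.norm(X - Y))
    return worst
```

In practice this means the backtracking loop stops once the estimate reaches $\lambda_{\max}(\hat{\Sigma})$, so the number of backtracking steps is bounded.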
2.3. An Accelerated Gradient Method Algorithm
In practice, $L$ may be unknown or expensive to compute. The step size estimation method used here therefore starts from an initial estimate $L_0$ of $L$ and repeatedly increases this estimate by a multiplicative factor $\gamma > 1$ until the condition in Equation (11) is satisfied. It is well known ([11] [12]) that if the objective function is smooth, the accelerated gradient method achieves the optimal convergence rate $O(1/k^2)$. Recently, similar methods have been applied to problems consisting of a smooth part and a non-smooth part ([10] [13] [14] [15]). We now give the accelerated gradient algorithm for solving the optimization problem in Equation (2).
Algorithm 1: An accelerated gradient method algorithm for high-dimensional precision matrix estimation

1) Initialize: $L_0 > 0$, $\gamma > 1$, $\alpha_0 = \alpha_1 = 1$, $\Theta_0 = \Theta_1 = \epsilon I$
2) Iterate for $k = 1, 2, \ldots$:
3) Set the search point $S_k = \Theta_k + \dfrac{\alpha_{k-1} - 1}{\alpha_k}\big(\Theta_k - \Theta_{k-1}\big)$ and set $\bar{L} = L_{k-1}$
4) While $F\big(q_{1/\bar{L}}(S_k)\big) > Q_{1/\bar{L}}\big(q_{1/\bar{L}}(S_k), S_k\big)$, set $\bar{L} = \gamma \bar{L}$
5) Set $L_k = \bar{L}$ and update $\Theta_{k+1} = q_{1/L_k}(S_k)$, $\alpha_{k+1} = \dfrac{1 + \sqrt{1 + 4\alpha_k^2}}{2}$
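Putting the pieces together, the steps above can be sketched in Python. This is a minimal sketch of our reading of Algorithm 1: the inner update applies the soft-thresholding of Theorem 1 followed by projection onto $\{\Theta \succeq \epsilon I\}$, and the function name and default values are our assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def sparse_precision_apg(sigma, lam=0.1, eps=1e-4, L0=1.0, gamma=2.0,
                         max_iter=200, tol=1e-7):
    """Accelerated gradient sketch for the lasso-penalized D-trace loss
    min (1/2)<Theta^2, Sigma> - tr(Theta) + lam*||Theta||_{1,off}
    over {Theta >= eps * I}."""
    p = sigma.shape[0]
    I = np.eye(p)

    def f(theta):                    # smooth part of the objective
        return 0.5 * np.trace(theta @ theta @ sigma) - np.trace(theta)

    def grad(theta):                 # its gradient
        return 0.5 * (theta @ sigma + sigma @ theta) - I

    def update(A, t):                # soft-threshold off-diagonals, then project
        S = np.sign(A) * np.maximum(np.abs(A) - t * lam, 0.0)
        np.fill_diagonal(S, np.diag(A))
        w, V = np.linalg.eigh((S + S.T) / 2)
        return (V * np.maximum(w, eps)) @ V.T

    theta = theta_prev = I
    alpha_prev = alpha = 1.0
    L = L0
    for _ in range(max_iter):
        # search point: combination of the two latest approximate solutions
        S_k = theta + ((alpha_prev - 1.0) / alpha) * (theta - theta_prev)
        g = grad(S_k)
        while True:                  # backtracking on the Lipschitz estimate
            cand = update(S_k - g / L, 1.0 / L)
            d = cand - S_k
            if f(cand) <= f(S_k) + np.sum(g * d) + 0.5 * L * np.sum(d * d):
                break
            L *= gamma
        theta_prev, theta = theta, cand
        alpha_prev, alpha = alpha, (1.0 + np.sqrt(1.0 + 4.0 * alpha**2)) / 2.0
        if np.max(np.abs(theta - theta_prev)) < tol:
            break
    return theta
```

For small $\lambda$ the estimate approaches $\hat{\Sigma}^{-1}$ while the projection keeps every eigenvalue at least $\epsilon$.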
2.4. Convergence Analysis
In our method, two sequences $\{\Theta_k\}$ and $\{S_k\}$ are updated recursively. In particular, $\Theta_k$ is the approximate solution at the $k$th step and $S_k$ is called the search point ([11] [12]), which is constructed as a linear combination of the latest two approximate solutions $\Theta_{k-1}$ and $\Theta_k$. In this section, the convergence rate of the method is shown to be $O(1/k^2)$. This result is summarized in the following theorem.
Theorem 2: Let $\{\Theta_k\}$ and $\{S_k\}$ be the sequences generated by our algorithm, and let $\Theta^* = \arg\min_{\Theta \succeq \epsilon I} F(\Theta)$. Then for any $k \ge 1$ we have

$$F(\Theta_k) - F(\Theta^*) \le \frac{2 \gamma L \|\Theta_0 - \Theta^*\|_F^2}{(k+1)^2} \qquad (16)$$

where $L$ is the Lipschitz constant of $\nabla f$.
3. Simulation
In this section, we provide numerical results for our algorithm, illustrating its advantages on three models. In the simulation study, data were generated from $N(0, \Sigma)$ with $\Sigma = \Omega^{-1}$. The sample size was taken to be $n = 400$ in all models, with $p = 500$ in Models 1 and 2 and $p = 484$ in Model 3, similar to Zhang et al. [9]. The three models are as follows:
Model 1: $\omega_{ij} = \cdots$ for $\cdots$ and $\omega_{ij} = 0$ otherwise.
Model 2: $\omega_{ij} = \cdots$ for $\cdots$ and $\omega_{ij} = 0$ otherwise.
Model 3: $\omega_{ij} = \cdots$ for $\cdots$ (mod $\cdots$), and $\omega_{ij} = 0$ otherwise; this is the grid model in Ravikumar et al. [16] and requires $\sqrt{p}$ to be an integer.
Simulation results based on 100 independent replications are shown in Table 1. We mainly compare the three methods in terms of four quantities: the operator norm risk $E\|\hat{\Omega} - \Omega\|_2$, the Frobenius norm risk $E\|\hat{\Omega} - \Omega\|_F$, and the percentages of correctly estimated nonzeros and zeros (TP and TN). In the first two columns smaller numbers are better; in the last two columns larger numbers are better. In general, Table 1 shows that our estimator performs better than Zhang et al.'s estimator and the lasso-penalized Gaussian likelihood estimator.
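The TP and TN percentages can be computed as sketched below; this is a hypothetical helper (names ours), with the convention that both are measured over the off-diagonal entries:

```python
import numpy as np

def support_recovery(est, truth, tol=1e-8):
    """Percentage of correctly estimated nonzeros (TP) and zeros (TN)
    among the off-diagonal entries of the precision matrix."""
    off = ~np.eye(truth.shape[0], dtype=bool)
    nonzero = (np.abs(truth) > tol) & off
    zero = (np.abs(truth) <= tol) & off
    tp = 100.0 * np.mean(np.abs(est[nonzero]) > tol) if nonzero.any() else 100.0
    tn = 100.0 * np.mean(np.abs(est[zero]) <= tol) if zero.any() else 100.0
    return tp, tn
```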
4. Conclusion
This paper estimates a positive-definite sparse precision matrix via a lasso-penalized D-trace loss and an efficient accelerated gradient method. Positive-definiteness and sparsity are the most important properties of large precision matrices; our method not only achieves both efficiently, but also enjoys a better convergence rate. Numerical results show that our estimator also performs better than Zhang et al.'s method and the graphical lasso.

Table 1. Comparison of our method with Zhang et al.'s method and graphical lasso.
Funding
This project was supported by National Natural Science Foundation of China (71601003) and the National Statistical Scientific Research Projects (2015LZ54).
Cite this paper
Xia, L., Huang, X.D., Wang, G.P. and Wu, T. (2017) Positive-Definite Sparse Precision Matrix Estimation. Advances in Pure Mathematics, 7, 21-30. http://dx.doi.org/10.4236/apm.2017.71002
References
- 1. Huang, J., Liu, N., Pourahmadi, M. and Liu, L. (2006) Covariance Matrix Selection and Estimation via Penalised Normal Likelihood. Biometrika, 93, 85-98. https://doi.org/10.1093/biomet/93.1.85
- 2. Meinshausen, N. and Bühlmann, P. (2006) High-Dimensional Graphs and Variable Selection with the Lasso. Annals of Statistics, 34, 1436-1462. https://doi.org/10.1214/009053606000000281
- 3. Peng, J., Wang, P., Zhou, N. and Zhu, J. (2009) Partial Correlation Estimation by Joint Sparse Regression Models. Journal of the American Statistical Association, 104, 735-746. https://doi.org/10.1198/jasa.2009.0126
- 4. Yuan, M. (2010) High Dimensional Inverse Covariance Matrix Estimation via Linear Programming. Journal of Machine Learning Research, 11, 2261-2286.
- 5. Cai, T., Liu, W. and Luo, X. (2011) A Constrained Minimization Approach to Sparse Precision Matrix Estimation. Journal of the American Statistical Association, 106, 594-607. https://doi.org/10.1198/jasa.2011.tm10155
- 6. Yuan, M. and Lin, Y. (2007) Model Selection and Estimation in the Gaussian Graphical Model. Biometrika, 94, 19-35. https://doi.org/10.1093/biomet/asm018
- 7. Friedman, J.H., Hastie, T.J. and Tibshirani, R.J. (2008) Sparse Inverse Covariance Estimation with the Graphical Lasso. Biostatistics, 9, 432-441. https://doi.org/10.1093/biostatistics/kxm045
- 8. Witten, D., Friedman, J.H. and Simon, N. (2011) New Insights and Faster Computations for the Graphical Lasso. Journal of Computational and Graphical Statistics, 20, 892-900. https://doi.org/10.1198/jcgs.2011.11051a
- 9. Zhang, T. and Zou, H. (2014) Sparse Precision Matrix Estimation via Lasso Penalized D-Trace Loss. Biometrika, 101, 103-120. https://doi.org/10.1093/biomet/ast059
- 10. Ji, S. and Ye, J. (2009) An Accelerated Gradient Method for Trace Norm Minimization. International Conference on Machine Learning, 58, 457-464. https://doi.org/10.1145/1553374.1553434
- 11. Nesterov, Y. (1983) A Method for Solving a Convex Programming Problem with Convergence Rate O(1/k^2). Soviet Mathematics Doklady, 27, 372-376.
- 12. Nesterov, Y. (2003) Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic.
- 13. Toh, K.C. and Yun, S. (2010) An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Linear Least Squares Problems. Pacific Journal of Optimization, 6, 615-640.
- 14. Tseng, P. (2008) On Accelerated Proximal Gradient Methods for Convex-Concave Optimization. SIAM Journal on Optimization.
- 15. Beck, A. and Teboulle, M. (2009) A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2, 183-202. https://doi.org/10.1137/080716542
- 16. Ravikumar, P., Wainwright, M., Raskutti, G. and Yu, B. (2011) High-Dimensional Covariance Estimation by Minimizing l1-Penalized Log-Determinant Divergence. Electronic Journal of Statistics, 5, 935-980. https://doi.org/10.1214/11-EJS631
Appendix: Proofs of Theorems and Lemmas

Proof of Lemma 1
Since both the trace function and the $\ell_1$ norm are convex functions, we have

$$f(\Theta) \ge f(\tilde{\Theta}) + \langle \Theta - \tilde{\Theta}, \nabla f(\tilde{\Theta}) \rangle \qquad (17)$$

$$\lambda\|\Theta\|_{1,\mathrm{off}} \ge \lambda\big\|q_t(\tilde{\Theta})\big\|_{1,\mathrm{off}} + \big\langle \Theta - q_t(\tilde{\Theta}), \Gamma \big\rangle \qquad (18)$$

where $\Gamma$ is a sub-gradient of $\lambda\|\cdot\|_{1,\mathrm{off}}$ at the point $q_t(\tilde{\Theta})$. Since $F = f + \lambda\|\cdot\|_{1,\mathrm{off}}$, combining the assumption (9) with Equations (17) and (18) gives

$$F(\Theta) - F\big(q_t(\tilde{\Theta})\big) \ge \big\langle \Theta - q_t(\tilde{\Theta}), \nabla f(\tilde{\Theta}) + \Gamma \big\rangle - \frac{1}{2t}\big\|q_t(\tilde{\Theta}) - \tilde{\Theta}\big\|_F^2. \qquad (19)$$

Since $q_t(\tilde{\Theta})$ is a minimizer of $Q_t(\Theta, \tilde{\Theta})$, we have

$$\nabla f(\tilde{\Theta}) + \frac{1}{t}\big(q_t(\tilde{\Theta}) - \tilde{\Theta}\big) + \Gamma = 0. \qquad (20)$$

Substituting (20) into (19), Equation (19) simplifies to Equation (10), which completes the proof.
Proof of Theorem 2
Defining $v_k = F(\Theta_k) - F(\Theta^*)$ and $u_k = \alpha_k \Theta_k - (\alpha_k - 1)\Theta_{k-1} - \Theta^*$, we easily obtain

(22)

since $\alpha_k \ge (k+1)/2$. So

(23)

By applying Lemma 1, we easily obtain

(24)

so

(25)

Applying (23) and (25), we get

(26)

Combining Equation (26) and the relation $\alpha_{k+1}^2 - \alpha_{k+1} = \alpha_k^2$, we easily obtain

(27)

which completes the proof of Theorem 2.