Advances in Pure Mathematics
Vol.07 No.01(2017), Article ID:73803,9 pages
10.4236/apm.2017.71006
A Generalized Elastic Net Regularization with Smoothed Penalty
Sisu Li, Wanzhou Ye
Department of Mathematics, College of Science, Shanghai University, Shanghai, China
Copyright © 2017 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Received: December 6, 2016; Accepted: January 21, 2017; Published: January 24, 2017
ABSTRACT
This paper presents an accurate and efficient algorithm for solving the generalized elastic net regularization problem with a smoothed $\ell_0$ penalty for recovering sparse vectors. Finding the optimal solution of the unconstrained $\ell_0$ minimization problem arising in the recovery of compressively sensed signals is NP-hard. We propose an iterative algorithm to solve this problem and prove that the algorithm is convergent using algebraic methods. The numerical results show the efficiency and the accuracy of the algorithm.
Keywords:
Sparse Vector, Compressed Sensing, Elastic Net Regularization, $\ell_0$ Minimization
1. Introduction
Compressive sensing (CS) has emerged as a very active research field in recent years and has brought about great changes in signal processing, with broad applications such as compressed imaging, analog-to-information conversion, biosensors, and so on [1] [2] [3] . Meanwhile, $\ell_0$-norm based signal recovery is attractive in compressed sensing, as it can facilitate exact recovery of sparse signals with very high probability [4] [5] . Mathematically, the problem can be presented as
$\min_{x \in \mathbb{R}^n} \|x\|_0 \quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma$ (1)
where $A \in \mathbb{R}^{m \times n}$ ($m < n$) is a measurement matrix, $b \in \mathbb{R}^m$ is the measurement vector, $\|\cdot\|_2$ denotes the Euclidean norm, and $\|x\|_0$, formally called the $\ell_0$ quasi-norm, denotes the number of nonzero components of $x$.
With a regularization parameter $\lambda > 0$, we can then solve the unconstrained $\ell_0$ regularization problem
$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_0.$ (2)
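To make the notation concrete, the following minimal NumPy sketch (the vector values are arbitrary and chosen only for illustration) computes the $\ell_0$ quasi-norm, the $\ell_1$ norm used in the relaxation below, and the Euclidean norm of a small sparse vector.

```python
import numpy as np

x = np.array([0.0, 3.0, 0.0, -1.5, 0.0])   # a 2-sparse vector

l0 = np.count_nonzero(x)       # ||x||_0: number of nonzero components -> 2
l1 = np.linalg.norm(x, 1)      # ||x||_1: sum of absolute values       -> 4.5
l2 = np.linalg.norm(x, 2)      # ||x||_2: Euclidean norm               -> about 3.354

print(l0, l1, l2)
```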
A natural approach to this problem is to solve a convex relaxation, the $\ell_1$ regularization problem [6] [7] , as follows:
$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1$ (3)
where $\|x\|_1 = \sum_{i=1}^{n} |x_i|$ is the $\ell_1$ norm. Undoubtedly, the $\ell_1$ regularization has many applications [8] [9] and can be solved by many classical algorithms, such as the iterative soft thresholding algorithm [7] , LARS [10] , etc. An effective regression method, the Lasso [11] , is also closely related to the $\ell_1$ regularization. In 2005, Zou et al. proposed the following model, called the elastic net regularization [12] :
$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 + \lambda_1 \|x\|_1 + \lambda_2 \|x\|_2^2$ (4)
where $\lambda_1, \lambda_2 > 0$ are two regularization parameters. It has been shown in many papers that the elastic net regularization outperforms the Lasso in prediction accuracy. Candès proved that as long as $A$ satisfies the RIP condition with a suitable constant, $\ell_1$ minimization yields a solution equivalent to that of $\ell_0$ minimization [13] . So, in general, the $\ell_1$ regularization problem can be regarded as an approach to the $\ell_0$ regularization. Therefore, we shall consider a generalized elastic net regularization problem with an $\ell_0$ penalty:
$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 + \lambda_1 \|x\|_0 + \lambda_2 \|x\|_2^2.$ (5)
Unfortunately, the $\ell_0$ norm minimization problem is NP-hard [14] . Due to the sparsity of the desired solution $x$, we therefore turn to the following generalized elastic net regularization with a smoothed $\ell_0$ penalty:
$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 + \lambda_1 P_\varepsilon(x) + \lambda_2 \|x\|_2^2$ (6)
where $P_\varepsilon(x)$ denotes the smoothed $\ell_0$ penalty and $\varepsilon > 0$ is a parameter which approaches zero in order to approximate $\|x\|_0$.
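As an illustration only (the exact smoothing function used in (6) is not reproduced in this version of the text), one common choice is $P_\varepsilon(x) = \sum_{i=1}^{n} |x_i| / (|x_i| + \varepsilon)$, whose value tends to $\|x\|_0$ as $\varepsilon \to 0$; the short sketch below checks this numerically.

```python
import numpy as np

def smoothed_l0(x, eps):
    # One common smoothed surrogate of ||x||_0 (an assumed choice, not
    # necessarily the one used in (6)); each term lies in [0, 1) and
    # tends to 1 for nonzero entries as eps -> 0.
    return np.sum(np.abs(x) / (np.abs(x) + eps))

x = np.array([0.0, 3.0, 0.0, -1.5, 0.0])        # ||x||_0 = 2
for eps in [1.0, 1e-1, 1e-3, 1e-6]:
    print(eps, smoothed_l0(x, eps))              # approaches 2 as eps shrinks
```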
In this paper, we propose an iterative algorithm for recovering sparse vectors which substitutes the $\ell_0$ penalty with a smooth approximating function [15] . By adding an $\ell_2$ term, we can prove that the algorithm is convergent using algebraic methods. In the experimental part, we compare the algorithm with the iterative soft thresholding algorithm (IST) [16] , and the results show that the new method performs clearly better.
The rest of this paper is organized as follows. We develop the new algorithm in Section 2 and prove its convergence in Section 3. Experiments on accuracy and efficiency are reported in Section 4. Finally, we conclude this paper in Section 5.
2. Problem Reformulation
The reconstruction method discussed in this paper directly approaches the $\ell_0$ norm and obtains a minimizer through a suitably designed objective function. We denote by $F_\varepsilon(x)$ the objective function of the minimization problem (6):
$F_\varepsilon(x) = \frac{1}{2}\|Ax - b\|_2^2 + \lambda_1 P_\varepsilon(x) + \lambda_2 \|x\|_2^2.$ (7)
Our goal is to minimize this objective function. For any $\lambda_1, \lambda_2 > 0$ and $\varepsilon > 0$, the minimization problem is convex and coercive, and thus it has a solution. So the optimal solution of (7) can be characterized by the optimality condition:
(8)
Then we can present the following iterative algorithm to solve the above minimization problem.
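Algorithm 1 and its update steps (9) and (10) are referred to in the convergence analysis below, but their pseudocode is not reproduced in this version of the text. Purely as an illustration, the following Python sketch shows one plausible reweighted proximal-gradient iteration for a problem of the form (6) under the assumed smoothing $P_\varepsilon(x) = \sum_i |x_i|/(|x_i| + \varepsilon)$; the function name iagenr_l0_sketch, the step size, and the stopping rule are assumptions made for illustration, not the authors' exact IAGENR-L0 scheme.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def iagenr_l0_sketch(A, b, lam1=0.01, lam2=0.01, eps=1e-2, n_iter=500, tol=1e-8):
    # Hypothetical sketch: minimize 0.5*||Ax-b||_2^2 + lam1*P_eps(x) + lam2*||x||_2^2
    # with the assumed smoothing P_eps(x) = sum_i |x_i| / (|x_i| + eps).
    # Each outer step replaces P_eps by a weighted l1 majorizer and takes one
    # proximal-gradient step on the resulting convex surrogate.
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2 + 2.0 * lam2    # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        w = eps / (np.abs(x) + eps) ** 2           # reweighting: derivative of |t|/(|t|+eps)
        grad = A.T @ (A @ x - b) + 2.0 * lam2 * x  # gradient of the smooth part
        x_new = soft_threshold(x - grad / L, lam1 * w / L)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Majorizing the concave penalty by a weighted $\ell_1$ term makes every subproblem convex and coercive, which mirrors the structure of the discussion around (7) and (8).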
3. Convergence of the Algorithm
In this section, we prove that the algorithm is convergent. We start from Lemma 1 [17] , whose inequality can be deduced directly by using the mean value theorem.
Lemma 1. Given , then the inequality
(11)
holds for any .
Proof. We first denote , then by the mean value theorem, we
have
(12)
So we have
(13)
Thus, we can simplify the inequality accordingly. The inequality (11) holds in every case. The next lemma proves that the iterative sequence $\{x^k\}$ drives the objective function downhill.
Lemma 2. For any $\lambda_1, \lambda_2 > 0$ and $\varepsilon > 0$, let $x^{k+1}$ be the solution of (9) for the current iterate $x^k$. Then we have
$F_\varepsilon(x^{k+1}) \le F_\varepsilon(x^k).$ (14)
Furthermore,
$F_\varepsilon(x^k) - F_\varepsilon(x^{k+1}) \ge c\,\|x^{k+1} - x^k\|_2^2,$ (15)
where $c$ is a positive constant that depends on $\lambda_2$.
Proof.
(16)
Using (9), the last term in (16) can be simplified to
(17)
Substituting (17) into (16) and using (11),
(18)
Since this holds for any $\lambda_1, \lambda_2 > 0$ and $\varepsilon > 0$, from (18) we can obtain the results of (14) and (15), with the constant $c$ determined by $\lambda_2$.
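The intermediate steps (16)-(18) are not reproduced here, so the following is only a heuristic sketch of why the constant in (15) involves $\lambda_2$, under the assumption that $x^{k+1}$ minimizes a surrogate $G_k$ of $F_\varepsilon$ whose remaining terms are convex, which contains the term $\lambda_2\|x\|_2^2$, and which satisfies $F_\varepsilon \le G_k$ with $G_k(x^k) = F_\varepsilon(x^k)$. Then $G_k$ is $2\lambda_2$-strongly convex, and

$$F_\varepsilon(x^k) - F_\varepsilon(x^{k+1}) \;\ge\; G_k(x^k) - G_k(x^{k+1}) \;\ge\; \lambda_2\,\|x^k - x^{k+1}\|_2^2,$$

which is a sufficient-decrease inequality of the same form as (15).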
Lemma 3. ( [18] , Theorem 3.1) Let a polynomial system $P(x) = (p_1(x), \ldots, p_n(x)) = 0$ be given, and let $Q(x) = 0$ be its corresponding highest-ordered system of equations. If $Q(x) = 0$ has only the trivial solution $x = 0$, then $P(x) = 0$ has exactly $d = d_1 d_2 \cdots d_n$ solutions, counted with multiplicity, where $d_j$ is the degree of $p_j$.
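As a small illustrative example (not taken from the original paper): for the system

$$p_1(x, y) = x^2 + y - 3 = 0, \qquad p_2(x, y) = x + y^2 - 5 = 0,$$

the highest-ordered system is $x^2 = 0$, $y^2 = 0$, which has only the trivial solution $(0, 0)$; hence, by Lemma 3, the system has exactly $d_1 d_2 = 2 \cdot 2 = 4$ solutions in $\mathbb{C}^2$, counted with multiplicity, and in particular only finitely many. It is this finiteness that the proof below exploits.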
Theorem 1. For any $\lambda_1, \lambda_2 > 0$ and $\varepsilon > 0$, the iterative solutions $\{x^k\}$ generated by (9) converge to a point $x^*$, that is, $x^k \to x^*$ as $k \to \infty$, and $x^*$ is a critical point of (6).
Proof. We first need the sequence $\{x^k\}$ to be bounded; this follows from the descent property (14) together with the coercivity of $F_\varepsilon$. Assume that $\{x^{k_j}\}$ is a convergent subsequence of $\{x^k\}$ with limit point $x^*$. By (15), we know that the sequence $\{x^{k_j+1}\}$ also converges to $x^*$. Replacing $x^k$ and $x^{k+1}$ with $x^{k_j}$ and $x^{k_j+1}$ in (10) and letting $j \to \infty$ yields
(19)
This implies that the limit point of any convergent subsequence of $\{x^k\}$ is a critical point satisfying (8). In order to prove the convergence of the whole sequence $\{x^k\}$, we need to show that the limit point set $M$, which contains all the limit points of convergent subsequences of $\{x^k\}$, is a finite set. Hence, we have to prove that the following equation has finitely many solutions:
(20)
We can rewrite (20) as follows:
(21)
It is obvious that the coefficient matrix $A^{\top}A + 2\lambda_2 I$ in (21) is positive definite, where $I$ is the $n \times n$ identity matrix. Then (21) can be rewritten as the following equation:
(22)
where $B$ is an $n \times n$ diagonal matrix whose diagonal entries are determined by the smoothed penalty terms. With the corresponding notation, we then have
(23)
If we prove that (23) has finitely many solutions, then the system (22) also has finitely many solutions. According to Lemma 3, if we show that the highest-ordered system of (23), written as (24) below, has only the trivial solution, then it is easy to conclude that Equation (23) has finitely many solutions.
(24)
We now prove that the system (24) has only the trivial solution. Assume, to the contrary, that (24) admits a solution with at least one nonzero component. Then we have
(25)
where $C$ is a leading principal submatrix of the positive definite coefficient matrix; therefore the matrix $C$ is positive definite as well. So the corresponding components must all be zero, which contradicts the assumption that the solution has a nonzero component.
Therefore, the system (24) has only the trivial solution, and so Equation (20) has finitely many solutions. Since every limit point of a convergent subsequence of $\{x^k\}$ satisfies Equation (20), and (20) has finitely many solutions, the limit point set $M$ is a finite set. Combining this with the fact that $\|x^{k+1} - x^k\|_2 \to 0$ as $k \to \infty$, which follows from (15) and the boundedness of $F_\varepsilon$ from below, we conclude that the whole sequence $\{x^k\}$ is convergent and its limit is a critical point of problem (7).
4. Numerical Experiments
In this section, we present some numerical experiments to show the efficiency and the accuracy of Algorithm 1 for sparse vector recovery. We compare the performance of Algorithm 1 with IST [3] . In the test, the measurement matrix $A$ was of size $m \times n$ with $m < n$. All the experiments were performed in MATLAB, and all the experimental results were averaged over 100 independent trials for various sparsity levels $s$.
The experimental results consist of two parts: the first focuses on the comparison of the two algorithms in accuracy; the other focuses on their efficiency. In the experiments, the mean squared error (MSE) between the original vector $x^*$ and the recovered vector $\hat{x}$ is recorded as
$\mathrm{MSE} = \frac{1}{N}\|\hat{x} - x^*\|_2^2.$ (26)
4.1. Comparison on the Accuracy
The matrix $A$ and the nonzero entries of the original sparse vector were generated randomly according to the standard Gaussian distribution; the original vector has length $N$ and sparsity $s$, where $s$ varies as 2, 4, 6, 8, ..., 48. The locations of the nonzero elements were generated randomly. The regularization parameters $\lambda_1$ and $\lambda_2$ were set to fixed values, and all the other parameters of the two algorithms were set to be the same. A sketch of this test setup is given below. The results are shown in Figure 1.
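The following Python/NumPy sketch reproduces this kind of test instance; the concrete dimensions, the call to the hypothetical solver iagenr_l0_sketch from Section 2, and the use of noiseless measurements are assumptions made only for illustration, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, s = 128, 512, 10            # assumed problem size and sparsity level
A = rng.standard_normal((m, n))   # measurement matrix with standard Gaussian entries

x_true = np.zeros(n)              # s-sparse ground truth
support = rng.choice(n, size=s, replace=False)   # random locations of the nonzeros
x_true[support] = rng.standard_normal(s)         # Gaussian nonzero values

b = A @ x_true                    # (noiseless) measurements

x_hat = iagenr_l0_sketch(A, b)    # hypothetical solver sketched in Section 2

mse = np.mean((x_hat - x_true) ** 2)             # MSE as in (26)
print("MSE:", mse)
```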
Figure 1 shows that the convergence error (MSE) of the two algorithms eventually stabilizes for the different sparsity levels $s$. We can also observe that the MSE of IAGENR-L0 is lower than that of IST, which demonstrates that our algorithm is not only convergent but also outperforms IST in accuracy.
4.2. Comparison on the Efficiency
In this subsection, we focus on the speed of the two algorithms. We conduct various experiments to test the effectiveness of the proposed algorithm. Table 1 reports the numerical results of the two algorithms for recovering vectors at different sparsity levels. From the results, we can see that IAGENR-L0 performs much better than IST in both efficiency and accuracy.
Figure 1. Comparison of the convergence error for both IAGENR-L0 and IST.
Table 1. The iteration time of IAGENR-L0 and IST for different sparsity levels.
5. Conclusion
In this paper, we consider an iterative algorithm for solving the generalized elastic net regularization problem with a smoothed $\ell_0$ penalty for recovering sparse vectors. A detailed proof of convergence of the iterative algorithm is given in Section 3 using algebraic methods. Additionally, the numerical experiments in Section 4 show that our iterative algorithm is convergent and performs better than IST at recovering sparse vectors.
References
- 1. Donoho, D.L. (2006) Compressed Sensing. IEEE Transactions on Information Theory, 52, 1289-1306.
- 2. Candès, E.J., Romberg, J. and Tao, T. (2006) Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Transactions on Information Theory, 52, 489-509.
- 3. Duarte, M.F. and Eldar, Y.C. (2011) Structured Compressed Sensing: From Theory to Applications. IEEE Transactions on Signal Processing, 59, 4053-4085.
- 4. Lu, Z. (2014) Iterative Hard Thresholding Methods for $\ell_0$ Regularized Convex Cone Programming. Mathematical Programming, 147, 125-154. https://doi.org/10.1007/s10107-013-0714-4
- 5. Candès, E.J. and Tao, T. (2005) Decoding by Linear Programming. IEEE Transactions on Information Theory, 51, 4203-4215.
- 6. Chen, S.S., Donoho, D.L. and Saunders, M.A. (1998) Atomic Decomposition by Basis Pursuit. SIAM Journal on Scientific Computing, 20, 33-61. https://doi.org/10.1137/S1064827596304010
- 7. Daubechies, I., Defrise, M. and De Mol, C. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.
- 8. Zou, H. (2006) The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101, 1418-1429. https://doi.org/10.1198/016214506000000735
- 9. Meinshausen, N. and Yu, B. (2009) Lasso-Type Recovery of Sparse Representations for High-Dimensional Data. Annals of Statistics, 37, 246-270. https://doi.org/10.1214/07-AOS582
- 10. Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004) Least Angle Regression. The Annals of Statistics, 32, 407-451.
- 11. Tibshirani, R. (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58, 267-288.
- 12. Zou, H. and Hastie, T. (2005) Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society Series B (Statistical Methodology), 67, 301-320.
- 13. Kordas, G. (2015) A Neurodynamic Optimization Method for Recovery of Compressive Sensed Signals with Globally Converged Solution Approximating to L0 Minimization. IEEE Transactions on Neural Networks and Learning Systems, 26, 1363-1374.
- 14. Natarajan, B.K. (1995) Sparse Approximate Solutions to Linear Systems. SIAM Journal on Computing, 24, 227-234. https://doi.org/10.1137/S0097539792240406
- 15. Xiao, Y.H. and Song, H.N. (2012) An Inexact Alternating Directions Algorithm for Constrained Total Variation Regularized Compressive Sensing Problems. Journal of Mathematical Imaging and Vision, 44, 114-127.
- 16. Daubechies, I., Defrise, M. and De Mol, C. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.
- 17. Lai, M.J., Xu, Y.Y. and Yin, W.T. (2013) Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed Lq Minimization. SIAM Journal on Numerical Analysis, 51, 927-957.
- 18. Garcia, C.B. and Li, T.Y. (1980) On the Number of Solutions to Polynomial Systems of Equations. SIAM Journal on Numerical Analysis, 17, 540-546. https://doi.org/10.1137/0717046