Advances in Pure Mathematics
Vol.07 No.01(2017), Article ID:73803,9 pages
10.4236/apm.2017.71006

A Generalized Elastic Net Regularization with Smoothed $l_0$ Penalty

Sisu Li, Wanzhou Ye

Department of Mathematics, College of Science, Shanghai University, Shanghai, China

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 6, 2016; Accepted: January 21, 2017; Published: January 24, 2017

ABSTRACT

This paper presents an accurate and efficient algorithm for solving the generalized elastic net regularization problem with a smoothed $l_0$ penalty for recovering sparse vectors. Finding the optimal solution of the unconstrained $l_0$ minimization problem arising in the recovery of compressively sensed signals is NP-hard. We propose an iterative algorithm to solve this problem and prove that the algorithm is convergent using algebraic methods. Numerical results show the efficiency and accuracy of the algorithm.

Keywords:

Sparse Vector, Compressed Sensing, Elastic Net Regularization, $l_0$ Minimization

1. Introduction

Compressive sensing (CS) has emerged as a very active research field and has brought about great changes in signal processing in recent years, with broad applications such as compressive imaging, analog-to-information conversion, biosensors, and so on [1] [2] [3]. Meanwhile, $l_0$-norm-based signal recovery is attractive in compressed sensing because it can facilitate exact recovery of sparse signals with very high probability [4] [5]. Mathematically, the problem can be stated as

$$\min_{x\in\mathbb{R}^N}\ \|x\|_0, \quad \text{subject to } Ax=y, \qquad (1)$$

where $y\in\mathbb{R}^m$, $A\in\mathbb{R}^{m\times N}$ is a measurement matrix, $\|\cdot\|_2$ denotes the Euclidean norm, and $\|x\|_0$, formally called the $l_0$ quasi-norm, denotes the number of nonzero components of $x=(x_1,x_2,\dots,x_N)^T\in\mathbb{R}^N$.

We can instead solve the unconstrained $l_0$ regularization problem

$$\min_{x\in\mathbb{R}^N}\left\{\frac{1}{2}\|Ax-y\|_2^2+\lambda\|x\|_0\right\}, \qquad (2)$$

where $\lambda>0$ is a regularization parameter.

A natural approach to this problem is to solve a convex relaxation, the $l_1$ regularization problem [6] [7], as follows:

$$\min_{x\in\mathbb{R}^N}\left\{\frac{1}{2}\|Ax-y\|_2^2+\lambda\|x\|_1\right\}, \qquad (3)$$

where $\|x\|_1=\sum_{i=1}^N |x_i|$ is the $l_1$ norm. Undoubtedly, $l_1$ regularization has many applications [8] [9] and can be solved by many classic algorithms, such as the iterative soft thresholding algorithm [7] and LARS [10]. An effective regression method, the Lasso [11], is also closely related to $l_1$ regularization. In 2005, Zou et al. proposed the following model, called the elastic net regularization [12]:

$$\min_{x\in\mathbb{R}^N}\left\{\frac{1}{2}\|Ax-y\|_2^2+\lambda_1\|x\|_1+\lambda_2\|x\|_2^2\right\}, \qquad (4)$$

where $\lambda_1,\lambda_2>0$ are two regularization parameters. It has been shown in many papers that the elastic net regularization outperforms the Lasso in prediction accuracy. Candès proved that, as long as $A$ satisfies the RIP condition with a suitable constant, $l_1$ minimization yields the same solution as $l_0$ minimization [13]. So, in general, the $l_1$ regularization problem can be regarded as an approach to the $l_0$ regularization problem. Therefore, we consider a generalized elastic net regularization problem with an $l_0$ penalty:

$$\min_{x\in\mathbb{R}^N}\left\{\frac{1}{2}\|Ax-y\|_2^2+\lambda_1\|x\|_0+\lambda_2\|x\|_2^2\right\}. \qquad (5)$$

Unfortunately, the $l_0$ norm minimization problem is NP-hard [14]. Due to the sparsity of the solution $x$, we instead consider the following generalized elastic net regularization with a smoothed $l_0$ penalty:

$$\min_{x\in\mathbb{R}^N}\left\{\frac{1}{2}\|Ax-y\|_2^2+\lambda_1\|x\|_{0,\delta}+\lambda_2\|x\|_2^2\right\}, \qquad (6)$$

where $\|x\|_{0,\delta}=\sum_{i=1}^N\frac{x_i^2}{x_i^2+\delta}$ and $\delta>0$ is a parameter which approaches zero in order to approximate $\|x\|_0$.
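As a small numerical illustration (not taken from the paper), the smoothed penalty tends to $\|x\|_0$ as $\delta\to 0$; for a hand-picked vector with two nonzero entries:

```python
# Illustrative check: the smoothed l0 penalty approaches ||x||_0 as delta -> 0.
import numpy as np

def smoothed_l0(x, delta):
    """Smoothed l0 penalty: sum_i x_i^2 / (x_i^2 + delta)."""
    return np.sum(x**2 / (x**2 + delta))

x = np.array([0.0, 3.0, 0.0, -0.5, 0.0])   # ||x||_0 = 2
for delta in (1e-1, 1e-3, 1e-6):
    print(delta, smoothed_l0(x, delta))
# The printed values approach 2.0 as delta shrinks.
```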

In this paper, we propose an iterative algorithm for recovering sparse vectors, which substitutes the $l_0$ penalty with a smooth surrogate function [15]. By adding an $l_2$ term, we can prove that the algorithm is convergent using algebraic methods. In the experimental part, we compare the algorithm with the $l_1$ iterative soft thresholding algorithm (IST) [16]. The results show that the new method performs remarkably well.

The rest of this paper is organized as follows. We develop the new algorithm in Section 2 and prove its convergence in Section 3. Experiments on accuracy and efficiency are reported in Section 4. Finally, we conclude this paper in Section 5.

2. Problem Reformulation

The reconstruction method discussed in this paper directly approaches the $l_0$ norm and obtains its minimal solution through a suitably designed objective function. We denote by $C_\delta(x,\lambda_1,\lambda_2)$ the objective function of the minimization problem (6):

$$C_\delta(x,\lambda_1,\lambda_2)=\frac{1}{2}\|Ax-y\|_2^2+\lambda_1\sum_{i=1}^N\frac{x_i^2}{x_i^2+\delta}+\lambda_2\|x\|_2^2. \qquad (7)$$

Our goal is to minimize this objective function. For any $\delta>0$ and $\lambda_1,\lambda_2>0$, the objective function is coercive, so the minimization problem has a solution. The optimal solution of (7) can then be characterized by the optimality condition

$$A^T(A\hat{x}-y)+\left[\frac{2\lambda_1\delta\,\hat{x}_i}{(\hat{x}_i^2+\delta)^2}\right]_{1\le i\le N}+2\lambda_2\hat{x}=0. \qquad (8)$$

Then we can present the following iterative algorithm to solve the above minimization problem.
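As a minimal illustrative sketch (an assumption on our part, not necessarily the authors' Algorithm 1 or the exact update rule (9)), one natural reading of the iteration, consistent with the optimality condition (8) and with the simplification used later in (17), is to freeze the smoothed-$l_0$ weights at the previous iterate $x^k$ and solve the resulting linear system $(A^TA+2\lambda_2 I_N+2\lambda_1\delta D_k)x^{k+1}=A^Ty$ with $D_k=\mathrm{diag}\big(1/((x_i^k)^2+\delta)^2\big)$. The function names, starting point, and stopping rule below are illustrative choices, not the authors' code.

```python
# Sketch of the implied fixed-point iteration: the weights ((x_i^k)^2 + delta)^-2
# are frozen at the previous iterate, so each step solves a linear system in x^{k+1}.
import numpy as np

def objective(A, y, x, lam1, lam2, delta):
    """C_delta(x, lam1, lam2) from (7)."""
    return (0.5 * np.sum((A @ x - y) ** 2)
            + lam1 * np.sum(x ** 2 / (x ** 2 + delta))
            + lam2 * np.sum(x ** 2))

def iagenr_l0(A, y, lam1=1e-3, lam2=1e-5, delta=1e-6, max_iter=500, tol=1e-10):
    """One possible reading of the iteration implied by (8) and (17)."""
    N = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    x = np.zeros(N)
    for _ in range(max_iter):
        # diagonal of 2*lam1*delta*D_k, evaluated at the previous iterate
        d = 2.0 * lam1 * delta / (x ** 2 + delta) ** 2
        x_new = np.linalg.solve(AtA + 2.0 * lam2 * np.eye(N) + np.diag(d), Aty)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

In line with Lemma 2 below, the value of `objective` should be non-increasing along iterates produced by such an update, which provides a simple sanity check for any implementation.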

3. Convergence of the Algorithm

In this section, we prove that the algorithm is convergent. We start from Lemma 1 [17], whose inequality can be deduced directly by the mean value theorem.

Lemma 1. Given $\delta>0$, the inequality

$$\frac{x^2}{x^2+\delta}-\frac{y^2}{y^2+\delta}\ \ge\ \frac{2\delta(x-y)y}{(x^2+\delta)^2}+\frac{\delta(x-y)^2}{(x^2+\delta)^2} \qquad (11)$$

holds for any $x,y\in\mathbb{R}$.

Proof. Define $f(t)=\frac{t}{t+\delta}$ for $t\ge 0$, so that $f(x^2)=\frac{x^2}{x^2+\delta}$. By the mean value theorem,

$$f(x^2)-f(y^2)=f'(\xi)(x^2-y^2)=\frac{\delta(x^2-y^2)}{(\xi+\delta)^2}, \quad \text{where } \xi \text{ lies between } x^2 \text{ and } y^2. \qquad (12)$$

Comparing the denominators in the two cases $x^2\ge y^2$ and $x^2<y^2$ shows that

$$\frac{\delta(x^2-y^2)}{(\xi+\delta)^2}\ \ge\ \frac{\delta(x^2-y^2)}{(x^2+\delta)^2}. \qquad (13)$$

Since $x^2-y^2=2(x-y)y+(x-y)^2$, the right-hand side of (13) equals

$$\frac{2\delta(x-y)y}{(x^2+\delta)^2}+\frac{\delta(x-y)^2}{(x^2+\delta)^2},$$

which gives (11).

The case analysis above shows that inequality (11) holds no matter whether $x^2>y^2$, $x^2<y^2$ or $x^2=y^2$. The next lemma shows that the sequence $x^k$ drives the function $C_\delta(x,\lambda_1,\lambda_2)$ downhill.
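Before moving on, as a quick illustrative sanity check (not part of the paper), inequality (11) can also be verified numerically on random pairs $(x,y)$:

```python
# Numerical spot-check of inequality (11) on random pairs; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1
x, y = rng.normal(size=100_000), rng.normal(size=100_000)
lhs = x**2 / (x**2 + delta) - y**2 / (y**2 + delta)
rhs = (2 * delta * (x - y) * y + delta * (x - y)**2) / (x**2 + delta)**2
assert np.all(lhs >= rhs - 1e-10)   # (11) holds for every sampled pair
```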

Lemma 2. For any $\delta>0$ and $\lambda_1,\lambda_2>0$, let $x^{k+1}$ be the solution of (9) for $k=1,2,3,\dots$. Then

$$\|Ax^k-Ax^{k+1}\|_2^2\ \le\ 2\left(C_\delta(x^k,\lambda_1,\lambda_2)-C_\delta(x^{k+1},\lambda_1,\lambda_2)\right). \qquad (14)$$

Furthermore,

$$\|x^k-x^{k+1}\|_2^2\ \le\ c\left(C_\delta(x^k,\lambda_1,\lambda_2)-C_\delta(x^{k+1},\lambda_1,\lambda_2)\right), \qquad (15)$$

where $c$ is a positive constant that depends on $\lambda_2$.

Proof.

$$\begin{aligned}
C_\delta(x^k,\lambda_1,\lambda_2)-C_\delta(x^{k+1},\lambda_1,\lambda_2)
&=\lambda_1\sum_{i=1}^N\left(\frac{(x_i^k)^2}{(x_i^k)^2+\delta}-\frac{(x_i^{k+1})^2}{(x_i^{k+1})^2+\delta}\right)+\lambda_2\left(\|x^k\|_2^2-\|x^{k+1}\|_2^2\right)+\frac{1}{2}\left(\|Ax^k-y\|_2^2-\|Ax^{k+1}-y\|_2^2\right)\\
&=\lambda_1\sum_{i=1}^N\left(\frac{(x_i^k)^2}{(x_i^k)^2+\delta}-\frac{(x_i^{k+1})^2}{(x_i^{k+1})^2+\delta}\right)+\frac{1}{2}\|Ax^k-Ax^{k+1}\|_2^2+\lambda_2\|x^k-x^{k+1}\|_2^2\\
&\quad+2\lambda_2(x^k-x^{k+1})^T x^{k+1}+(Ax^k-Ax^{k+1})^T(Ax^{k+1}-y). \qquad (16)
\end{aligned}$$

Using (9), the last term in (16) can be simplified as

$$\begin{aligned}
(Ax^k-Ax^{k+1})^T(Ax^{k+1}-y)
&=(x^k-x^{k+1})^T A^T(Ax^{k+1}-y)\\
&=-(x^k-x^{k+1})^T\left(2\lambda_2 x^{k+1}+\left[\frac{2\lambda_1\delta\,x_i^{k+1}}{((x_i^k)^2+\delta)^2}\right]_{1\le i\le N}\right)\\
&=-\sum_{i=1}^N\frac{2\lambda_1\delta\,x_i^{k+1}(x_i^k-x_i^{k+1})}{((x_i^k)^2+\delta)^2}-2\lambda_2(x^k-x^{k+1})^T x^{k+1}. \qquad (17)
\end{aligned}$$

Substituting (17) into (16) and using (11),

$$\begin{aligned}
C_\delta(x^k,\lambda_1,\lambda_2)-C_\delta(x^{k+1},\lambda_1,\lambda_2)
&=\lambda_1\sum_{i=1}^N\left(\frac{(x_i^k)^2}{(x_i^k)^2+\delta}-\frac{(x_i^{k+1})^2}{(x_i^{k+1})^2+\delta}-\frac{2\delta\,x_i^{k+1}(x_i^k-x_i^{k+1})}{((x_i^k)^2+\delta)^2}\right)\\
&\quad+\frac{1}{2}\|Ax^k-Ax^{k+1}\|_2^2+\lambda_2\|x^k-x^{k+1}\|_2^2\\
&\ge\sum_{i=1}^N\frac{\delta\lambda_1(x_i^k-x_i^{k+1})^2}{((x_i^k)^2+\delta)^2}+\frac{1}{2}\|Ax^k-Ax^{k+1}\|_2^2+\lambda_2\|x^k-x^{k+1}\|_2^2. \qquad (18)
\end{aligned}$$

Since $\sum_{i=1}^N\frac{\delta\lambda_1(x_i^k-x_i^{k+1})^2}{((x_i^k)^2+\delta)^2}\ge 0$ for any $x^k$ and $x^{k+1}$, (18) yields (14) and (15) with $c=\frac{1}{\lambda_2}$.

Lemma 3. ([18], Theorem 3.1) Let $P(z,\bar{w})=0$ be given, and let $Q(z,(\bar{a}),(\bar{c}))=0$ be its corresponding highest-order system of equations. If $Q(z,(\bar{a}),(\bar{c}))=0$ has only the trivial solution $z=0$, then $P(z,\bar{w})=0$ has $\beta=\prod_{i=1}^m q_i$ solutions, where $q_i$ is the degree of $P_i$.

Theorem 1. For any $\delta>0$ and $\lambda_1,\lambda_2>0$, the iterative solutions $x^k$ in (9) converge to some $x^*$, that is, $\lim_{k\to\infty}x^k=x^*$, and $x^*$ is a critical point of (6).

Proof. We first note that the sequence $x^k$ is bounded: by Lemma 2 the values $C_\delta(x^k,\lambda_1,\lambda_2)$ are nonincreasing, and $C_\delta$ is coercive, so $x^k$ stays in a bounded set and therefore has convergent subsequences. Let $x^{k_i}$ be a convergent subsequence of $x^k$ with limit point $x^*$. Since $C_\delta(x^k,\lambda_1,\lambda_2)$ is nonincreasing and bounded below, (15) implies $\|x^k-x^{k+1}\|_2\to 0$, so the sequence $x^{k_i+1}$ also converges to $x^*$. Replacing $x^k,x^{k+1}$ with $x^{k_i},x^{k_i+1}$ in (10) and letting $i\to\infty$ yields

$$\left[\frac{2\lambda_1\delta\,x_i^*}{((x_i^*)^2+\delta)^2}\right]_{1\le i\le N}+A^T(Ax^*-y)+2\lambda_2 x^*=0. \qquad (19)$$

This implies that the limit of every convergent subsequence of $x^k$ is a critical point, i.e., it satisfies (8). To prove the convergence of the whole sequence $x^k$, we show that the limit point set $M$, consisting of all limits of convergent subsequences of $x^k$, is a finite set. Thus it suffices to prove that the following equation has finitely many solutions:

$$\left[\frac{2\lambda_1\delta\,u_i}{(u_i^2+\delta)^2}\right]_{1\le i\le N}+A^T(Au-y)+2\lambda_2 u=0, \qquad (20)$$

where $u=(u_1,u_2,\dots,u_N)^T\in\mathbb{R}^N$. We can rewrite (20) as follows:

$$\left[\frac{2\lambda_1\delta\,u_i}{(u_i^2+\delta)^2}\right]_{1\le i\le N}+(A^TA+2\lambda_2 I_N)u-A^Ty=0. \qquad (21)$$

It is obvious that $A^TA+2\lambda_2 I_N$ is a positive definite matrix, where $A^Ty\in\mathbb{R}^N$ and $I_N$ is the $N\times N$ identity matrix. Then (21) can be rewritten as the following equation:

$$2\lambda_1\delta\,u+B\left((A^TA+2\lambda_2 I_N)u-A^Ty\right)=0, \qquad (22)$$

where $B$ is an $N\times N$ diagonal matrix with diagonal entries $B_{ii}=(u_i^2+\delta)^2$, $i=1,2,\dots,N$. We denote $A^TA+2\lambda_2 I_N=(a_{ij})_{N\times N}$ and $A^Ty=(q_1,q_2,\dots,q_N)^T$. Then

$$\begin{cases}
2\lambda_1\delta\,u_1+(a_{11}u_1+a_{12}u_2+\cdots+a_{1N}u_N-q_1)(u_1^2+\delta)^2=0,\\
2\lambda_1\delta\,u_2+(a_{21}u_1+a_{22}u_2+\cdots+a_{2N}u_N-q_2)(u_2^2+\delta)^2=0,\\
\qquad\vdots\\
2\lambda_1\delta\,u_N+(a_{N1}u_1+a_{N2}u_2+\cdots+a_{NN}u_N-q_N)(u_N^2+\delta)^2=0. \qquad (23)
\end{cases}$$

Note that (22) and (23) are simply the componentwise form of (20), so (20) has finitely many solutions if and only if (23) does. According to Lemma 3, if the highest-order system associated with (23), namely (24) below, has only the trivial solution, then (23) has finitely many solutions.

$$\begin{cases}
(a_{11}u_1+a_{12}u_2+\cdots+a_{1N}u_N)\,u_1^4=0,\\
(a_{21}u_1+a_{22}u_2+\cdots+a_{2N}u_N)\,u_2^4=0,\\
\qquad\vdots\\
(a_{N1}u_1+a_{N2}u_2+\cdots+a_{NN}u_N)\,u_N^4=0. \qquad (24)
\end{cases}$$

We now prove that the system (24) has only the trivial solution. Suppose, to the contrary, that (after reordering the components if necessary) $u=(u_1,u_2,\dots,u_s,0,\dots,0)^T\in\mathbb{R}^N$ is a nonzero solution of (24) with $u_i\ne 0$ for $i=1,2,\dots,s$, $1\le s\le N$. Since $u_i^4\ne 0$ for $i\le s$, the first $s$ equations of (24) give

$$C\tilde{u}=0, \qquad (25)$$

where $\tilde{u}=(u_1,u_2,\dots,u_s)^T$ and $C=(a_{ij})_{s\times s}$ is the $s\times s$ leading principal submatrix of $A^TA+2\lambda_2 I_N$. Since $A^TA+2\lambda_2 I_N$ is positive definite, the matrix $C$ is positive definite as well. Hence $u_i=0$ for $i=1,2,\dots,s$, which contradicts the assumption that $u_i\ne 0$ for $i=1,2,\dots,s$.

Therefore, the system (24) has only the trivial solution, and so Equation (20) has finitely many solutions. Since every limit point of a convergent subsequence of $x^k$ satisfies Equation (20), the limit point set $M$ is a finite set. Combining this with $\|x^{k+1}-x^k\|_2\to 0$ as $k\to\infty$, we conclude that the sequence $x^k$ is convergent and that its limit $x^*$ is a critical point of problem (6).
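As a remark (a bound implicit in Lemma 3 rather than stated in the paper): each equation in (23) is a polynomial of degree 5 in $u$, so the count given by Lemma 3 is

$$\beta=\prod_{i=1}^N q_i=5^N,$$

i.e., (20) has at most $5^N$ isolated solutions; only finiteness is needed for the argument above.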

4. Numerical Experiments

In this section, we present some numerical experiments to show the efficiency and accuracy of Algorithm 1 for sparse vector recovery. We compare the performance of Algorithm 1 with the $l_1$ IST algorithm [16]. In the tests, the matrix $A$ has size $100\times 250$, i.e., $m=100$ and $N=250$. All experiments were performed in Matlab, and all results were averaged over 100 independent trials for various sparsity levels $s$.

The experimental results consist of two parts: the first focuses on comparing the accuracy of the two algorithms; the second focuses on their efficiency. In the experiments, the mean squared error between the original vector and the recovered one is recorded as

$$\mathrm{MSE}=\|x^k-x_0\|_2^2/N. \qquad (26)$$

4.1. Comparison on the Accuracy

The matrix $A\in\mathbb{R}^{100\times 250}$ and the original $s$-sparse vector $x_0\in\mathbb{R}^{250}$ were generated randomly according to the standard Gaussian distribution, with the sparsity $s$ varying over 2, 4, 6, 8, ..., 48. The locations of the nonzero elements were chosen at random. The regularization parameters were set as $\delta=10^{-6}$, $\lambda_1=10^{-3}$ and $\lambda_2=10^{-5}$. All other parameters of the two algorithms were set to be the same. The results are shown in Figure 1.
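A minimal sketch of this experimental setup (reusing the hypothetical `iagenr_l0` solver sketched in Section 2; the data generation follows the description above, while the solver itself is our assumption, not the authors' code):

```python
# Sketch of one trial of the recovery experiment described above.
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 100, 250, 10                       # measurement size and one sparsity level

A = rng.standard_normal((m, N))              # Gaussian measurement matrix
x0 = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x0[support] = rng.standard_normal(s)         # s-sparse Gaussian signal
y = A @ x0

# Parameters as reported in Section 4.1; iagenr_l0 is the sketch from Section 2.
x_hat = iagenr_l0(A, y, lam1=1e-3, lam2=1e-5, delta=1e-6)
mse = np.sum((x_hat - x0) ** 2) / N          # MSE as in (26)
print(f"s = {s}, MSE = {mse:.3e}")
```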

Figure 1 shows that the convergence error MSE of the two algorithms eventually stabilizes for the different sparsity levels $s$. We can also observe that the MSE of IAGENR-L0 is lower than that of IST, which demonstrates that our algorithm is not only convergent but also outperforms IST in accuracy.

4.2. Comparison on the Efficiency

In this subsection, we focus on the speed of the two algorithms. We conducted various experiments to test the effectiveness of the proposed algorithm. Table 1 reports the numerical results of the two algorithms for recovering vectors at different sparsity levels. From the results, we can see that IAGENR-L0 performs much better than IST in both efficiency and accuracy.

Figure 1. Comparison of the convergence error $\mathrm{MSE}=\|x^k-x_0\|_2^2/N$ for both IAGENR-L0 and IST.

Table 1. The iteration time of IAGENR-L0 and IST for different sparsity levels.

5. Conclusion

In this paper, we considered an iterative algorithm for solving the generalized elastic net regularization problem with a smoothed $l_0$ penalty for recovering sparse vectors. A detailed proof of the convergence of the iterative algorithm was given in Section 3 using algebraic methods. Additionally, the numerical experiments in Section 4 show that our iterative algorithm is convergent and performs better than IST at recovering sparse vectors.

Cite this paper

Li, S.S. and Ye, W.Z. (2017) A Generalized Elastic Net Regularization with Smoothed $l_0$ Penalty. Advances in Pure Mathematics, 7, 66-74. http://dx.doi.org/10.4236/apm.2017.71006

References

1. Donoho, D.L. (2006) Compressed Sensing. IEEE Transactions on Information Theory, 52, 1289-1306.

2. Candès, E.J., Romberg, J. and Tao, T. (2006) Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Transactions on Information Theory, 52, 489-509.

3. Duarte, M.F. and Eldar, Y.C. (2011) Structured Compressed Sensing: From Theory to Applications. IEEE Transactions on Signal Processing, 59, 4053-4085.

4. Lu, Z. (2014) Iterative Hard Thresholding Methods for $l_0$ Regularized Convex Cone Programming. Mathematical Programming, 147, 125-154. https://doi.org/10.1007/s10107-013-0714-4

5. Candès, E.J. and Tao, T. (2005) Decoding by Linear Programming. IEEE Transactions on Information Theory, 51, 4203-4215.

6. Chen, S.S., Donoho, D.L. and Saunders, M.A. (1998) Atomic Decomposition by Basis Pursuit. SIAM Journal on Scientific Computing, 20, 33-61. https://doi.org/10.1137/S1064827596304010

7. Daubechies, I., Defrise, M. and De Mol, C. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.

8. Zou, H. (2006) The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101, 1418-1429. https://doi.org/10.1198/016214506000000735

9. Meinshausen, N. and Yu, B. (2009) Lasso-Type Recovery of Sparse Representations for High-Dimensional Data. Annals of Statistics, 37, 246-270. https://doi.org/10.1214/07-AOS582

10. Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004) Least Angle Regression. The Annals of Statistics, 32, 407-451.

11. Tibshirani, R. (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58, 267-288.

12. Zou, H. and Hastie, T. (2005) Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67, 301-320.

13. Kordas, G. (2015) A Neurodynamic Optimization Method for Recovery of Compressive Sensed Signals with Globally Converged Solution Approximating to L0 Minimization. IEEE Transactions on Neural Networks and Learning Systems, 26, 1363-1374.

14. Natarajan, B.K. (1995) Sparse Approximate Solutions to Linear Systems. SIAM Journal on Computing, 24, 227-234. https://doi.org/10.1137/S0097539792240406

15. Xiao, Y.H. and Song, H.N. (2012) An Inexact Alternating Directions Algorithm for Constrained Total Variation Regularized Compressive Sensing Problems. Journal of Mathematical Imaging and Vision, 44, 114-127.

16. Daubechies, I., Defrise, M. and De Mol, C. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.

17. Lai, M.J., Xu, Y.Y. and Yin, W.T. (2013) Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed $l_q$ Minimization. SIAM Journal on Numerical Analysis, 51, 927-957.

18. Garcia, C.B. and Li, T.Y. (1980) On the Number of Solutions to Polynomial Systems of Equations. SIAM Journal on Numerical Analysis, 17, 540-546. https://doi.org/10.1137/0717046