Applied Mathematics
Vol. 3, No. 10A (2012), Article ID: 24108, 7 pages. DOI: 10.4236/am.2012.330189

A Fixed Point Method for Convex Systems

Morteza Kimiaei1, Farzad Rahpeymaii2

1Department of Mathematics, Razi University, Kermanshah, Iran

2Department of Mathematics, Payame Noor University, Tehran, Iran

Email: morteza.kimiaei@gmail.com, rahpeyma_83@yahoo.com

Received June 26, 2012; revised September 27, 2012; accepted October 6, 2012

Keywords: Convex Equations; Least Squares; ℓ1-Regularization Problems; Fixed Point; Quadratic Convergence

ABSTRACT

We present a new fixed point technique to solve a system of convex equations in several variables. Our approach is based on two powerful algorithmic ideas: operator-splitting and steepest descent direction. The quadratic convergence of the proposed approach is established under some reasonable conditions. Preliminary numerical results are also reported.

1. Introduction

Systems of convex equations form a class of problems that is conceptually close to both constrained and unconstrained optimization and that often arises in applied areas of mathematics, physics, biology, engineering, geophysics, chemistry, and industry. Consider the following system of convex equations

(1)

in which F is a convex, continuously differentiable function. When the system (1) is linear, many approaches are available for solving it; among the most interesting are fixed point methods, which have been studied comprehensively by many authors. Examples include shrinkage, subspace optimization and continuation [1], the fixed-point continuation method [2], nonlinear wavelet image processing [3], the EM method [4], the iterative thresholding method [5] and fast iterative thresholding [6]. The system (1) is called overdetermined when it has more equations than unknowns, underdetermined when it has fewer, and square when the two numbers coincide. Most of the time, we wish to find a point at which (1) holds as closely as possible; that is, our objective is to reduce the residual as much as possible and, if possible, to drive it to zero. Hence the system of convex Equations (1) can be written as the unconstrained optimization problem

(2)

in which

(3)

It is obvious that

where J denotes the Jacobian matrix of F. In this work, we consider an ℓ1-regularized least squares problem for system (1):

(4)

in which the regularization parameter is positive. We note that if each component of F is convex, then F is convex. On the other hand, the convexity of F implies that f is convex. Therefore, φ is a convex function.
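For readability, the quantities involved can be summarized as follows; this is only a sketch assuming the standard ℓ1-regularized least-squares setting of [2], with μ an assumed symbol for the regularization parameter:

\[
f(x) = \tfrac{1}{2}\|F(x)\|_2^2, \qquad \nabla f(x) = J(x)^{T} F(x), \qquad \varphi(x) = \mu\|x\|_1 + f(x), \qquad \mu > 0.
\]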

As an example, Hale, Yin and Zhang [2] presented a fixed-point continuation method for ℓ1-regularized minimization that is based on operator splitting and continuation:

(5)

where the mappings are defined as follows:

(6)

(7)

The operator on the right-hand side of relation (7) denotes the component-wise product of the two vectors involved. Because the parameter τ is kept constant, the number of iterations and the computational cost increase, so the method is not ideal. To overcome this disadvantage, we introduce an adaptive choice of the parameter τ based on the steepest descent direction:

(8)

The analysis of the new approach shows that it inherits both the stability of fixed point methods and the low computational cost of steepest descent methods. We also investigate the global convergence of the proposed method to first-order stationary points and establish its quadratic convergence rate. To show the efficiency of the proposed method in practice, some numerical results are also reported.
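To make the shrinkage step concrete, the following Python sketch implements the component-wise shrinkage operator appearing in (5)-(7) and one fixed-point step of the form (5). The exact-line-search value used for the step size is an assumption standing in for (8), which is not reproduced verbatim here, and mu is an assumed name for the regularization parameter.

import numpy as np

def shrink(y, nu):
    """Component-wise soft-thresholding: sgn(y) * max(|y| - nu, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - nu, 0.0)

def fixed_point_step(x, F, J, mu):
    """One fixed-point step x <- shrink(x - tau * grad f(x), tau * mu).

    F(x) returns the residual vector and J(x) its Jacobian.
    The step size tau below (an exact line-search value for the
    least-squares part) is an assumption, not the paper's formula (8).
    """
    Jx = J(x)
    r = F(x)
    g = Jx.T @ r              # gradient of f(x) = 0.5 * ||F(x)||^2
    Jg = Jx @ g
    denom = Jg @ Jg
    tau = (g @ g) / denom if denom > 0 else 1.0
    return shrink(x - tau * g, tau * mu)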

The rest of this paper is organized as follows: In Section 2, we describe the motivation behind the proposed algorithm together with its structure. In Section 3, we prove that the proposed algorithm is globally convergent. Preliminary numerical results are reported in Section 4. Finally, some conclusions are given in Section 5.

2. The New Algorithm: Motivation and Structure

In this section, we first introduce a fixed point algorithm for small-scale convex systems of equations. We then give some properties of the algorithm and investigate its global convergence as well as its quadratic convergence rate. The objective function in (4) is a sum of two convex functions, and by convex analysis, minimizing a convex function is equivalent to finding a zero of its subdifferential. Consider the set of optimal solutions of (4). It is well known that an optimality condition for (4) is

or equivalently,

(9)

where 0 denotes the zero vector and the subscript i indicates the i-th component. It follows readily from (9) that 0 is an optimal solution of (4) if and only if the corresponding component-wise condition holds, or in other words,

Therefore, it is easy to check whether 0 is a solution of (4) (see [2]).

One of the simplest methods for solving (4) generates a sequence based on the steepest descent direction. Note that if the system (1) is a linear system of equations, the relevant quantities take an explicit form; here, since the system (1) is a convex system of equations, for the purpose of our analysis we will always make the corresponding choice. Using this information, we present a proximal regularization of the linearized function f at the current iterate for problem (4) (see [7]), and write it equivalently as

(10)

After ignoring constant terms, (10) can be rewritten as

(11)

Notice that the function in problem (11) is minimized if and only if each of its component functions

is minimized. With this choice, we can simply obtain the component-wise minimizer as follows:

Therefore, the solution of (4) is obtained componentwise. A sketch of this component-wise minimizer is given below; based on the above arguments, the new fixed point algorithm is then outlined as Algorithm 1.
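The following display is a sketch of the separable subproblem and its closed-form minimizer, consistent with the shrinkage step of [2]; the symbols z, τ and μ are our assumed notation for the gradient step, the step size and the regularization parameter:

\[
\min_{y_i}\ \frac{1}{2\tau}(y_i - z_i)^2 + \mu\,|y_i|, \qquad y_i^{*} = \operatorname{sgn}(z_i)\,\max\{|z_i| - \tau\mu,\ 0\}, \qquad z = x^{k} - \tau\,\nabla f(x^{k}).
\]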

Algorithm 1: Fixed point algorithm (FP)

Input: Choose an initial point and the required constants.

Begin

Set k := 0;

While {Start loop }

Step 1: {Shrinkage parameter calculation}

Step 2: {Shrinkage operator calculation}

Step 3: {Parameter update}

Calculate τ as in (8);

Generate the next iterate;

       Increment k by one and go to Step 1;

End While {End loop}

End
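For illustration, a compact Python sketch of the overall FP loop is given below. The stopping test, the steepest-descent-type step size standing in for (8), and the parameter names mu, tol and max_iter are assumptions made for this sketch, not the paper's exact choices.

import numpy as np

def fp_solve(F, J, x0, mu=1e-2, tol=1e-6, max_iter=1000):
    """Sketch of Algorithm 1 (FP). F and J return the residual and Jacobian.

    The step-size rule and stopping test are illustrative assumptions.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = F(x)
        g = J(x).T @ r                      # gradient of f = 0.5 * ||F||^2
        if np.linalg.norm(g, np.inf) <= tol:
            break                           # approximate stationarity
        Jg = J(x) @ g
        denom = Jg @ Jg
        tau = (g @ g) / denom if denom > 0 else 1.0   # steepest-descent-type step
        nu = tau * mu                       # shrinkage parameter
        z = x - tau * g                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - nu, 0.0)   # shrinkage step
    return x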

3. Convergence Analysis

In this section, we give the convergence analysis of the proposed algorithm described in Section 2. For this analysis, we need the following assumptions:

(H1) Problem (4) has an optimal solution set, and there exists a set

for some parameters, such that f is twice continuously differentiable on Ω and

for all such points. Using the mean-value theorem, we have

for any.

(H2) There exists a constant such that

Lemma 3.1. By the definition of the iterates and of τ satisfying (8), we have

for any.

Proof. Suppose first that the first case holds; then, by (8), we conclude the claim. Now suppose the second case holds. We prove the claim by contradiction: assume it fails, so that the stated relations hold simultaneously.

Therefore we arrive at a contradiction, which completes the proof.

In the following lemma, we show that the new choice of τ satisfies the requirements of Lemma 4.1 and Corollary 4.1 of [2].

Lemma 3.2. Under assumption (H1), the stated choices of the parameters result in a nonexpansive mapping over Ω, i.e., for any pair of points in Ω,

(12)

Moreover, whenever equality holds in (12).

Proof. We distinguish two cases:

1) If, then

(13)

Hence,

By Lemma 3.1, we have the stated equivalence.

Then

If the contrary holds, then by Equation (13) we obtain

which gives a contradiction.

Hence, it follows that

2) If, then

(14)

Hence,

By Lemma 3.1, we have the stated equivalence.

Then

If the contrary holds, then by Equation (14) we obtain

which gives a contradiction. Hence,

so that

which completes the proof.

Corollary 3.3 (Constant optimal gradient). Under assumption (H1), for any optimal solution there is a vector such that the stated identity holds.

Let the solution set of (4) be given, and let the vector be as specified in Corollary 3.3. Then, we define

where the index sets L and E are determined by these quantities. We will show that the sequence generated by (5) converges finitely in the components indexed by L and quadratically in the components indexed by E.

It is obvious from the optimality condition (9) that, and for any, we have

Hale, Yin and Zhang [2] established the finite convergence properties stated in the following theorems. The proofs of Theorems 3.4 and 3.5 are similar to those of Theorems 4.1 and 4.2 in [2].

Theorem 3.4. Under assumption (H1), the sequence generated by the fixed point iteration (5) applied to problem (4) from any starting point converges to some optimal solution. In addition, for all but finitely many iterations, we have

(15)

(16)

where the numbers of iterations not satisfying (15) and (16) are bounded by finite constants, respectively.

Theorem 3.5 (The quadratic case). Let f be a convex quadratic function that is bounded below, let its Hessian be given, and suppose it satisfies

then the sequence generated by the fixed point iteration (5) applied to problem (4) from any starting point converges to some optimal solution. In addition, (15) and (16) hold for all but finitely many iterations.

Lemma 3.6. Suppose that assumptions (H1) and (H2) hold. Then, we have

Proof. From (9), we can obtain

then, by (H2) and the above inequality, we have

For sufficiently large k, we conclude that

Now, consider the sequence generated by the FP algorithm. According to the fixed point iteration (5), it converges to some point. We will show that the convergence is quadratic. To this end, the following additional assumption is required:

(H3) The following condition

(17)

holds, where the quantities involved are as defined above. Suppose that k is sufficiently large so that the required condition holds for all subsequent iterations. Also, let

denote the square sub-matrix of the matrix C corresponding to the index set E. First, suppose the first case occurs; then the mean-value theorem yields

(18)

Since the involved mapping is non-expansive [2], using (H2), (17) and (18), we have

Second, suppose the other case holds. Similarly to the first case, we conclude that

Theorem 3.7. Suppose that (H1)-(H3) hold and let the sequence be generated by the FP algorithm from a given starting point. For sufficiently large k, the sequence converges quadratically to some optimal point.

4. Preliminary Numerical Experiments

This section reports some numerical results and comparisons of an implementation of the proposed idea with some other algorithms on small-scale problems. All codes are written in the MATLAB 9 programming environment in double precision, using the same subroutines. In the experiments, the presented algorithms are stopped whenever

Test problems are as follows:

1) Tridiagonal linear system (an illustrative code sketch is given after this list):

where A is a tridiagonal matrix given by

and

2) Five-diagonal linear system:

where A is a five-diagonal matrix given by

and

3) Logarithmic function [8]

initial point:.

4) Strictly convex function 1 [8] that is the gradient of

initial point:.

5) Strictly convex function 2 [8] that is the gradient of

initial point:.

6) Strictly convex function 3 [8] that is the gradient of

initial point:.

7) Linear function-full rank [8]

initial point:.

8) Penalty function [8]

initial point:.

9) Sum square function [9]

initial point:.

10) Trigonometric exponential function [10]

where.

initial point:.
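As referenced in item 1, a hypothetical Python setup of a tridiagonal linear test system is sketched below; the entries of A and b are illustrative placeholders, not the exact data used in the experiments.

import numpy as np

def tridiagonal_system(n):
    """Build an illustrative tridiagonal linear test system F(x) = A x - b.

    The coefficients below are placeholders, not the paper's exact data.
    """
    A = (np.diag(2.0 * np.ones(n))
         + np.diag(-1.0 * np.ones(n - 1), 1)
         + np.diag(-1.0 * np.ones(n - 1), -1))
    b = np.ones(n)
    F = lambda x: A @ x - b          # residual of the linear system
    J = lambda x: A                  # constant Jacobian
    return F, J

# Example use with the FP sketch given after Algorithm 1:
# F, J = tridiagonal_system(100)
# x = fp_solve(F, J, x0=np.zeros(100))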

We compare the numerical results obtained by running the following algorithms:

1) FP1

2) FP2

3) DS (steepest descent direction),

where the Jacobian matrix Jk is exact. The algorithms FP1 and FP2 take advantage of different choices of the parameters. The dimensions of the problems range from 2 to 100. The results for the small-scale problems are summarized in Table 1.

In Table 1, the two reported quantities indicate, respectively, the total number of iterations and the total number of function evaluations for small-scale problems with dimensions from 2 to 100. Evidently, FP2 performs better than the other presented algorithms in terms of both the total number of iterations and the total number of function evaluations. From Table 1, we observe that the proposed algorithm is the best one on all of the test problems, and we can deduce that our new algorithm is more efficient and robust than the other considered algorithms for solving small-scale systems of convex equations. In more detail, the results of Table 1 are interpreted in Figure 1 by means of the performance profile of Dolan and Moré [11].

In the procedure of Dolan and Moré, the profile of each code is measured by the ratio of its computational outcome to the best outcome over all codes. This profile offers a tool for comparing the performance of iterative processes in a statistical setting. In particular, let S be the set of all algorithms and P a set of

Table 1. Numerical results for small scale problems.

Figure 1. Performance profile for the number of iterates.

test problems. For each problem p and solver s, the computational result with respect to the chosen performance index is recorded. Then, the following performance ratio is defined:

(19)

If an algorithm s is not convergent for a problem p, the procedure assigns it a value that is strictly larger than any performance ratio (19). For any factor, the overall performance of algorithm s is given by

In fact, this quantity is the probability for algorithm s that its performance ratio is within the given factor of the best possible ratio; the function is the distribution function of the performance ratio. In particular, its value at one gives the probability that algorithm s wins over all other algorithms, and its limiting value gives the probability that algorithm s solves a problem. Therefore, the performance profile can be considered a measure of both the efficiency and the robustness of the algorithms. In Figure 1, the x-axis shows the performance factor while the y-axis exhibits the corresponding profile value.
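For reference, the performance profile construction can be sketched in Python as follows; the array layout (rows as problems, columns as solvers) and the penalty assigned to failed runs are assumptions made for illustration.

import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.

    T: (num_problems, num_solvers) array of performance measures
       (e.g., iteration counts); np.nan marks a failed run.
    taus: 1-D array of factors at which to evaluate the profile.
    Returns P with P[i, s] = fraction of problems on which solver s
    has a performance ratio <= taus[i].
    """
    T = np.asarray(T, dtype=float)
    best = np.nanmin(T, axis=1, keepdims=True)        # best result per problem
    ratios = T / best                                  # performance ratios
    top = np.nanmax(ratios)
    big = 2.0 * top if np.isfinite(top) else 2.0
    ratios = np.where(np.isnan(ratios), big, ratios)   # failures get a large ratio
    taus = np.asarray(taus, dtype=float)
    return (ratios[None, :, :] <= taus[:, None, None]).mean(axis=1)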

From Figure 1, it is clear that FP2 has the most wins compared with the other algorithms, solving about 60% of the test problems with the greatest efficiency. If one concentrates on the ability to complete a run successfully, FP2 is again the best of the considered algorithms, since its profile reaches the top faster than the others.

5. Conclusion

In this paper, we have presented a new algorithm for small-scale systems of convex equations that blends the steepest descent direction with fixed point ideas. Preliminary numerical experiments on a set of small-scale convex systems of equations indicate that significant gains in both the total number of iterations and the total number of function evaluations can be achieved.

REFERENCES

  1. Z. Wen, W. Yin, D. Goldfarb and Y. Zhang, “A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization and Continuation,” CAAM Technical Report TR09-01, 2009.
  2. E. T. Hale, W. Yin and Y. Zhang, “A Fixed-Point Continuation Method for ℓ1-Regularized Minimization with Applications to Compressed Sensing,” CAAM Technical Report TR07-07, 7 July 2007.
  3. A. Chambolle, R. A. De Vore, N. Y. Lee and B. J. Lucier, “Nonlinear Wavelet Image Processing: Variational Problems, Compression, and Noise Removal through Wavelet Shrinkage,” IEEE Transactions on Image Processing, Vol. 7, No. 3, 1998, pp. 319-335. doi:10.1109/83.661182
  4. M. A. T. Figueiredo and R. D. Nowak, “An EM Algorithm for Wavelet-Based Image Restoration,” IEEE Transactions on Image Processing, Vol. 12, No. 8, 2003, pp. 906-916. doi:10.1109/TIP.2003.814255
  5. I. Daubechies, M. Defrise and C. De Mol, “An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint,” Communications on Pure and Applied Mathematics, Vol. 57, No. 11, 2004, pp. 1413-1457. doi:10.1002/cpa.20042
  6. C. Vonesch and M. Unser, “A Fast Iterative Thresholding Algorithm for Wavelet-Regularized Deconvolution,” Proceedings of the SPIE Optics and Photonics 2007 Conference on Mathematical Methods: Wavelet XII, Vol. 6701, San Diego, 26-29 August 2007, p. 15.
  7. B. Martinet, “Régularisation d’Inéquations Variationnelles par Approximations Successives,” Recherche Opérationnelle, Vol. 4, No. 3, 1970, pp. 154-158.
  8. G. L. Yuan, Z. X. Wei and S. Lu, “Limited Memory BFGS Method with Backtracking for Symmetric Nonlinear Equations,” Mathematical and Computer Modelling, Vol. 54, No. 1-2, 2011, pp. 367-377. doi:10.1016/j.mcm.2011.02.021
  9. M. Molga and C. Smutnicki, “Test Functions for Optimization Needs,” 2005. www.zsd.itc.pwr.pc/file/docs/function.pdf
  10. L. Lukšan and J. Vlček, “Sparse and Partially Separable Test Problems for Unconstrained and Equality Constrained Optimization,” Technical Report No. 767, 1999.
  11. E. D. Dolan and J. J. Moré, “Benchmarking Optimization Software with Performance Profiles,” Mathematical Programming, Vol. 91, No. 2, 2002, pp. 201-213.