Journal of Applied Mathematics and Physics
Vol. 07, No. 01 (2019), Article ID: 89915, 7 pages
DOI: 10.4236/jamp.2019.71009

A Spectral Projected Gradient-Newton Two Phase Method for Constrained Nonlinear Equations

Yuezhe Zhang

College of Science, University of Shanghai for Science and Technology, Shanghai, China

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 29, 2018; Accepted: January 13, 2019; Published: January 16, 2019

ABSTRACT

In this paper, we propose a spectral projected gradient-Newton two-phase method for constrained semismooth equations. In the first phase, we use the spectral projected gradient method to obtain global convergence of the algorithm; the final point of the first phase then serves as a new initial point for a projected semismooth asymptotically Newton method, which yields fast local convergence.

Keywords:

Constrained Semismooth Equations, Spectral Projected Gradient Method, Newton Method, Two-Phase

1. Introduction

In this paper, we consider the constrained nonlinear semismooth equations problem: finding a vector $x \in \Omega$ such that

$H(x) = 0, \quad x \in \Omega$, (1)

where

$\Omega := \{x \in \mathbb{R}^n \mid l \le x \le u\}, \quad l_i \in \mathbb{R} \cup \{-\infty\}, \quad u_i \in \mathbb{R} \cup \{+\infty\}, \quad l_i < u_i, \quad i = 1, \dots, n,$

and $H : \mathbb{R}^n \to \mathbb{R}^n$ is a semismooth mapping. The notion of semismoothness was introduced for functionals by Mifflin [1] and extended to vector-valued functions by Qi and Sun [2].

Systems of constrained semismooth equations arise in various applications, for instance complementarity problems, box-constrained variational inequality problems, the KKT systems of variational inequality problems, and so on. Solving the nonlinear equations can be transformed into solving the following constrained optimization problem:

$\min f(x) = \frac{1}{2} \|H(x)\|^2 \quad \text{s.t.} \quad x \in \Omega$, (2)

where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and its gradient is denoted by $\nabla f(x)$. Many researchers have studied constrained optimization problems such as (2) and have proposed effective algorithms; for example, a new class of adaptive nonmonotone spectral gradient methods is given in [3], and an active-set projected trust-region algorithm in [4]. Methods for such optimization problems fall into first-order methods and second-order methods. Classical first-order algorithms include the gradient method, the subgradient method, the conjugate gradient method, etc. The main advantage of first-order methods is their low storage requirement, which makes them particularly suitable for large-scale problems. However, their convergence rate is at most linear, which cannot meet high-precision requirements. Second-order methods, by contrast, converge fast: under certain conditions they achieve superlinear or even quadratic convergence. Their disadvantage is that they need a good initial point, sometimes one close to a local optimal point.
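To fix notation, here is a minimal sketch of the merit function in (2) and its gradient $\nabla f(x) = H'(x)^T H(x)$; a smooth $H$ with Jacobian $J$ and the test problem are illustrative assumptions (the paper works with elements of the B-subdifferential of a semismooth $H$):

```python
import numpy as np

def merit(H, x):
    """Merit function f(x) = 0.5 * ||H(x)||^2 from (2)."""
    r = H(x)
    return 0.5 * float(r @ r)

def merit_grad(H, J, x):
    """Gradient of the merit function, grad f(x) = J(x)^T H(x)."""
    return J(x).T @ H(x)

# Hypothetical test problem: H(x) = x^2 - 1 componentwise.
H = lambda x: x**2 - 1.0
J = lambda x: np.diag(2.0 * x)
x = np.array([2.0, -0.5])
print(merit(H, x), merit_grad(H, J, x))
```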

Motivated by this, in this paper we combine the advantages of first-order methods with those of second-order methods. We consider a two-stage combined algorithm to solve the optimization problem: first, we use a first-order method to obtain global convergence of the algorithm, and then we use the final point obtained by the first-order method as a new initial point for a second-order method to obtain a fast convergence rate. At the same time, we use a projection technique to handle the constraints.

2. Preliminaries

In this section, we present some definitions and lemmas that will be useful for our main results.

Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. According to Rademacher's theorem, $H$ is differentiable almost everywhere. Denote the set of points at which $H$ is differentiable by $D_H$. We write $H'(x)$ for the usual $n \times n$ Jacobian matrix of partial derivatives whenever $x$ is a point at which the necessary partial derivatives exist. Let $\partial H(x)$ be the generalized Jacobian defined by Clarke in [5]. Then

$\partial H(x) = \operatorname{co}(\partial_B H(x))$, (3)

where $\operatorname{co}$ denotes the convex hull of a set and $\partial_B H(x) = \{\lim_{x^j \to x,\, x^j \in D_H} H'(x^j)\}$.

Definition 2.1 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. We say that $H$ is semismooth at $x$ if

$\lim_{V \in \partial H(x + t h'),\, h' \to h,\, t \downarrow 0} \{V h'\}$ (4)

exists for any $h \in \mathbb{R}^n$.

Lemma 2.2 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. The following statements are equivalent:

1) $H$ is semismooth at $x$;

2) For any $V \in \partial H(x + h)$, $h \to 0$,

$V h - H'(x; h) = o(\|h\|)$, (5)

$H(x + h) - H(x) - V h = o(\|h\|)$. (6)

Lemma 2.3 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. Then $H$ is semismooth at $x$ if each component of $H$ is semismooth at $x$.

Definition 2.4 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. If for any $V \in \partial H(x + h)$, $h \to 0$,

$V h - H'(x; h) = O(\|h\|^{1+p})$, (7)

where $0 < p \le 1$, then we say that $H$ is $p$-order semismooth at $x$.

Definition 2.5 [2]: Suppose $H : \mathbb{R}^n \to \mathbb{R}^n$ is a locally Lipschitzian function. We say that $H$ is strongly BD-regular at $x$ if all $V \in \partial_B H(x)$ are nonsingular.

Lemma 2.6 [6]: Suppose that $H : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous and $H$ is BD-regular at $x \in \mathbb{R}^n$. Then there exist a neighborhood $\mathcal{N}(x)$ of $x$ and a constant $K$ such that for any $y \in \mathcal{N}(x)$ and $V \in \partial_B H(y)$, $V$ is nonsingular and $\|V^{-1}\| \le K$.

Lemma 2.7 [6]: Suppose that $H : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous and $H$ is BD-regular at a solution $x^*$ of $H(x) = 0$. If $H$ is semismooth at $x^*$, then there exist a neighborhood $\mathcal{N}(x^*)$ of $x^*$ and a constant $k > 0$ such that for any $x \in \mathcal{N}(x^*)$,

$\|H(x)\| \ge k \|x - x^*\|$. (8)

Lemma 2.8 [7]: The projection operator $\Pi_X(\cdot)$ satisfies:

1) For any $x \in X$, $[\Pi_X(z) - z]^T [\Pi_X(z) - x] \le 0$ for all $z \in \mathbb{R}^n$.

2) $\|\Pi_X(y) - \Pi_X(z)\| \le \|y - z\|$ for all $y, z \in \mathbb{R}^n$.
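For the box $\Omega$ in (1), this projection is simply componentwise clipping. A minimal sketch (the numerical check at the end is an illustrative assumption, not part of [7]):

```python
import numpy as np

def project_box(z, l, u):
    """Orthogonal projection of z onto Omega = {x : l <= x <= u},
    computed componentwise: (Pi_Omega(z))_i = min(u_i, max(l_i, z_i))."""
    return np.minimum(u, np.maximum(l, z))

# Quick check of the nonexpansiveness property in Lemma 2.8(2).
l, u = np.zeros(2), np.ones(2)
y, z = np.array([2.0, -1.0]), np.array([0.5, 3.0])
assert (np.linalg.norm(project_box(y, l, u) - project_box(z, l, u))
        <= np.linalg.norm(y - z))
```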

Lemma 2.9 [8]: Given $x \in \mathbb{R}^n$ and $d \in \mathbb{R}^n$, the function $\xi$ defined by

$\xi(\lambda) = \|\Pi_X(x + \lambda d) - x\| / \lambda, \quad \lambda > 0$, (9)

is nonincreasing.

Lemma 2.9 actually implies that if $x \in X$ is a stationary point of (2), then

$\bar{d}_G(\lambda) = \Pi_X[x + \lambda d_G] - x = 0, \quad \forall \lambda \ge 0$. (10)

3. Algorithm

In order to obtain global convergence, in the first phase we adopt the nonmonotone spectral projected gradient method, a first-order method. The method with the one-dimensional search procedure of Algorithm 3.1 will be called SPG1 from now on, and that of Algorithm 3.2 will be called SPG2 in the rest of the paper.

Given $z \in \mathbb{R}^n$, we define $P(z)$ as the orthogonal projection onto $\Omega$, and we denote $g(x) = \nabla f(x)$. The algorithm requires an initial point $x_0 \in \Omega$, an integer $M \ge 1$, a small parameter $\alpha_{\min} > 0$, a large parameter $\alpha_{\max} > \alpha_{\min}$, a sufficient decrease parameter $\gamma \in (0, 1)$, safeguards $0 < \sigma_1 < \sigma_2 < 1$, and an initial step length $\alpha_0 \in [\alpha_{\min}, \alpha_{\max}]$.

Algorithm 3.1 [9] (SPG1)

Step 1. If $\|P(x_k - g(x_k)) - x_k\| < \varepsilon_1$, stop and output $x_k$.

Step 2. (Backtracking)

Step 2.1. Set $\lambda = \alpha_k$.

Step 2.2. Set $x_+ = P(x_k - \lambda g(x_k))$.

Step 2.3. If

$f(x_+) \le \max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j}) + \gamma \langle x_+ - x_k, g(x_k) \rangle$, (11)

then define $\lambda_k = \lambda$, $x_{k+1} = x_+$, $s_k = x_{k+1} - x_k$, $y_k = g(x_{k+1}) - g(x_k)$, and go to Step 3.

If (11) does not hold, define $\lambda_{\text{new}} \in [\sigma_1 \lambda, \sigma_2 \lambda]$, set $\lambda = \lambda_{\text{new}}$, and go to Step 2.2.

Step 3. Compute $b_k = \langle s_k, y_k \rangle$. If $b_k \le 0$, set $\alpha_{k+1} = \alpha_{\max}$; otherwise compute $a_k = \langle s_k, s_k \rangle$ and $\alpha_{k+1} = \min\{\alpha_{\max}, \max\{\alpha_{\min}, a_k / b_k\}\}$. Go to Step 1.
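A minimal Python sketch of Algorithm 3.1 under these definitions; the choice of $\lambda_{\text{new}}$ as the midpoint of $[\sigma_1 \lambda, \sigma_2 \lambda]$, the default parameter values, and the iteration cap are illustrative assumptions, not prescriptions of [9]:

```python
import numpy as np

def spg1(f, g, project, x0, alpha0=1.0, M=10, gamma=1e-4,
         sigma1=0.1, sigma2=0.9, alpha_min=1e-10, alpha_max=1e10,
         eps=1e-6, max_iter=1000):
    """Sketch of SPG1 (nonmonotone spectral projected gradient [9])."""
    x = project(np.asarray(x0, dtype=float))
    alpha = alpha0
    f_hist = [f(x)]
    for _ in range(max_iter):
        gx = g(x)
        # Step 1: stop when the projected-gradient residual is small.
        if np.linalg.norm(project(x - gx) - x) < eps:
            break
        # Step 2: nonmonotone backtracking along the projected arc.
        f_ref = max(f_hist[-M:])
        lam = alpha
        while True:
            x_plus = project(x - lam * gx)
            if f(x_plus) <= f_ref + gamma * (x_plus - x) @ gx:  # test (11)
                break
            lam *= 0.5 * (sigma1 + sigma2)  # a lam_new in [sigma1*lam, sigma2*lam]
        s, y = x_plus - x, g(x_plus) - gx
        # Step 3: spectral (Barzilai-Borwein) step-length update.
        b = s @ y
        alpha = alpha_max if b <= 0 else min(alpha_max, max(alpha_min, (s @ s) / b))
        x = x_plus
        f_hist.append(f(x))
    return x
```

For example, `spg1(lambda x: 0.5 * np.sum((x - 2) ** 2), lambda x: x - 2, lambda z: np.clip(z, 0, 1), np.zeros(2))` returns the boundary minimizer $(1, 1)$ of this toy problem over the box $[0, 1]^2$.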

Algorithm 3.2 [9] (SPG2). Steps 1 and 3 are the same as in Algorithm 3.1; only the backtracking step differs.

Step 2. (Backtracking)

Step 2.1. Compute $d_k = P(x_k - \alpha_k g(x_k)) - x_k$ and set $\lambda = 1$.

Step 2.2. Set $x_+ = x_k + \lambda d_k$.

Step 2.3. If

$f(x_+) \le \max_{0 \le j \le \min\{k, M-1\}} f(x_{k-j}) + \gamma \lambda \langle d_k, g(x_k) \rangle$, (12)

then define $\lambda_k = \lambda$, $x_{k+1} = x_+$, $s_k = x_{k+1} - x_k$, $y_k = g(x_{k+1}) - g(x_k)$, and go to Step 3.

If (12) does not hold, define $\lambda_{\text{new}} \in [\sigma_1 \lambda, \sigma_2 \lambda]$, set $\lambda = \lambda_{\text{new}}$, and go to Step 2.2.
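The practical difference from SPG1 is that SPG2 searches along the fixed feasible direction $d_k$, so the backtracking loop needs no further projections. A sketch of the replacement for Step 2, under the same illustrative assumptions as above:

```python
import numpy as np

def spg2_backtracking(f, g, project, x, alpha, f_ref,
                      gamma=1e-4, sigma1=0.1, sigma2=0.9):
    """Step 2 of SPG2: line search along d_k = P(x - alpha*g(x)) - x."""
    gx = g(x)
    d = project(x - alpha * gx) - x   # Step 2.1: one projection per iteration
    lam = 1.0
    while True:
        x_plus = x + lam * d          # Step 2.2: feasible, since Omega is convex
        # Step 2.3: nonmonotone sufficient-decrease test (12).
        if f(x_plus) <= f_ref + gamma * lam * (d @ gx):
            return x_plus, lam
        lam *= 0.5 * (sigma1 + sigma2)
```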

The output point of the first stage is used as the initial point of the next stage.

Algorithm 3.3 [10] (A projected semismooth asymptotically Newton method)

Step 0. Choose constants $\rho, \sigma, \eta \in (0, 1)$, $p_1 > 0$, $p_2 > 2$. Let $x_0 = x_N \in \Omega$, where $x_N$ is the final point of the first phase, and set $k := 0$.

Step 1. Choose $V_k \in \partial_B H(x_k)$ and compute $\nabla f(x_k) = V_k^T H(x_k)$.

Step 2. If $x_k$ is a stationary point, stop. Otherwise let

$d_G^k = -\gamma_k \nabla f(x_k)$, (13)

where

$\gamma_k = \min\{1, \eta f(x_k) / \|\nabla f(x_k)\|^2\}$, (14)

and go to Step 3.

Step 3. If the linear system

$H(x_k) + V_k d = 0$ (15)

has a solution $d_N^k$ satisfying

$\nabla f(x_k)^T d_N^k \le -p_1 \|d_N^k\|^{p_2}$, (16)

then use the direction $d_N^k$. Otherwise, set $d_N^k = d_G^k$.

Step 4. Let $m_k$ be the smallest nonnegative integer $m$ satisfying

$f(x_k + \bar{d}_k(\rho^m)) \le f(x_k) + \sigma \nabla f(x_k)^T \bar{d}_G^k(\rho^m)$, (17)

where for any $\lambda \in [0, 1]$,

$\bar{d}_k(\lambda) = t_k(\lambda) \bar{d}_G^k(\lambda) + [1 - t_k(\lambda)] \bar{d}_N^k(\lambda)$, (18)

$\bar{d}_G^k(\lambda) = \Pi_X[x_k + \lambda d_G^k] - x_k, \quad \bar{d}_N^k(\lambda) = \Pi_X[x_k + \lambda d_N^k] - x_k$, (19)

and $t_k(\lambda)$ is an optimal solution to

$\min_{t \in [0, 1]} \frac{1}{2} \|H(x_k) + V_k [t \bar{d}_G^k(\lambda) + (1 - t) \bar{d}_N^k(\lambda)]\|^2$. (20)

This optimal solution is given explicitly by

$t_k(\lambda) = \max\{0, \min\{1, \hat{t}(\lambda)\}\}$, (21)

where $\hat{t}(\lambda)$ denotes the unconstrained minimizer of the quadratic in (20). Let $\lambda_k = \rho^{m_k}$ and $x_{k+1} = x_k + \bar{d}_k(\lambda_k)$.

Step 5. Let $k := k + 1$ and go to Step 1.
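A minimal sketch of the direction selection in Steps 1-3. For illustration only, a smooth $H$ whose Jacobian $J(x)$ stands in for $V_k \in \partial_B H(x_k)$ is assumed, the parameter values are hypothetical, and the projected line-search combination of Step 4 is omitted:

```python
import numpy as np

def direction_step(H, J, x, eta=0.9, p1=1e-8, p2=2.1):
    """Steps 1-3 of Algorithm 3.3: try the semismooth Newton direction,
    fall back to the scaled steepest-descent direction d_G.
    Assumes x is not a stationary point (grad f(x) != 0)."""
    Hx = H(x)
    Vk = J(x)                          # stands in for V_k in partial_B H(x_k)
    grad = Vk.T @ Hx                   # Step 1: grad f(x_k) = V_k^T H(x_k)
    fx = 0.5 * float(Hx @ Hx)
    # Step 2: scaled steepest-descent direction, (13)-(14).
    gamma_k = min(1.0, eta * fx / float(grad @ grad))
    d_G = -gamma_k * grad
    # Step 3: Newton system (15); accept only if descent test (16) holds.
    try:
        d_N = np.linalg.solve(Vk, -Hx)
        if grad @ d_N <= -p1 * np.linalg.norm(d_N) ** p2:
            return d_N, d_G
    except np.linalg.LinAlgError:
        pass
    return d_G, d_G                    # fall back: d_N := d_G
```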

4. Convergence Analysis

Theorem 4.1 [9]: Algorithm SPG1 is well defined, and any accumulation point of the sequence $\{x_k\}$ that it generates is a constrained stationary point.

Theorem 4.2 [9]: Algorithm SPG2 is well defined, and any accumulation point of the sequence $\{x_k\}$ that it generates is a constrained stationary point.

Theorem 4.3 [10]: Let $\{x_k\} \subset X$ be a sequence generated by Algorithm 3.3. Then any accumulation point of $\{x_k\}$ is a stationary point of (2).

5. Application

Many practical problems can be solved by transforming them into constrained semismooth equations. An example is the mixed complementarity problem (MCP): given a continuously differentiable function $F : \mathbb{R}^n \to \mathbb{R}^n$, find a vector $x \in X$ that satisfies

$F(x)^T (y - x) \ge 0, \quad \forall y \in X$. (22)

The function $\psi_\alpha : \mathbb{R}^2 \to \mathbb{R}$ with $\alpha \in [0, 1]$ is defined by

$\psi_\alpha(a, b) := ([\phi_\alpha(a, b)]_+)^2 + ([a]_+)^2$, (23)

where $[a]_+ := \max\{0, a\}$ for any $a \in \mathbb{R}$ and $\phi_\alpha : \mathbb{R}^2 \to \mathbb{R}$ is the penalized Fischer-Burmeister function introduced by Chen et al. [11], which has the form

$\phi_\alpha(a, b) := \alpha \phi_{FB}(a, b) + (1 - \alpha) [a]_+ [b]_+$. (24)

Here, $\phi_{FB} : \mathbb{R}^2 \to \mathbb{R}$ is the Fischer-Burmeister NCP function, given by

$\phi_{FB}(a, b) := a + b - \sqrt{a^2 + b^2}$. (25)

The mixed complementarity problem can be transformed into a semismooth system of equations by means of the above functions.

Let $N = \{1, \dots, n\}$ and

$I_f := \{i \in N \mid l_i = -\infty, u_i = +\infty\}, \quad I_l := \{i \in N \mid l_i > -\infty, u_i = +\infty\}, \quad I_u := \{i \in N \mid l_i = -\infty, u_i < +\infty\}, \quad I_{lu} := N \setminus (I_l \cup I_u \cup I_f)$. (26)

MCP can be reformulated as $H(x) = 0$ with

$H_i(x) := \begin{cases} F_i(x) & \text{if } i \in I_f, \\ \phi_\alpha(x_i - l_i, F_i(x)) & \text{if } i \in I_l, \\ \phi_\alpha(u_i - x_i, -F_i(x)) & \text{if } i \in I_u, \\ \psi_\alpha(x_i - l_i, F_i(x)) + \psi_\alpha(u_i - x_i, -F_i(x)) & \text{if } i \in I_{lu}, \end{cases} \quad i = 1, \dots, n.$ (27)

Then we can use the two-phase method to solve this problem.
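As a concrete illustration, here is a sketch of the building blocks (23)-(27) applied to the purely lower-bounded case $i \in I_l$; the value of $\alpha$, the helper names, and the linear test problem are hypothetical:

```python
import numpy as np

def phi_fb(a, b):
    """Fischer-Burmeister NCP function (25)."""
    return a + b - np.sqrt(a * a + b * b)

def phi_alpha(a, b, alpha=0.95):
    """Penalized Fischer-Burmeister function (24) of Chen et al. [11]."""
    return alpha * phi_fb(a, b) + (1.0 - alpha) * np.maximum(a, 0.0) * np.maximum(b, 0.0)

def H_lower(F, x, l, alpha=0.95):
    """Components of (27) for indices in I_l (finite lower bound, u_i = +inf):
    H_i(x) = phi_alpha(x_i - l_i, F_i(x))."""
    return phi_alpha(x - l, F(x), alpha)

# Hypothetical linear complementarity test problem: F(x) = M x + q, l = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
print(H_lower(F, np.array([0.5, 0.5]), np.zeros(2)))
```

The resulting semismooth system $H(x) = 0$ can then be handed to the two-phase method: SPG for the global phase and Algorithm 3.3 for the fast local phase.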

6. Conclusion

In this paper, we proposed a two-phase method for constrained semismooth equations. Other first-order and second-order methods can be combined within the same framework. The iteration complexity analysis of the first-order phase is a meaningful topic, and we will pursue it in further research.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Zhang, Y.Z. (2019) A Spectral Projected Gradient-Newton Two Phase Method for Constrained Nonlinear Equations. Journal of Applied Mathematics and Physics, 7, 104-110. https://doi.org/10.4236/jamp.2019.71009

References

1. Mifflin, R. (1977) Semismooth and Semiconvex Functions in Constrained Optimization. SIAM Journal on Control and Optimization, 15, 959-972. https://doi.org/10.1137/0315061

2. Qi, L. and Sun, J. (1993) A Nonsmooth Version of Newton's Method. Mathematical Programming, 58, 353-368. https://doi.org/10.1007/BF01581275

3. Ji, L. and Yu, Z.S. (2009) New Class of Adaptive Nonmonotone Spectral Projected Gradient Method. Journal of University of Shanghai for Science & Technology.

4. Qi, L.Q., Tong, X.J. and Li, D.H. (2004) Active-Set Projected Trust-Region Algorithm for Box-Constrained Nonsmooth Equations. Journal of Optimization Theory and Applications, 120, 601-625. https://doi.org/10.1023/B:JOTA.0000025712.43243.eb

5. Clarke, F.H. (1983) Optimization and Nonsmooth Analysis. John Wiley & Sons, New York.

6. Qi, L. (1993) Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations. Mathematics of Operations Research, 18, 227-244. https://doi.org/10.1287/moor.18.1.227

7. Zarantonello, E.H. (1971) Projections on Convex Sets in Hilbert Space and Spectral Theory: Part I. Projections on Convex Sets; Part II. Spectral Theory. Revista de la Unión Matemática Argentina, 26, 237-424.

8. Powell, M.J.D. (1983) Variable Metric Methods for Constrained Optimization. In: Mathematical Programming: The State of the Art, Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-68874-4_12

9. Birgin, E.G., Martínez, J.M. and Raydan, M. (2000) Nonmonotone Spectral Projected Gradient Methods on Convex Sets. SIAM Journal on Optimization, 10, 1196-1211. https://doi.org/10.1137/S1052623497330963

10. Sun, D., Womersley, R.S. and Qi, H. (2002) A Feasible Semismooth Asymptotically Newton Method for Mixed Complementarity Problems. Mathematical Programming, 94, 167-187. https://doi.org/10.1007/s10107-002-0305-2

11. Chen, B., Chen, X. and Kanzow, C. (2000) A Penalized Fischer-Burmeister NCP-Function. Mathematical Programming, 88, 211-216. https://doi.org/10.1007/PL00011375