
Several preconditioners have been proposed to improve the convergence rate of iterative methods derived from splittings. In this paper, a comparison theorem for the preconditioned iterative method based on regular splitting is proved, and convergence and comparison theorems for an arbitrary preconditioner are given. The comparison theorem indicates the possibility of finding new preconditioners and splittings. The purpose of this paper is to show that the preconditioned iterative method yields a new splitting satisfying the regular or weak regular splitting condition. New combination preconditioners are also proposed. Numerical examples are given to demonstrate the validity of the comparison theorem.

There are many iterative methods for solving a linear system of equations,

$$Ax = b. \quad (1)$$

Here, $A$ is an $n \times n$ nonsingular M-matrix, and $x$ and $b$ are $n$-dimensional vectors. The matrix $A$, which arises from various applications, is usually large and sparse, so large amounts of computation time and memory are needed to solve such problems efficiently. Therefore, various preconditioners and iterative methods have been proposed. In this paper, the Gauss-Seidel method is treated as the classical iterative method. Basically, a classical iterative method is defined by a splitting of the coefficient matrix, and it is assumed that the splitting of the original linear system is a regular splitting. When the Gauss-Seidel method is applied to a preconditioned linear system, its splitting is the Gauss-Seidel splitting of the preconditioned matrix; for the original coefficient matrix $A$, however, this defines a new splitting. This new splitting also fulfils the condition of a regular or weak regular splitting. We propose new preconditioners by combining preconditioners that satisfy the regular splitting condition.

The outline of the paper is as follows: In Section 2, we review the preconditioned iterative method and some known results. And the iterative algorithm based on the splitting is shown. Section 3 consists of a comparison theorem and some numerical examples. Finally, in Section 4, we make some concluding remarks.

We review some known results [

Definition 1. Let $A$ be a real matrix. The representation $A = M - N$ is called a splitting of $A$ if $M$ is a nonsingular matrix. In addition, the splitting is

(i) Convergent if $\rho(M^{-1}N) < 1$.

(ii) Regular if $M^{-1} \ge O$ and $N \ge O$.

(iii) Weak regular if $M^{-1} \ge O$ and $M^{-1}N \ge O$.

We can write a splitting-based iterative method as follows,

$$x^{(k+1)} = M^{-1}Nx^{(k)} + M^{-1}b. \quad (2)$$

$M^{-1}N$ is called the iterative matrix. If the spectral radius of the iterative matrix is less than one, the sequence $\{x^{(k)}\}$ converges to the solution of the linear system (1). We can express the matrix $A$ as the matrix sum
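As a minimal numerical sketch of the iteration in Equation (2) (the function name and the small test matrix are ours, not from the paper), assuming NumPy is available:

```python
import numpy as np

def splitting_iteration(M, N, b, x0, tol=1e-10, max_iter=1000):
    """Iterate x^{(k+1)} = M^{-1} N x^{(k)} + M^{-1} b for a splitting A = M - N."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(M, N @ x + b)  # solve with M rather than forming M^{-1}
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

The sequence converges to the solution of $Ax = b$ whenever $\rho(M^{-1}N) < 1$.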

$$A = D - E - F \quad (3)$$

where $D = \mathrm{diag}\{a_{11}, a_{22}, \cdots, a_{nn}\}$, and $E$ and $F$ are strictly lower and strictly upper triangular $n \times n$ matrices, respectively. Using the diagonal preconditioner $D^{-1} = \mathrm{diag}\{1/a_{11}, 1/a_{22}, \cdots, 1/a_{nn}\}$, we can rewrite

$$A' = D^{-1}A = D^{-1}D - D^{-1}E - D^{-1}F = I - L - U. \quad (4)$$

In this article, we suppose that the coefficient matrix has been scaled so that its diagonal elements are all one. Thus, we consider the matrix sum of the coefficient matrix as follows,

$$A = I - L - U. \quad (5)$$

When setting $M = I$, we have the point Jacobi iterative method. If $M = I - L$, then we have the Gauss-Seidel iterative method.
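A short sketch of these two splittings (the helper name is ours), assuming $A$ already has unit diagonal as in Equation (5):

```python
import numpy as np

def point_splittings(A):
    """Return the Jacobi (M = I) and Gauss-Seidel (M = I - L) splittings
    (M, N) of A = I - L - U, each satisfying A = M - N."""
    L = -np.tril(A, k=-1)  # strictly lower part of A, negated
    U = -np.triu(A, k=1)   # strictly upper part of A, negated
    I = np.eye(A.shape[0])
    return (I, L + U), (I - L, U)
```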

Definition 2. We call $M = I - L$ the Gauss-Seidel regular splitting of $A = M - N$ if $M^{-1} \ge O$ and $N \ge O$.

For some preconditioner $P$, we call the following equation the preconditioned linear system,

$$PAx = Pb. \quad (6)$$

Many researchers have proposed preconditioners $P$. A preconditioner using the first column has been proposed [

$$P_c = I + C = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ -a_{21} & 1 & & \vdots \\ \vdots & & \ddots & 0 \\ -a_{n1} & 0 & \cdots & 1 \end{bmatrix}. \quad (7)$$

$P_c$ works to eliminate the elements of the first column of $A$ below the diagonal. Then $A_c = (I + C)A$ can be written,

$$A_c = I - L - U + C - CU = M_c - N_c, \quad (8)$$

where

$$M_c = I - D_c - L + C - E_c, \qquad N_c = U + F_c \quad (9)$$

and $D_c$, $E_c$ and $F_c$ are the diagonal, strictly lower and strictly upper triangular parts of $CU$, respectively. If $M_c$ is nonsingular, then the iterative matrix of the Gauss-Seidel method is defined by

$$T_c = M_c^{-1}N_c = (I - D_c - L + C - E_c)^{-1}(U + F_c). \quad (10)$$
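Equations (7)-(10) can be realized numerically as follows (a sketch with our own helper names; $M_c$ is exactly the lower triangular part of the preconditioned matrix, so the generic Gauss-Seidel helper recovers $T_c$):

```python
import numpy as np

def first_column_preconditioner(A):
    """P_c = I + C of Equation (7): C carries -a_{i1} below the diagonal."""
    P = np.eye(A.shape[0])
    P[1:, 0] = -A[1:, 0]
    return P

def gs_iterative_matrix(B):
    """Gauss-Seidel iterative matrix M^{-1} N, with M the lower triangle of B
    and N = M - B the negated strictly upper part."""
    M = np.tril(B)
    return np.linalg.solve(M, M - B)
```

For $A_c = P_c A$, `gs_iterative_matrix(first_column_preconditioner(A) @ A)` is $T_c$ of Equation (10).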

In 1991, Gunawardena et al. proposed the preconditioner $P_S = I + S$ [

$$P_S = I + S = \begin{bmatrix} 1 & -a_{12} & & 0 \\ & 1 & \ddots & \\ & & \ddots & -a_{n-1,n} \\ 0 & & & 1 \end{bmatrix}. \quad (11)$$

In 1997, Kohno et al. proposed the preconditioner $P_{S(\alpha)} = I + S(\alpha)$ with a parameter $\alpha$ to accelerate the convergence of the preconditioned iterative method [

$$P_U = I + \beta U = \begin{bmatrix} 1 & -\beta_1 a_{12} & \cdots & -\beta_1 a_{1n} \\ 0 & 1 & \ddots & \vdots \\ \vdots & & \ddots & -\beta_{n-1} a_{n-1,n} \\ 0 & \cdots & 0 & 1 \end{bmatrix}. \quad (12)$$

The parameter of each preconditioner can be chosen differently for each row.

The preconditioner $P_{\max} = I + S_{\max}$, which uses the entry of maximum absolute value in the upper triangular part of each row, was proposed [

$$(S_{\max})_{ij} = \begin{cases} -a_{ik_i} & 1 \le i < n,\ j = k_i, \\ 0 & \text{otherwise}, \end{cases} \quad (13)$$

where $k_i = \min I_i$, $I_i = \{ j : |a_{ij}| \text{ is maximal for } i+1 \le j \le n \}$ for $1 \le i < n$.
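Equation (13) can be sketched as follows (the function name is ours; `np.argmax` returns the first maximizer, which matches $k_i = \min I_i$):

```python
import numpy as np

def s_max(A):
    """Build S_max of Equation (13): in each row i, keep (negated) the entry of
    largest modulus to the right of the diagonal, at the smallest such column k_i."""
    n = A.shape[0]
    S = np.zeros((n, n))
    for i in range(n - 1):
        k = i + 1 + int(np.argmax(np.abs(A[i, i + 1:])))  # k_i = min I_i
        S[i, k] = -A[i, k]
    return S
```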

We now consider the comparison theorem for the two regular splittings of the original and the preconditioned linear systems in Equations (1) and (6). By using some preconditioner $P$, we have the preconditioned splitting $PA = M_P - N_P$, provided $M_P$ and $P$ are nonsingular. We rewrite the two splittings in the following relation,

$$A = M - N = P^{-1}M_P - P^{-1}N_P \quad (14)$$

because the iterative matrix of $PA$ can be transformed as follows,

$$M_P^{-1}N_P = M_P^{-1}PP^{-1}N_P = (P^{-1}M_P)^{-1}P^{-1}N_P. \quad (15)$$

A related lemma and theorems are stated as follows [

Lemma 3. Let $A = M - N$ be a regular splitting of $A$. If $A^{-1} \ge O$, then

$$\rho(M^{-1}N) < 1. \quad (16)$$

Conversely, if $\rho(M^{-1}N) < 1$, then $A^{-1} \ge O$.

Theorem 4. Let $A \in R^{n \times n}$ be irreducible. Then each of the following conditions is equivalent to the statement: $A$ is a nonsingular M-matrix.

(i) $A^{-1} \ge O$.

(ii) $Ax \ge 0$ for some $x > 0$.

Corollary 5. If $A \in Z^{n \times n}$ is a nonnegative diagonally dominant matrix with $a_{ii} > 0$ for all $i$, then $A^{-1} \ge O$.

Theorem 6. Let $T$ be a nonnegative matrix. If $Tx \ge \alpha x$ for some positive vector $x$, then $\rho(T) \ge \alpha$.

We now prove the comparison theorem for an arbitrary preconditioner $P$.

Theorem 7. Let $A = M - N = P^{-1}M_P - P^{-1}N_P$ be two regular splittings of $A$. If $A^{-1} \ge O$ and $(P^{-1}M_P)^{-1} \ge M^{-1} \ge O$, then

$$\rho(M_P^{-1}N_P) \le \rho(M^{-1}N) < 1. \quad (17)$$

Proof. Since $A^{-1} \ge O$, we have $\rho(M^{-1}N) < 1$ from Lemma 3. From the assumption $(P^{-1}M_P)^{-1} \ge M^{-1} \ge O$ and Theorem 4, we have the following relation

$$\{(P^{-1}M_P)^{-1} - M^{-1}\}Ax \ge 0. \quad (18)$$

It follows that

$$\begin{aligned} \{(P^{-1}M_P)^{-1} - M^{-1}\}Ax &= (M_P^{-1}P - M^{-1})Ax \\ &= M_P^{-1}PAx - M^{-1}Ax \\ &= M_P^{-1}P(P^{-1}(M_P - N_P))x - M^{-1}(M - N)x \\ &= (I - M_P^{-1}N_P)x - (I - M^{-1}N)x \\ &= M^{-1}Nx - M_P^{-1}N_Px \ge 0. \end{aligned} \quad (19)$$

Because the iterative matrix $M^{-1}N$ is nonnegative, there exists a positive vector $x$ satisfying the following inequality

$$M_P^{-1}N_Px \le \rho(M^{-1}N)x. \quad (20)$$

From Theorem 6, we have

$$\rho(M_P^{-1}N_P) \le \rho(M^{-1}N) < 1. \quad (21)$$

Example 1. We test the following matrix,

$$A_1 = \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ -0.5 & 0 & 0 & 1 \end{bmatrix}. \quad (22)$$

This matrix was shown in [

$$A_1 = M - N = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -0.5 & 0 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
$$= P_S^{-1}(M_S - N_S) = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}^{-1}\left(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ -0.5 & 0 & 1 & 0 \\ -0.5 & 0 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\right). \quad (23)$$

$M - N$ and $M_S - N_S$ are both Gauss-Seidel regular splittings. The assumption of Theorem 7 is satisfied, as the following inequality shows,

$$(P_S^{-1}M_S)^{-1} - M^{-1} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0.5 & 0.5 & 0 & 1 \\ 0 & 0.5 & 0 & 0 \end{bmatrix} \ge O. \quad (24)$$

Using the preconditioner $P_S = I + S$ is equivalent to using the following splitting,

$$A_1 = \begin{bmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -1 & 1 \\ 0 & 0 & 1 & -1 \\ -0.5 & 0 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \quad (25)$$

This splitting satisfies the weak regular splitting condition (the second matrix has a negative entry, so it is not a regular splitting). And the following inequality is satisfied,

$$(P_{\max}^{-1}M_{\max})^{-1} = (P_S^{-1}M_S)^{-1} \ge M^{-1} \ge O. \quad (26)$$

Therefore, we have the spectral radius of each iterative matrix,

$$\rho(M_S^{-1}N_S) = 0.500 \le \rho(M_c^{-1}N_c) = 0.707 \le \rho(M^{-1}N) = 0.794 < 1. \quad (27)$$
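The three spectral radii in Equation (27) can be reproduced with a few lines of NumPy (a sketch; the helper names are ours):

```python
import numpy as np

def gs_rho(B):
    """Spectral radius of the Gauss-Seidel iterative matrix of B."""
    M = np.tril(B)
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, M - B))))

A1 = np.array([[ 1.0, -1.0,  0.0,  0.0],
               [ 0.0,  1.0, -1.0,  0.0],
               [ 0.0,  0.0,  1.0, -1.0],
               [-0.5,  0.0,  0.0,  1.0]])

P_S = np.eye(4); P_S[[0, 1, 2], [1, 2, 3]] = 1.0  # I + S
P_c = np.eye(4); P_c[3, 0] = 0.5                  # I + C

print(round(gs_rho(P_S @ A1), 3),  # 0.5
      round(gs_rho(P_c @ A1), 3),  # 0.707
      round(gs_rho(A1), 3))        # 0.794
```

In fact $\rho(M^{-1}N) = 0.5^{1/3} \approx 0.794$ and $\rho(M_c^{-1}N_c) = 1/\sqrt{2} \approx 0.707$ exactly.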

For display, the spectral radii are given as approximate values. When using $P_{S(\alpha)} = I + S(\alpha)$ with parameter $\alpha$, the regular splitting condition is not satisfied for $\alpha > 1$. However, it is well known that the spectral radius may become smaller than that of $P_S = I + S$ in the range $\alpha > 1$. For example, using $\alpha = 1.1$ for each row, the assumption $(P_{S(\alpha)}^{-1}M_{S(\alpha)})^{-1} \ge (P_S^{-1}M_S)^{-1}$ holds and the spectral radius is $\rho(M_{S(\alpha)}^{-1}N_{S(\alpha)}) = 0.328$. However, $P_{S(\alpha)}A = M_{S(\alpha)} - N_{S(\alpha)}$ does not satisfy the regular splitting condition, since $N_{S(\alpha)}$ is not nonnegative. Moreover, a comparison condition between $P_S$ and $P_c$ has not been indicated: because the elements used in each preconditioner are different, the matrix comparison does not hold. Therefore, we show the following corollary.

Corollary 8. Let $A = M - N = P^{-1}M_P - P^{-1}N_P$ be two splittings of $A$. If

$$\{(P^{-1}M_P)^{-1} - M^{-1}\}Ax \ge 0 \quad (28)$$

then

$$\rho(M_P^{-1}N_P) \le \rho(M^{-1}N) < 1. \quad (29)$$

In Theorem 7 and Corollary 8, notice that the vector $x$ is a positive vector. When setting $x = (1, 1, \cdots, 1)^T$, $Ax$ gives the row sums of $A$, and $Ax \ge 0$ then means that $A \in Z^{n \times n}$ is diagonally dominant. For Example 1, if $x = (1, 1, 1, 1)^T$ is chosen, then $Ax = (0, 0, 0, 0.5)^T \ge 0$. We also test $x = (1.2, 1.1, 1.1, 0.8)^T$. As a result, we can confirm $Ax = (0.1, 0.0, 0.3, 0.2)^T \ge 0$ and the comparison condition $\{(P_S^{-1}M_S)^{-1} - (P_c^{-1}M_c)^{-1}\}Ax \ge 0$ between $P_S$ and $P_c$.
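This comparison condition can be checked numerically for Example 1 (a sketch with our own helper name; note $(P^{-1}M_P)^{-1} = M_P^{-1}P$, as in Equation (15)):

```python
import numpy as np

A1 = np.array([[ 1.0, -1.0,  0.0,  0.0],
               [ 0.0,  1.0, -1.0,  0.0],
               [ 0.0,  0.0,  1.0, -1.0],
               [-0.5,  0.0,  0.0,  1.0]])
P_S = np.eye(4); P_S[[0, 1, 2], [1, 2, 3]] = 1.0  # I + S
P_c = np.eye(4); P_c[3, 0] = 0.5                  # I + C

def split_inv(P, A):
    """(P^{-1} M_P)^{-1} = M_P^{-1} P for the Gauss-Seidel splitting of PA."""
    M_P = np.tril(P @ A)
    return np.linalg.solve(M_P, P)

x = np.array([1.2, 1.1, 1.1, 0.8])
diff = (split_inv(P_S, A1) - split_inv(P_c, A1)) @ (A1 @ x)
print(np.all(diff >= -1e-12))  # True: the condition of Corollary 8 holds
```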

Example 2. Let

$$A_2 = \begin{bmatrix} 1 & -0.3 & -0.1 & -0.1 & -0.1 \\ -0.1 & 1 & -0.1 & -0.3 & -0.1 \\ -0.1 & -0.1 & 1 & -0.1 & -0.3 \\ -0.1 & -0.1 & -0.1 & 1 & -0.3 \\ -0.1 & -0.1 & -0.1 & -0.1 & 1 \end{bmatrix}. \quad (30)$$

When using the Gauss-Seidel splitting for each preconditioned linear system, we have the following relations

$$(P_S^{-1}M_S)^{-1} \ge M^{-1} \ge O, \quad (31)$$

$$(P_{\max}^{-1}M_{\max})^{-1} \ge M^{-1} \ge O \quad (32)$$

and

$$\{(P_{\max}^{-1}M_{\max})^{-1} - (P_S^{-1}M_S)^{-1}\}Ax = (0, 0.104, 0.191, 0.040, 0.033)^T \ge 0, \quad (33)$$

where $x = (1, 1, 1, 1, 1)^T$. The relation of the spectral radii is

$$\rho(M_{\max}^{-1}N_{\max}) = 0.166 < \rho(M_S^{-1}N_S) = 0.197 < \rho(M^{-1}N) = 0.348. \quad (34)$$
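The ordering in Equation (34) can be checked numerically (a sketch; $S$ and $S_{\max}$ are built directly from $A_2$, and the helper names are ours):

```python
import numpy as np

def gs_rho(B):
    """Spectral radius of the Gauss-Seidel iterative matrix of B."""
    M = np.tril(B)
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, M - B))))

A2 = np.array([[ 1.0, -0.3, -0.1, -0.1, -0.1],
               [-0.1,  1.0, -0.1, -0.3, -0.1],
               [-0.1, -0.1,  1.0, -0.1, -0.3],
               [-0.1, -0.1, -0.1,  1.0, -0.3],
               [-0.1, -0.1, -0.1, -0.1,  1.0]])

n = A2.shape[0]
P_S, P_max = np.eye(n), np.eye(n)
for i in range(n - 1):
    P_S[i, i + 1] = -A2[i, i + 1]                      # I + S
    k = i + 1 + int(np.argmax(np.abs(A2[i, i + 1:])))  # k_i of S_max
    P_max[i, k] = -A2[i, k]                            # I + S_max

rhos = [gs_rho(P_max @ A2), gs_rho(P_S @ A2), gs_rho(A2)]
print([round(r, 3) for r in rhos])
```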

We test the following preconditioner, which combines the two preconditioners $P_S$ and $P_c$,

$$P_{S+C} = I + S + C = \begin{bmatrix} 1 & -a_{12} & & 0 \\ -a_{21} & 1 & \ddots & \\ \vdots & & \ddots & -a_{n-1,n} \\ -a_{n1} & \cdots & 0 & 1 \end{bmatrix}. \quad (35)$$

In this case, the condition of Theorem 7 is satisfied, and the spectral radius of the preconditioned Gauss-Seidel iterative matrix is 0.156. Moreover, with the combination preconditioner $P_{U+C} = I + U + C$, the weak regular splitting condition is satisfied and the spectral radius is 0.078.
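The combination preconditioners can be sketched in the same way (helper names ours; the spectral radii reported above for Example 2 are 0.156 for $P_{S+C}$ and 0.078 for $P_{U+C}$):

```python
import numpy as np

def gs_rho(B):
    """Spectral radius of the Gauss-Seidel iterative matrix of B."""
    M = np.tril(B)
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, M - B))))

A2 = np.array([[ 1.0, -0.3, -0.1, -0.1, -0.1],
               [-0.1,  1.0, -0.1, -0.3, -0.1],
               [-0.1, -0.1,  1.0, -0.1, -0.3],
               [-0.1, -0.1, -0.1,  1.0, -0.3],
               [-0.1, -0.1, -0.1, -0.1,  1.0]])
n = A2.shape[0]

P_SC = np.eye(n)
P_SC[1:, 0] = -A2[1:, 0]            # C: first column below the diagonal
for i in range(n - 1):
    P_SC[i, i + 1] = -A2[i, i + 1]  # S: first upper diagonal

P_UC = np.eye(n) - np.triu(A2, k=1)  # I + U (whole strictly upper part)
P_UC[1:, 0] = -A2[1:, 0]             # plus C

print(round(gs_rho(P_SC @ A2), 3), round(gs_rho(P_UC @ A2), 3))
```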

We show the spectral radii of the Gauss-Seidel iterative matrices for these preconditioners in Table 1.

Table 1. Spectral radii for Examples 1 and 2.

Example | none | $P_c$ | $P_S$ | $P_{S(\alpha)}$ | $P_{\max}$
---|---|---|---|---|---
1 | 0.794 | 0.707 | 0.500 | 0.328* | 0.500
2 | 0.348 | 0.266 | 0.197 | 0.082* | 0.166

*Denotes that Corollary 8 is not satisfied. In Example 2, $\alpha = (1.0, 1.6, 1.6, 1.6)$.

In order to find effective preconditioners and splittings with small computational cost, we proved a comparison theorem for them. The splitting in Equation (25), obtained from the preconditioned Gauss-Seidel iterative method with $P_S = I + S$, is a weak regular splitting. This splitting has a strange shape, but the iterative method converges. This result means that there exist splittings that reduce the spectral radius of the iterative matrix. Using the preconditioner $P_{S(\alpha)} = I + S(\alpha)$, smaller spectral radii are obtained for the two examples, but the splitting satisfies neither the regular nor the weak regular splitting condition. Moreover, we were able to test the combination preconditioners and show smaller spectral radii. However, there may be many further preconditioners that reduce the spectral radius while still satisfying the weak regular splitting condition. Finding new effective splittings and preconditioners is future work.

The author would like to thank the referees, who pointed out some improvements to the earlier manuscript. This study was supported by JSPS KAKENHI Grant Number JP26400181.

Kohno, T. (2017) Preconditioned Iterative Method for Regular Splitting. Advances in Pure Mathematics, 7, 180-187. https://doi.org/10.4236/apm.2017.72009