Applied Mathematics, 2011, 2, 236-240. doi:10.4236/am.2011.22026. Published Online February 2011 (http://www.SciRP.org/journal/am). Copyright © 2011 SciRes.

Improving AOR Method for a Class of Two-by-Two Linear Systems

Cuixia Li (1), Shiliang Wu (2)
(1) College of Mathematics, Chengdu University of Information Technology, Chengdu, China
(2) School of Mathematics and Statistics, Anyang Normal University, Anyang, China
E-mail: lixiatk@126.com, wushiliang1999@126.com, slwu@aynu.edu.cn
Received October 18, 2010; revised December 10, 2010; accepted December 15, 2010

Abstract

In this paper, the preconditioned accelerated overrelaxation (AOR) method for solving a class of two-by-two linear systems is presented. A new preconditioner is proposed following the idea of Wu and Huang [1]. The spectral radii of the iteration matrices of the preconditioned and the original methods are compared. The comparison shows that, under certain conditions, the convergence rate of the preconditioned AOR method is strictly better than that of the original AOR method whenever the original method is convergent. Finally, a numerical example is presented to confirm our results.

Keywords: Preconditioner, AOR Method, Convergence, Comparison

1. Introduction

Sometimes we have to solve linear systems of the form

$$Hx = f, \qquad (1.1)$$

where

$$H = \begin{pmatrix} I - B_1 & D \\ C & I - B_2 \end{pmatrix}$$

is non-singular, with $B_1 = \big(b^{(1)}_{ij}\big) \in \mathbb{R}^{p\times p}$, $B_2 = \big(b^{(2)}_{ij}\big) \in \mathbb{R}^{(n-p)\times(n-p)}$, $C = (c_{ij}) \in \mathbb{R}^{(n-p)\times p}$ and $D = (d_{ij}) \in \mathbb{R}^{p\times(n-p)}$.

Systems such as (1.1) are important and appear in many applications of scientific computing. For example, (1.1) arises when the generalized least-squares problem

$$\min_{x \in \mathbb{R}^n} (Ax - b)^T W^{-1} (Ax - b)$$

is considered, where $W$ is the variance-covariance matrix; see [2-5] for details.

As is known, the linear systems (1.1) can be solved by direct methods or iterative methods. Direct methods are widely employed when the order of the coefficient matrix $H$ is not too large, and are often regarded as robust methods.
However, the memory and computational requirements for solving large linear systems may seriously challenge even the most efficient direct methods available today. The alternative is to use iterative methods, which are the natural choice for large sparse linear systems and, moreover, are easier to implement efficiently on high-performance computers than direct methods.

As is known, there exist three well-known classical iterative methods, i.e., the Jacobi, Gauss-Seidel and successive overrelaxation (SOR) methods, which are fully covered in the excellent books by Varga [6] and Young [7]. To improve the convergence rate of the SOR method, the accelerated overrelaxation (AOR) method was proposed by Hadjidimos [8].

To solve the linear systems (1.1) with the AOR iterative method, based on the structure of the matrix $H$, the matrix $H$ is split as follows:

$$H = I - L - U, \qquad L = \begin{pmatrix} 0 & 0 \\ -C & 0 \end{pmatrix}, \qquad U = \begin{pmatrix} B_1 & -D \\ 0 & B_2 \end{pmatrix}. \qquad (1.2)$$

The AOR iterative method for solving (1.1) is established as follows:

$$x^{(i+1)} = T_{w,r}\, x^{(i)} + w g, \quad i = 0, 1, \ldots, \qquad (1.3)$$

where $w > 0$ and the iteration matrix $T_{w,r}$ is of the following form:

$$T_{w,r} = (I - rL)^{-1}\big[(1-w)I + (w-r)L + wU\big] = \begin{pmatrix} (1-w)I + wB_1 & -wD \\ -w(1-r)C - wrCB_1 & (1-w)I + wB_2 + wrCD \end{pmatrix}$$

and

$$g = \begin{pmatrix} I & 0 \\ -rC & I \end{pmatrix} f.$$

Obviously, if $w = r$, then the AOR method reduces to the SOR method. The spectral radius of the iteration matrix is decisive for the convergence and stability of the method: when it is smaller than 1, the smaller it is, the faster the method converges. To accelerate the convergence rate of the iterative method for solving (1.1), preconditioning is often used; that is, one solves

$$PHx = Pf,$$

where the preconditioner $P$ is a non-singular matrix.
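As a concrete illustration of the splitting (1.2) and the closed form of $T_{w,r}$ in (1.3), the following sketch builds the iteration matrix numerically. The matrix data and the helper name `aor_iteration_matrix` are illustrative assumptions, not from the paper; note that since $L^2 = 0$ here, $(I - rL)^{-1} = I + rL$, so the block formula can be checked directly.

```python
import numpy as np

def aor_iteration_matrix(B1, B2, C, D, w, r):
    """T_{w,r} = (I - rL)^{-1} [ (1-w)I + (w-r)L + wU ] for the splitting
    H = I - L - U with L = [[0,0],[-C,0]] and U = [[B1,-D],[0,B2]]."""
    p, q = B1.shape[0], B2.shape[0]
    n = p + q
    I = np.eye(n)
    L = np.zeros((n, n)); L[p:, :p] = -C
    U = np.zeros((n, n)); U[:p, :p] = B1; U[:p, p:] = -D; U[p:, p:] = B2
    return np.linalg.solve(I - r * L, (1 - w) * I + (w - r) * L + w * U)

# illustrative data with the sign pattern used later (B1, B2 >= 0; C, D <= 0)
rng = np.random.default_rng(0)
p, q = 3, 2
B1 = 0.1 * rng.random((p, p))
B2 = 0.1 * rng.random((q, q))
C = -0.1 * rng.random((q, p))
D = -0.1 * rng.random((p, q))
w, r = 0.9, 0.5

T = aor_iteration_matrix(B1, B2, C, D, w, r)
rho = max(abs(np.linalg.eigvals(T)))

# closed-form (2,1) block of T_{w,r}: -w(1-r)C - wr C B1
block21 = -w * (1 - r) * C - w * r * (C @ B1)
```

For this small, diagonally mild instance the spectral radius comes out below 1, so the iteration (1.3) converges.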
If the matrix $PH$ is expressed as

$$PH = \begin{pmatrix} I - \bar B_1 & \bar D \\ \bar C & I - \bar B_2 \end{pmatrix},$$

then the preconditioned AOR method can be defined by

$$x^{(i+1)} = \bar T_{w,r}\, x^{(i)} + w \bar g, \quad i = 0, 1, \ldots, \qquad (1.4)$$

where

$$\bar T_{w,r} = \begin{pmatrix} (1-w)I + w\bar B_1 & -w\bar D \\ -w(1-r)\bar C - wr\bar C \bar B_1 & (1-w)I + w\bar B_2 + wr\bar C \bar D \end{pmatrix} \quad\text{and}\quad \bar g = \begin{pmatrix} I & 0 \\ -r\bar C & I \end{pmatrix} Pf.$$

In this paper, following the idea of Wu and Huang [1], a new preconditioner is proposed to improve the convergence rate of the AOR method. Similar to the work of [1] and [9], we compare the spectral radii of the iteration matrices of the preconditioned and the original methods. The comparison results show that the convergence rate of the preconditioned AOR method is indeed superior to that of the original AOR method, whenever the original AOR method is convergent (see the next section).

For convenience, we shall now briefly explain some of the terminology and lemmas. Let $A = (a_{ij})$ be an $n \times n$ real matrix. By $\mathrm{diag}(A)$ we denote the $n \times n$ diagonal matrix coinciding in its diagonal with that of $A$. For $A = (a_{ij})$, $B = (b_{ij}) \in \mathbb{R}^{n\times n}$, we write $A \geq B$ if $a_{ij} \geq b_{ij}$ holds for all $i, j = 1, 2, \ldots, n$; $A$ is called nonnegative if $A \geq 0$, i.e., $a_{ij} \geq 0$ for all $i, j$. These definitions carry over immediately to vectors by identifying them with $n \times 1$ matrices. $\rho(\cdot)$ denotes the spectral radius of a matrix.

Lemma 1.1 [6] Let $A$ be a nonnegative and irreducible $n \times n$ matrix. Then
1) $A$ has a positive real eigenvalue equal to its spectral radius $\rho(A)$;
2) to $\rho(A)$ there corresponds an eigenvector $x > 0$;
3) $\rho(A)$ is a simple eigenvalue of $A$;
4) $\rho(A)$ increases when any entry of $A$ increases.

Lemma 1.2 [10] Let $A$ be a nonnegative matrix. Then
1) if $\alpha x \leq Ax$ for some nonnegative vector $x$, $x \neq 0$, then $\alpha \leq \rho(A)$;
2) if $Ax \leq \beta x$ for some positive vector $x$, then $\rho(A) \leq \beta$. Moreover, if $A$ is irreducible and if $0 \neq \alpha x \leq Ax \leq \beta x$, with neither equality holding throughout, for some nonnegative vector $x$, then $\alpha < \rho(A) < \beta$ and $x$ is a positive vector.

The outline of this paper is as follows. In Section 2, the spectral radii of the iteration matrices of the original and the preconditioned methods are compared.
In Section 3, a numerical example is presented to illustrate our results.

2. Preconditioned AOR Methods and Comparisons

Now, let us consider the preconditioned linear systems

$$\tilde H x = \tilde f, \qquad (2.1)$$

where $\tilde H = (I + S)H$ and $\tilde f = (I + S)f$ with

$$S = \begin{pmatrix} \tilde S & 0 \\ 0 & 0 \end{pmatrix}.$$

According to [1], $\tilde S \in \mathbb{R}^{p\times p}$ is taken as follows:

$$\tilde S = \begin{pmatrix} 0 & b^{(1)}_{12} & 0 & \cdots & 0 \\ 0 & 0 & b^{(1)}_{23} & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & b^{(1)}_{p-1,p} \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$

Naturally, we assume that at least one of the entries of $\tilde S$ is nonzero. By simple computations, we obtain

$$\tilde H = \begin{pmatrix} (I + \tilde S)(I - B_1) & (I + \tilde S)D \\ C & I - B_2 \end{pmatrix} = \begin{pmatrix} I - \tilde B_1 & (I + \tilde S)D \\ C & I - B_2 \end{pmatrix}$$

with $\tilde B_1 = B_1 - \tilde S(I - B_1)$, whose entries are, for $1 \leq i \leq p - 1$,

$$\big(\tilde B_1\big)_{ij} = b^{(1)}_{ij} + b^{(1)}_{i,i+1} b^{(1)}_{i+1,j} \ \ (j \neq i+1), \qquad \big(\tilde B_1\big)_{i,i+1} = b^{(1)}_{i,i+1} b^{(1)}_{i+1,i+1},$$

while the last row of $\tilde B_1$ coincides with that of $B_1$. Similar to (1.2), $\tilde H$ can be expressed as

$$\tilde H = I - \begin{pmatrix} 0 & 0 \\ -C & 0 \end{pmatrix} - \begin{pmatrix} \tilde B_1 & -(I + \tilde S)D \\ 0 & B_2 \end{pmatrix}.$$

Then the preconditioned AOR method for (2.1) is defined as follows:

$$x^{(i+1)} = \tilde T_{w,r}\, x^{(i)} + w \tilde g, \quad i = 0, 1, \ldots, \quad w > 0, \qquad (2.2)$$

where

$$\tilde T_{w,r} = \begin{pmatrix} (1-w)I + w\tilde B_1 & -w(I + \tilde S)D \\ -w(1-r)C - wrC\tilde B_1 & (1-w)I + wB_2 + wrC(I + \tilde S)D \end{pmatrix} \quad\text{and}\quad \tilde g = \begin{pmatrix} I & 0 \\ -rC & I \end{pmatrix} \tilde f.$$

The following theorem is given by comparing the spectral radii of the iteration matrix $\tilde T_{w,r}$ and the original iteration matrix $T_{w,r}$.

Theorem 2.1 Let the coefficient matrix $H$ be irreducible, $B_1 \geq 0$ with $\mathrm{diag}(B_1) \neq 0$, $B_2 \geq 0$, $C \leq 0$, $D \leq 0$, $0 < w \leq 1$ and $0 \leq r \leq 1$. Then
1) $\rho(\tilde T_{w,r}) < \rho(T_{w,r})$ if $\rho(T_{w,r}) < 1$;
2) $\rho(\tilde T_{w,r}) > \rho(T_{w,r})$ if $\rho(T_{w,r}) > 1$.

Proof. By simple computations, we obtain

$$T_{w,r} = \begin{pmatrix} (1-w)I + wB_1 & -wD \\ -w(1-r)C - wrCB_1 & (1-w)I + wB_2 + wrCD \end{pmatrix}. \qquad (2.3)$$

Clearly, if the matrix $H$ satisfies $B_1 \geq 0$, $B_2 \geq 0$, $C \leq 0$ and $D \leq 0$ with $0 < w \leq 1$ and $0 \leq r \leq 1$, then $T_{w,r} \geq 0$, since $-CB_1 \geq 0$ and $CD \geq 0$. Since the matrix $H$ is irreducible, by observing the structure of (2.3) it is not difficult to see that the matrix $T_{w,r}$ is irreducible. Similarly, the matrix $\tilde T_{w,r}$ is nonnegative and irreducible when $\mathrm{diag}(B_1) \neq 0$. By Lemma 1.1, there is a positive vector $x$ such that

$$T_{w,r}\, x = \lambda x,$$

where $\lambda = \rho(T_{w,r})$. Obviously, $\lambda = 1$ is impossible: since $T_{w,r} = I - w(I - rL)^{-1}H$, the equality $T_{w,r}x = x$ would force $Hx = 0$, i.e., the matrix $H$ would be singular.
So we will mainly discuss two cases: $\lambda < 1$ and $\lambda > 1$.

Case 1: $\lambda < 1$. Since

$$\tilde T_{w,r} - T_{w,r} = \begin{pmatrix} -w\tilde S(I - B_1) & -w\tilde S D \\ wrC\tilde S(I - B_1) & wrC\tilde S D \end{pmatrix} = \begin{pmatrix} \tilde S & 0 \\ -rC\tilde S & 0 \end{pmatrix}\big(T_{w,r} - I\big),$$

we get

$$\tilde T_{w,r}x - T_{w,r}x = \begin{pmatrix} \tilde S & 0 \\ -rC\tilde S & 0 \end{pmatrix}\big(T_{w,r} - I\big)x = (\lambda - 1)\begin{pmatrix} \tilde S & 0 \\ -rC\tilde S & 0 \end{pmatrix} x.$$

Since $\tilde S \geq 0$, $\tilde S \neq 0$ and $-rC\tilde S \geq 0$, while $x > 0$, we get

$$\begin{pmatrix} \tilde S & 0 \\ -rC\tilde S & 0 \end{pmatrix} \geq 0 \quad\text{and}\quad \begin{pmatrix} \tilde S & 0 \\ -rC\tilde S & 0 \end{pmatrix} x \geq 0, \ \neq 0.$$

If $\lambda < 1$, then $\tilde T_{w,r}x - T_{w,r}x \leq 0$ but is not equal to the zero vector, i.e., $\tilde T_{w,r}x \leq \lambda x$ with $\tilde T_{w,r}x \neq \lambda x$. By Lemma 1.2, we get $\rho(\tilde T_{w,r}) < \rho(T_{w,r})$. That is, 1) holds. Similarly, 2) holds when $\lambda > 1$, which completes the proof. □

It is well known that when $w = r$, the AOR iteration reduces to the SOR iteration. The following corollary is easily obtained.

Corollary 2.1 Let the coefficient matrix $H$ be irreducible, $B_1 \geq 0$ with $\mathrm{diag}(B_1) \neq 0$, $B_2 \geq 0$, $C \leq 0$, $D \leq 0$ and $0 < w \leq 1$. Then
1) $\rho(\tilde T_w) < \rho(T_w)$ if $\rho(T_w) < 1$;
2) $\rho(\tilde T_w) > \rho(T_w)$ if $\rho(T_w) > 1$.

Next, we consider the following preconditioners. Let the matrix $S$ in (2.1) be defined by

$$S = \begin{pmatrix} 0 & 0 \\ \bar S & 0 \end{pmatrix}.$$

There exist the following three forms for $\bar S \in \mathbb{R}^{(n-p)\times p}$:

1) if $n - p < p$, then

$$\bar S = \begin{pmatrix} \mathrm{diag}\big({-c_{11}}, {-c_{22}}, \ldots, {-c_{n-p,n-p}}\big) & 0 \end{pmatrix};$$

2) if $n - p = p$, then

$$\bar S = \mathrm{diag}\big({-c_{11}}, {-c_{22}}, \ldots, {-c_{n-p,n-p}}\big);$$

3) if $n - p > p$, then

$$\bar S = \begin{pmatrix} \mathrm{diag}\big({-c_{11}}, {-c_{22}}, \ldots, {-c_{pp}}\big) \\ 0 \end{pmatrix}.$$

Naturally, we assume that at least one of the entries of $\bar S$ is nonzero.
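Both preconditioners of this section can be exercised numerically. The sketch below is illustrative only (random data and assumed helper names, not the paper's experiment): it builds $P = I + S$ for the superdiagonal-of-$B_1$ choice and for the diagonal $\bar S$ choice above, reads the blocks of $PH$ back off, and compares the spectral radii of the resulting AOR iteration matrices.

```python
import numpy as np

def aor_matrix(B1, B2, C, D, w, r):
    # AOR iteration matrix for H = [[I-B1, D],[C, I-B2]] with the splitting
    # H = I - L - U, L = [[0,0],[-C,0]], U = [[B1,-D],[0,B2]]
    p, q = B1.shape[0], B2.shape[0]
    n = p + q
    I = np.eye(n)
    L = np.zeros((n, n)); L[p:, :p] = -C
    U = np.zeros((n, n)); U[:p, :p] = B1; U[:p, p:] = -D; U[p:, p:] = B2
    return np.linalg.solve(I - r * L, (1 - w) * I + (w - r) * L + w * U)

def radius(M):
    return max(abs(np.linalg.eigvals(M)))

# illustrative data with the sign pattern of the comparison theorems
rng = np.random.default_rng(0)
p = 4; n = 2 * p                                    # n - p = p for simplicity
B1 = 0.05 * rng.random((p, p)) + 0.05 * np.eye(p)   # B1 >= 0, diag(B1) != 0
B2 = 0.05 * rng.random((p, p))                      # B2 >= 0
C = -0.05 * rng.random((p, p))                      # C <= 0
D = -0.05 * rng.random((p, p))                      # D <= 0
H = np.block([[np.eye(p) - B1, D], [C, np.eye(p) - B2]])

# first preconditioner: S~ carries the superdiagonal entries of B1
S1 = np.zeros((n, n))
S1[np.arange(p - 1), np.arange(1, p)] = B1[np.arange(p - 1), np.arange(1, p)]

# second preconditioner: Sbar = diag(-c_11, ..., -c_pp) in the lower-left block
S2 = np.zeros((n, n))
S2[np.arange(p, n), np.arange(p)] = -C[np.arange(p), np.arange(p)]

w, r = 0.9, 0.5
rhos = []
for S in (np.zeros((n, n)), S1, S2):
    PH = (np.eye(n) + S) @ H                        # preconditioned matrix
    b1 = np.eye(p) - PH[:p, :p]                     # read the blocks back off
    b2 = np.eye(p) - PH[p:, p:]
    c, d = PH[p:, :p], PH[:p, p:]
    rhos.append(radius(aor_matrix(b1, b2, c, d, w, r)))
rho, rho1, rho2 = rhos
```

In runs like this one, where the unpreconditioned radius is below 1, both preconditioned radii come out strictly smaller, in line with the comparison results of this section.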
For the sake of simplicity, we assume that $n - p = p$. Then $\tilde H = (I + S)H$ can be expressed as

$$\tilde H = \begin{pmatrix} I - B_1 & D \\ \bar S(I - B_1) + C & I - B_2 + \bar S D \end{pmatrix},$$

where the entries of the modified blocks are

$$\big(\bar S(I - B_1) + C\big)_{ii} = c_{ii} b^{(1)}_{ii}, \qquad \big(\bar S(I - B_1) + C\big)_{ij} = c_{ij} + c_{ii} b^{(1)}_{ij} \ \ (i \neq j), \qquad \big(\bar S D\big)_{ij} = -c_{ii} d_{ij}.$$

The matrix $\tilde H$ is split as follows:

$$\tilde H = I - \begin{pmatrix} 0 & 0 \\ -\big(\bar S(I - B_1) + C\big) & 0 \end{pmatrix} - \begin{pmatrix} B_1 & -D \\ 0 & B_2 - \bar S D \end{pmatrix}.$$

Then the preconditioned AOR method for (2.1) is of the following form:

$$x^{(i+1)} = \hat T_{w,r}\, x^{(i)} + w \hat g, \quad i = 0, 1, \ldots, \quad w > 0,$$

where

$$\hat T_{w,r} = (1-w)I + wJ + wrK$$

with

$$J = \begin{pmatrix} B_1 & -D \\ -\big(\bar S(I - B_1) + C\big) & B_2 - \bar S D \end{pmatrix}, \qquad K = \begin{pmatrix} 0 & 0 \\ \big(\bar S(I - B_1) + C\big)(I - B_1) & \big(\bar S(I - B_1) + C\big) D \end{pmatrix},$$

and

$$\hat g = \begin{pmatrix} I & 0 \\ -r\big(\bar S(I - B_1) + C\big) & I \end{pmatrix} \tilde f.$$

Similarly, the following theorem and corollary are given by comparing the spectral radii of the iteration matrix $\hat T_{w,r}$ and the original iteration matrix $T_{w,r}$.

Theorem 2.2 Let the coefficient matrix $H$ be irreducible, $B_1 \geq 0$ with $\mathrm{diag}(B_1) \neq 0$, $B_2 \geq 0$, $C \leq 0$, $D \leq 0$, $0 < w \leq 1$ and $0 \leq r \leq 1$. Then
1) $\rho(\hat T_{w,r}) < \rho(T_{w,r})$ if $\rho(T_{w,r}) < 1$;
2) $\rho(\hat T_{w,r}) > \rho(T_{w,r})$ if $\rho(T_{w,r}) > 1$.

Corollary 2.2 Let the coefficient matrix $H$ be irreducible, $B_1 \geq 0$ with $\mathrm{diag}(B_1) \neq 0$, $B_2 \geq 0$, $C \leq 0$, $D \leq 0$ and $0 < w \leq 1$. Then
1) $\rho(\hat T_w) < \rho(T_w)$ if $\rho(T_w) < 1$;
2) $\rho(\hat T_w) > \rho(T_w)$ if $\rho(T_w) > 1$.

3. A Numerical Example

Now let us consider the following example to illustrate the results.

Example 3.1

$$H = \begin{pmatrix} I - B_1 & D \\ C & I - B_2 \end{pmatrix},$$

where $B_1 = \big(b^{(1)}_{ij}\big) \in \mathbb{R}^{p\times p}$, $B_2 = \big(b^{(2)}_{ij}\big) \in \mathbb{R}^{(n-p)\times(n-p)}$, $C = (c_{ij}) \in \mathbb{R}^{(n-p)\times p}$ and $D = (d_{ij}) \in \mathbb{R}^{p\times(n-p)}$ with

$$b^{(1)}_{ii} = \frac{1}{5}, \ i = 1, \ldots, p; \qquad b^{(1)}_{ij} = \frac{1}{40^{\,j-i+1}}, \ j > i; \qquad b^{(1)}_{ij} = \frac{1}{40^{\,i-j+2}}, \ j < i;$$

$$b^{(2)}_{ii} = \frac{1}{4}, \ i = 1, \ldots, n-p; \qquad b^{(2)}_{ij} = \frac{1}{40^{\,j-i+1}}, \ j > i; \qquad b^{(2)}_{ij} = \frac{1}{40^{\,i-j+2}}, \ j < i.$$

Table 1. The spectral radii of the AOR and preconditioned AOR iteration matrices.
n    p    w     r      rho(T_{w,r})   rho(T~_{w,r})   rho(T^_{w,r})
5    3    0.95  0.8    0.1153         0.1081          0.1088
10   6    0.9   0.5    0.2591         0.2513          0.2524
15   5    0.9   0.85   0.3379         0.3351          0.3358
20   10   0.75  0.6    0.5422         0.5376          0.5379
30   18   0.8   0.75   0.7005         0.6960          0.6982
40   30   0.5   0.3    0.9582         0.9575          0.9579
50   25   0.9   0.5    1.1774         1.1796          1.1793
60   20   0.8   0.5    1.3923         1.3957          1.3948

Table 2. The spectral radii of the SOR and preconditioned SOR iteration matrices.

n    p    w = r   rho(T_w)   rho(T~_w)   rho(T^_w)
5    3    0.95    0.1079     0.1003      0.1038
10   5    0.9     0.2321     0.2261      0.2276
15   5    0.8     0.4148     0.4123      0.4127
20   15   0.75    0.5427     0.5345      0.5404
30   18   0.65    0.7622     0.7587      0.7602
40   30   0.6     0.9467     0.9457      0.9464
50   25   0.8     1.1749     1.1773      1.1766
60   20   0.5     1.2452     1.2473      1.2468

The remaining blocks of Example 3.1 are given by

$$c_{ij} = -\frac{1}{40^{\,p+i-j+1}}, \ i = 1, \ldots, n-p, \ j = 1, \ldots, p; \qquad d_{ij} = -\frac{1}{40^{\,p+j-i+1}}, \ i = 1, \ldots, p, \ j = 1, \ldots, n-p.$$

Tables 1 and 2 display the spectral radii of the corresponding iteration matrices for different parameters $w$, $r$ and $p$; here $\tilde T$ and $\hat T$ denote the iteration matrices of the first and second preconditioned methods, respectively. These calculations were performed using Matlab 7.1. From Table 1, it is easy to see that $\rho(\tilde T_{w,r}) < \rho(T_{w,r})$ and $\rho(\hat T_{w,r}) < \rho(T_{w,r})$ whenever $\rho(T_{w,r}) < 1$, while the reverse inequalities hold whenever $\rho(T_{w,r}) > 1$; this is in accordance with Theorems 2.1 and 2.2. From Table 2, it is easy to see that $\rho(\tilde T_w) < \rho(T_w)$ and $\rho(\hat T_w) < \rho(T_w)$ when $\rho(T_w) < 1$, with the reverse inequalities when $\rho(T_w) > 1$; this is in accordance with Corollaries 2.1 and 2.2.

4. Acknowledgements

This research was supported by NSFC (11026040, 11026083).

5. References

[1] S.-L. Wu and T.-Z. Huang, "A Modified AOR-Type Iterative Method for L-Matrix Linear Systems," Australian & New Zealand Industrial and Applied Mathematics Journal, Vol. 49, 2007, pp. 281-292.
[2] J.-Y. Yuan, "Iterative Methods for Generalized Least Squares Problems," Ph.D. Thesis, IMPA, Rio de Janeiro, Brazil, 1993.
[3] J.-Y. Yuan, "Numerical Methods for Generalized Least Squares Problems," Journal of Computational and Applied Mathematics, Vol. 66, No. 1-2, 1996, pp. 571-584. doi:10.1016/0377-0427(95)00167-0
[4] J.-Y. Yuan and A. N. Iusem, "SOR-Type Methods for Generalized Least Squares Problems," Acta Mathematicae Applicatae Sinica, Vol. 16, 2000, pp. 130-139.
doi:10.1007/BF02677673
[5] J.-Y. Yuan and X.-Q. Jin, "Convergence of the Generalized AOR Method," Applied Mathematics and Computation, Vol. 99, No. 1, 1999, pp. 35-46. doi:10.1016/S0096-3003(97)10175-8
[6] R. S. Varga, "Matrix Iterative Analysis," Springer Series in Computational Mathematics 27, Springer-Verlag, Berlin, 2000.
[7] D. M. Young, "Iterative Solution of Large Linear Systems," Academic Press, New York, 1971.
[8] A. Hadjidimos, "Accelerated Overrelaxation Method," Mathematics of Computation, Vol. 32, No. 141, 1978, pp. 149-157. doi:10.1090/S0025-5718-1978-0483340-6
[9] X.-X. Zhou, Y.-Z. Song, L. Wang and Q.-S. Liu, "Preconditioned GAOR Methods for Solving Weighted Linear Least Squares Problems," Journal of Computational and Applied Mathematics, Vol. 224, No. 1, 2009, pp. 242-249. doi:10.1016/j.cam.2008.04.034
[10] A. Berman and R. J. Plemmons, "Nonnegative Matrices in the Mathematical Sciences," SIAM, Philadelphia, 1994.