Journal of Applied Mathematics and Physics
Vol. 05, No. 03 (2017), Article ID: 74875, 13 pages
DOI: 10.4236/jamp.2017.53057

An Efficient Algorithm for the Numerical Computation of the Complex Eigenpair of a Matrix

R. O. Akinola*, K. Musa, I. A. Nyam, S. Y. Kutchin, K. V. Joshua

Department of Mathematics, Faculty of Natural Sciences, University of Jos, Jos, Nigeria

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 28, 2016; Accepted: March 21, 2017; Published: March 24, 2017

ABSTRACT

In computing the desired complex eigenpair of a matrix, we show that by adding Ruhe's normalization to the matrix pencil, we obtain a square nonlinear system of equations. We show that the corresponding Jacobian is non-singular at the root and that, with appropriately chosen initial guesses, Ruhe's normalization with a fixed complex vector not only converges quadratically but also converges faster than earlier algorithms for the numerical computation of the complex eigenpair of a matrix. The mathematical tools used in this work are Newton's and Gauss-Newton's methods.

Keywords:

Quadratic Convergence, Newton’s Method

1. Introduction

In [1], Akinola and Spence considered the problem of computing the eigenpair $(x, \lambda)$ of the generalized complex eigenvalue problem:

$$Dx = \lambda P x, \quad (1)$$

where $x \in \mathbb{C}^n$, $x \neq 0$, $\lambda \in \mathbb{C}$, $D$ is a large real $n \times n$ non-symmetric matrix and $P$ a real symmetric positive definite matrix. After adding the normalization [2]

$$x^H P x = 1, \quad (2)$$

to (1), they obtained a combined system of equations of the form $F(u) = 0$, where $u = [x^H, \lambda]$, given as

$$F(u) = \begin{bmatrix} (D - \lambda P)x \\ -\frac{1}{2} x^H P x + \frac{1}{2} \end{bmatrix} = 0. \quad (3)$$

In trying to solve the nonlinear system (3), two drawbacks were encountered. The first is that if $x$ from $(x, \lambda)$ solves (3), then so does $e^{i\theta}x$ for any $\theta \in [0, 2\pi)$, which means that the solution $x$ is not unique. Secondly, $\bar{x}$ in $x^H = \bar{x}^T$ is not differentiable, since $\bar{x}$ does not satisfy the Cauchy-Riemann equations [3]; this implies that (3) cannot be differentiated and the standard Newton's method cannot be applied. The authors then proposed that the above drawbacks could be overcome, at least for the $P = I$ case. Before the work of Akinola and Spence, Ruhe [4] and Tisseur [5] had added the differentiable normalizations

$$c^H x = 1, \quad (4)$$

and

$$\tau e_s^T x = \tau, \quad (5)$$

where $c$ is a fixed complex vector, $e_s$ is the $s$-th unit vector for some fixed $s$, and $\tau = \max(\|D\|, \|P\|)$. Adding each of these two normalizations to (1), Ruhe and Tisseur obtained the following combined systems of equations:

$$F(u) = \begin{bmatrix} (D - \lambda P)x \\ c^H x - 1 \end{bmatrix} = 0, \quad (6)$$

and

$$F(u) = \begin{bmatrix} (D - \lambda P)x \\ \tau e_s^T x - \tau \end{bmatrix} = 0, \quad (7)$$

which have the corresponding Jacobians

$$F_u(u) = \begin{bmatrix} (D - \lambda P) & -Px \\ c^H & 0 \end{bmatrix}, \quad (8)$$

and

$$F_u(u) = \begin{bmatrix} (D - \lambda P) & -Px \\ \tau e_s^T & 0 \end{bmatrix}. \quad (9)$$
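For illustration, the following minimal sketch (in Python with NumPy; ours, as the paper prescribes no particular language) evaluates the residual (6) and the bordered Jacobian (8) at an iterate $u = (x, \lambda)$. The function name and interface are our own.

```python
import numpy as np

def residual_and_jacobian(D, P, x, lam, c):
    """Evaluate F(u) from (6) and the Jacobian (8) at u = (x, lam).

    D and P are real n-by-n matrices, x a complex n-vector, lam a
    complex scalar, and c the fixed complex vector from (4)."""
    n = len(x)
    A = D - lam * P                                  # the pencil at lam
    F = np.concatenate([A @ x, [np.conj(c) @ x - 1.0]])
    J = np.zeros((n + 1, n + 1), dtype=complex)
    J[:n, :n] = A                                    # d/dx of (D - lam P)x
    J[:n, n] = -(P @ x)                              # d/dlam of (D - lam P)x
    J[n, :n] = np.conj(c)                            # row from c^H x - 1
    return F, J
```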

In this paper, we show using the ABCD lemma that the square Jacobian given by (8) is nonsingular at the root provided the eigenvalue of interest is algebraically simple. The major distinction between the two normalizations is that the two-norm normalization (2) is a natural one, whereas Ruhe's normalization (4) requires a choice of the fixed vector $c$. The Jacobian (9) was shown in [5] to be singular at the root if and only if $\lambda$ is a finite multiple eigenvalue of the pencil $(D, P)$.

In this paper, we compare the numerical performance of the algorithm based on Ruhe's normalization (Algorithm 1, i.e., an application of Newton's method to (6) using the Jacobian (8)) with previous algorithms developed by Akinola et al. in [1] [6] and [7]. All three of those algorithms (Algorithm 2, discussed in [1]; Algorithm 3, described in [6]; Algorithm 4, presented in [7]) are based on the natural two-norm normalization of the eigenvector. We show that, with the same starting guesses and a carefully chosen fixed complex vector $c$, the algorithm based on Ruhe's normalization converges faster than the other three. The plan of this paper is as follows: in Section 2, we use Keller's ABCD Lemma [8] to show in Theorem 2.1 that the Jacobian (8) is nonsingular at the root, and we present the four algorithms. In Section 3, we compare the performance of the four algorithms on three numerical examples. Eigenvalues are used in differential equations to study stability and in complex biological systems to determine eigenvector centrality (see also [9] [10]).

2. Methodology

In this section, we prove the main result of this paper, which states the condition under which the Jacobian matrix (8) (for $P = I$) is non-singular at the root, that is, at $u = (x, \lambda)$. This is followed by a presentation of Algorithm 1, which is Newton's method for solving (6). The remaining algorithms have been discussed extensively in [1] [6] and [7]; to avoid repetition, we refer the interested reader to those articles.

Algorithm 1 involves solving an $(n+1)$ by $(n+1)$ square system of equations using LU factorisation and does not involve splitting the eigenvalue and eigenvector into real and imaginary parts.

Algorithm 2 involves splitting the eigenpair into real and imaginary parts to obtain an under-determined nonlinear system of equations. This results in solving, at each iteration, an under-determined linear system of $(2n+1)$ real equations in $(2n+2)$ real unknowns by the Gauss-Newton method [11], using QR factorisation (a sketch of one such step is given below).
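The following is a minimal sketch (ours) of one such Gauss-Newton step, written under the assumption that the two-norm normalization takes the form $x_1^T x_1 + x_2^T x_2 = 1$; the exact scaling used in [1] may differ, and we call a generic least-squares solver where [1] prescribes an explicit QR factorisation.

```python
import numpy as np

def gauss_newton_step(D, x1, x2, alpha, beta):
    """One Gauss-Newton step for the real-split system: unknowns
    (x1, x2, alpha, beta) with x = x1 + i*x2 and lam = alpha + i*beta;
    equations are the real and imaginary parts of (D - lam*I)x plus the
    normalization x1'x1 + x2'x2 = 1 (2n + 1 equations, 2n + 2 unknowns)."""
    n = len(x1)
    I = np.eye(n)
    A = D - alpha * I
    F = np.concatenate([A @ x1 + beta * x2,          # Re((D - lam I)x)
                        A @ x2 - beta * x1,          # Im((D - lam I)x)
                        [x1 @ x1 + x2 @ x2 - 1.0]])  # normalization
    J = np.zeros((2 * n + 1, 2 * n + 2))
    J[:n, :n], J[:n, n:2 * n] = A, beta * I
    J[:n, 2 * n], J[:n, 2 * n + 1] = -x1, x2
    J[n:2 * n, :n], J[n:2 * n, n:2 * n] = -beta * I, A
    J[n:2 * n, 2 * n], J[n:2 * n, 2 * n + 1] = -x2, -x1
    J[2 * n, :n], J[2 * n, n:2 * n] = 2 * x1, 2 * x2
    d = np.linalg.lstsq(J, -F, rcond=None)[0]        # minimum-norm LS step
    return x1 + d[:n], x2 + d[n:2 * n], alpha + d[2 * n], beta + d[2 * n + 1]
```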

Algorithm 3 also involves splitting the eigenpair into real and imaginary parts, but with the help of an added equation one obtains a square $(2n+2)$ by $(2n+2)$ system of linear equations, which is solved using LU factorisation.

Algorithm 4 is closely related to Algorithm 1 in the sense that both use complex arithmetic. While Algorithm 1 uses a fixed complex vector which does not change throughout the computation, Algorithm 4 uses the natural two-norm normalization, which ensures that the normalizing vector is updated along with the eigenvector at each stage of the computation.

Theorem 2.1. Let $D - \lambda I$ be an $n$ by $n$ matrix and let $x, c \in \mathbb{C}^n$. Let

$$M = \begin{bmatrix} (D - \lambda I) & -x \\ c^H & 0 \end{bmatrix}, \quad (10)$$

be an $(n+1)$ by $(n+1)$ matrix. If $D - \lambda I$ is singular and $\operatorname{rank}(D - \lambda I) = n - 1$, then $M$ is nonsingular if and only if $\psi^H x \neq 0$ for all $\psi \in \mathcal{N}\big((D - \lambda I)^H\big) \setminus \{0\}$ and $c^H \phi \neq 0$ for all $\phi \in \mathcal{N}(D - \lambda I) \setminus \{0\}$, where $\mathcal{N}(D - \lambda I)$ denotes the nullspace of $D - \lambda I$.

Proof: Let $M$ be nonsingular and let $D - \lambda I$ be singular. We first show by contradiction that $c^H \phi \neq 0$: suppose $c^H \phi = 0$ for some $\phi \in \mathcal{N}(D - \lambda I) \setminus \{0\}$, and multiply $M$ from the right by the nonzero vector $[\phi^H, 0]^H$ to yield

$$\begin{bmatrix} (D - \lambda I) & -x \\ c^H & 0 \end{bmatrix} \begin{bmatrix} \phi \\ 0 \end{bmatrix} = \begin{bmatrix} (D - \lambda I)\phi \\ c^H \phi \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \quad (11)$$

Thus we have multiplied the nonsingular matrix $M$ by a nonzero vector and obtained the zero vector, which implies that $M$ is singular, a contradiction; hence $c^H \phi \neq 0$. Similarly, suppose $\psi^H x = 0$ for some $\psi \in \mathcal{N}\big((D - \lambda I)^H\big) \setminus \{0\}$, and multiply $M$ from the left by the nonzero row vector $[\psi^H, 0]$ to obtain

$$\begin{bmatrix} \psi \\ 0 \end{bmatrix}^H \begin{bmatrix} (D - \lambda I) & -x \\ c^H & 0 \end{bmatrix} = \begin{bmatrix} \psi^H (D - \lambda I) & -\psi^H x \end{bmatrix} = \begin{bmatrix} 0^H & 0 \end{bmatrix}.$$

This shows that $M$ is singular, contradicting the nonsingularity of $M$; therefore $\psi^H x \neq 0$.

Conversely, let $D - \lambda I$ be singular with $\operatorname{rank}(D - \lambda I) = n - 1$, and assume $\psi^H x \neq 0$ and $c^H \phi \neq 0$. We want to show that $M$ is nonsingular. If we can show that the vector $[p^H, q]^H$ is zero in

$$\begin{bmatrix} (D - \lambda I) & -x \\ c^H & 0 \end{bmatrix} \begin{bmatrix} p \\ q \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

then $M$ is nonsingular. Expanding the above equation, we obtain

$$(D - \lambda I)p - qx = 0, \quad (12)$$

$$c^H p = 0. \quad (13)$$

Premultiplying (12) by $\psi^H$ and using the fact that $\psi^H (D - \lambda I) = 0^H$ in $\psi^H (D - \lambda I)p - q(\psi^H x) = 0$, we have $q(\psi^H x) = 0$. But by assumption $\psi^H x \neq 0$, hence $q = 0$. With this value of $q$, (12) reduces to $(D - \lambda I)p = 0$, and because $D - \lambda I$ is singular with rank $n - 1$, this implies that $p = \alpha \phi$ for some scalar $\alpha$. Substituting this value of $p$ into (13) gives $\alpha c^H \phi = 0$, from which $\alpha = 0$ is immediate since $c^H \phi \neq 0$. Therefore $p = 0$ and $M$ is nonsingular.
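Theorem 2.1 is easy to probe numerically. The snippet below (our illustration, not part of the original argument) forms $M$ from (10) at the root for a random matrix with a simple eigenvalue and compares an admissible $c$ with one chosen so that $c^H \phi = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
D = rng.standard_normal((n, n))
w, V = np.linalg.eig(D)
lam, phi = w[0], V[:, 0]            # a (generically simple) eigenpair of D

def bordered(c):
    """Form M from (10) at the root x = phi."""
    return np.block([[D - lam * np.eye(n), -phi[:, None]],
                     [np.conj(c)[None, :], np.zeros((1, 1))]])

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c_bad = u - (np.conj(phi) @ u) / (np.conj(phi) @ phi) * phi  # c^H phi = 0
print(np.linalg.cond(bordered(phi)))    # moderate: M is nonsingular
print(np.linalg.cond(bordered(c_bad)))  # enormous: M is numerically singular
```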

Next, we present Algorithm 1 for computing the complex eigenpair of $D$ using complex arithmetic; this algorithm is the main contribution of this paper.

Stop Algorithm 1 as soon as $\|\Delta v^{(k)}\| \leq \text{tol}$.
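In outline, Algorithm 1 is Newton's method applied to (6) with the Jacobian (8). A minimal sketch for the $P = I$ case follows (Python/NumPy; our rendering, with np.linalg.solve supplying the LU-based solve and names of our own choosing).

```python
import numpy as np

def algorithm1(D, x0, lam0, c, tol=1e-12, maxit=50):
    """Newton's method on (6) for P = I: each step solves the bordered
    (n+1)-by-(n+1) system with the Jacobian (8) by LU factorisation."""
    n = len(x0)
    x, lam = x0.astype(complex), complex(lam0)
    for k in range(1, maxit + 1):
        A = D - lam * np.eye(n)
        F = np.concatenate([A @ x, [np.conj(c) @ x - 1.0]])
        J = np.block([[A, -x[:, None]],
                      [np.conj(c)[None, :], np.zeros((1, 1))]])
        dv = np.linalg.solve(J, -F)          # Newton correction Delta v^(k)
        x, lam = x + dv[:n], lam + dv[n]
        if np.linalg.norm(dv) <= tol:        # the stopping rule stated above
            break
    return x, lam, k
```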

Next, we present Algorithm 4 for computing the complex eigenpair of D using complex arithmetic.

3. Numerical Experiments

In this section, we compare the performance of the algorithm obtained by adding Ruhe's normalization (Algorithm 1) with the three other algorithms (Algorithm 2, Algorithm 3 and Algorithm 4) presented in the last section, on three numerical examples. Throughout this section, $w^{(k)} = [x_1^{(k)T}, x_2^{(k)T}]$ and $\lambda^{(k)} = [\alpha^{(k)}, \beta^{(k)}]$.

Example 3.1.

Consider the matrix

$$D = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$$

We compared the performance of the four algorithms on this two by two matrix; the results are presented in Table 1, Table 2, Table 3 and Table 4 respectively. In all four algorithms we used the same initial guesses $\alpha^{(0)} = 6.0 \times 10^{-3}$,

$\beta^{(0)} = 9.9 \times 10^{-1}$, $x^{(0)} = [1, 0]^T + [1, 0]^T i$, and $c \in \mathcal{N}(D + iI)$. It was observed that

Table 1. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the two by two matrix using Algorithm 1.

Table 2. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the two by two matrix using Algorithm 2. Quadratic convergence is shown in columns 4, 6 and 7 for $k = 3$, $k = 4$ and $k = 5$.

Table 3. Values of $\alpha^{(k)}$ and $\beta^{(k)}$ for the two by two matrix using Algorithm 3. Quadratic convergence is shown in columns 4, 6 and 7 for $k = 3, 4$ and $5$.

Table 4. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the two by two matrix using Algorithm 4. Quadratic convergence is shown in columns 3, 5 and 6 for $k = 3, 4$ and $5$.

Figure 1. Distribution of the complex eigenvalues of the 20 by 20 grcar matrix. The $x$-axis is the real axis while the $y$-axis is the imaginary axis.

Algorithm 1 converged after only four iterations, while it took seven iterations for the other three to converge to the eigenvalue $\lambda = i$.
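For illustration, this example can be run through the algorithm1 sketch given in Section 2. The data below follow the text; the sign pattern of $D$ is inferred from the reported eigenvalue $\lambda = i$, and we substitute the generic admissible choice $c = e_1$ (any $c$ with $c^H \phi \neq 0$ is admissible by Theorem 2.1).

```python
import numpy as np

D = np.array([[0.0, 1.0], [-1.0, 0.0]])
x0 = np.array([1.0 + 1.0j, 0.0 + 0.0j])       # x(0) = [1, 0]' + [1, 0]'i
lam0 = 6.0e-3 + 9.9e-1j                       # alpha(0) + i*beta(0)
c = np.array([1.0 + 0.0j, 0.0 + 0.0j])        # generic admissible c (ours)
x, lam, k = algorithm1(D, x0, lam0, c)        # sketch from Section 2
print(lam, k)                                 # lam should be close to 1j
```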

Example 3.2.

The grcar matrix [12] [13] is a non-symmetric matrix with sensitive eigenvalues, defined by

$$D(i,j) = \begin{cases} -1, & \text{if } i = j + 1, \\ 1, & \text{if } i \leq j \text{ and } j \leq i + k, \\ 0, & \text{otherwise}. \end{cases}$$
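A direct transcription of this definition into code reads as follows (ours; with the default $k = 3$ it agrees with MATLAB's gallery('grcar', n)).

```python
import numpy as np

def grcar(n, k=3):
    """n-by-n Grcar matrix: -1 on the subdiagonal, 1 on the diagonal
    and on the first k superdiagonals, 0 elsewhere."""
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j + 1:
                D[i, j] = -1.0
            elif i <= j <= i + k:
                D[i, j] = 1.0
    return D

print(np.linalg.eigvals(grcar(20)))   # the spectrum plotted in Figure 1
```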

Figure 1 shows the distribution of the complex eigenvalues of the twenty by twenty grcar matrix in the complex plane. All four algorithms discussed in the last section converged to the eigenvalue $\lambda = 1.58207 + 6.43690 \times 10^{-1} i$ after 12 iterations with the same starting guesses, as shown in Table 5, Table 6, Table 7 and Table 8. However, unlike the first example, in which Algorithm 1 converged faster than the other three, that was not the case here, possibly due to the sensitivity of the eigenvalues.

Figure 2. Distribution of the complex eigenvalues of the 200 by 200 bwm200.mtx matrix. The $x$-axis is the real axis while the $y$-axis is the imaginary axis.

Example 3.3.

Consider the 200 by 200 matrix $D$, bwm200.mtx, from the Matrix Market library [14]. It is the discretised Jacobian of the Brusselator wave model for a chemical reaction. The resulting eigenvalue problem with $P = I$ was also studied in [9], and we are interested in finding the rightmost eigenvalue of $D$, which is closest to the imaginary axis, and its corresponding eigenvector. Figure 2 shows the distribution of the complex eigenvalues of the matrix.
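The matrix can be read with SciPy's Matrix Market reader, as in the sketch below (ours; it assumes bwm200.mtx has already been downloaded from [14] into the working directory).

```python
import numpy as np
from scipy.io import mmread

D = mmread("bwm200.mtx").toarray()        # 200-by-200 real nonsymmetric
w = np.linalg.eigvals(D)
rightmost = max(w, key=lambda z: z.real)  # compare with Table 9
print(D.shape, rightmost)
```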

For this example, in all four algorithms we took $\alpha^{(0)} = 0.0$ and $\beta^{(0)} = 2.5$, in line with [9], and took $x_1^{(0)} = \frac{1}{2}\mathbf{1}$, $x_2^{(0)} = \frac{\sqrt{3}}{2}\mathbf{1}$ and $c = \mathbf{1} + \frac{1}{2}\mathbf{1} i$, where $\mathbf{1}$ is

Table 5. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the twenty by twenty grcar matrix using Algorithm 1. Quadratic convergence is shown in columns 3 and 5 for $k = 10$ and 11.

Table 6. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the twenty by twenty grcar matrix using Algorithm 2. Quadratic convergence is shown in columns 4, 5 and 6 for $k = 9, 10$ and 11.

Table 7. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the twenty by twenty grcar matrix using Algorithm 3. Quadratic convergence is shown in columns 4, 5 and 6 for $k = 9, 10$ and 11.

Table 8. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the twenty by twenty grcar matrix using Algorithm 4. Quadratic convergence is shown in columns 4, 5 and 6 for $k = 9, 10$ and 11.

the vector of all ones. The results of the numerical experiments are tabulated in Tables 9-12 respectively. We observed that while it took Algorithm 1, with a fixed complex vector, six iterations to converge to the desired eigenvalue $1.81999 \times 10^{-5} + 2.13950i$, as shown in Table 9, it took eight, ten and eight iterations for Algorithm 2 (Table 10), Algorithm 3 (Table 11) and Algorithm 4 (Table 12) respectively to achieve the same result. This shows that Algorithm 1 converged faster than the others.

As shown in Table 1 and Table 9, an application of Algorithm 1 with a close enough initial guess showed faster convergence to the eigenvalue of interest than the previous algorithms discussed in [1] [6] and [7], namely Algorithm 2, Algorithm 3 and Algorithm 4 respectively.

Table 9. Values of $\alpha^{(k)} + i\beta^{(k)}$ for the 200 by 200 matrix of Example 3.3 using Algorithm 1. Columns 4 and 5 show that the results converged quadratically for $k = 1, 2, 3$ and 4.

Table 10. Values of $\alpha^{(k)}$ and $\beta^{(k)}$ for the 200 by 200 matrix of Example 3.3 using Algorithm 2. Columns 6 and 7 show that the results converged quadratically for $k = 3, 4, 5, 6$ and 7.

Table 11. Values of $\alpha^{(k)}$ and $\beta^{(k)}$ for the 200 by 200 matrix of Example 3.3 using Algorithm 3. Columns 6 and 7 show that the results converged quadratically for $k = 3, 4, 5, 6$ and 7.

Table 12. Values of $\alpha^{(k)}$ and $\beta^{(k)}$ for the 200 by 200 matrix of Example 3.3 using Algorithm 4. Columns 5 and 6 show that the results converged quadratically for $k = 3, 4, 5, 6$ and 7.

4. Conclusion

In this paper, we have shown, using the ABCD Lemma, that the Jacobian obtained by adding Ruhe's normalization to the matrix pencil is non-singular at the root. With a proper choice of the fixed complex vector and an initial guess close to the eigenvalue of interest, we recommend Algorithm 1 for the numerical computation of the desired complex eigenpair of a matrix because of its faster convergence.

Acknowledgements

The authors acknowledge the valuable suggestions of an anonymous referee, which helped improve the final version of this paper. The main part of this work was done while the first author was a Ph.D. student at the University of Bath; he duly acknowledges financial support in the form of a studentship.

Cite this paper

Akinola, R.O., Musa, K., Nyam, I.A., Kutchin, S.Y. and Joshua, K.V. (2017) An Efficient Algorithm for the Numerical Computation of the Complex Eigenpair of a Matrix. Journal of Applied Mathematics and Physics, 5, 680-692. https://doi.org/10.4236/jamp.2017.53057

References

1. Akinola, R.O. and Spence, A. (2014) Two-Norm Normalization for the Matrix Pencil: Inverse Iteration with a Complex Shift. International Journal of Innovation in Science and Mathematics, 2, 435-439.

2. Stewart, G.W. (2001) Matrix Algorithms, Volume II: Eigensystems. SIAM, Philadelphia.

3. Kreyszig, E. (1999) Advanced Engineering Mathematics. John Wiley & Sons Inc., New York.

4. Ruhe, A. (1973) Algorithms for the Nonlinear Eigenvalue Problem. SIAM Journal on Numerical Analysis, 10, 674-689. https://doi.org/10.1137/0710059

5. Tisseur, F. (2001) Newton's Method in Floating Point Arithmetic and Iterative Refinement of Generalized Eigenvalue Problems. SIAM Journal on Matrix Analysis and Applications, 22, 1038-1057. https://doi.org/10.1137/S0895479899359837

6. Akinola, R.O. and Spence, A. (2015) Numerical Computation of the Complex Eigenvalues of a Matrix by Solving a Square System of Equations. Journal of Natural Sciences Research, 5, 144-156.

7. Akinola, R.O. (2015) Computing the Complex Eigenpair of a Large Sparse Matrix in Complex Arithmetic. International Journal of Pure & Engineering Mathematics, 3, 137-158.

8. Keller, H.B. (1977) Numerical Solution of Bifurcation and Nonlinear Eigenvalue Problems. In: Rabinowitz, P., Ed., Applications of Bifurcation Theory, Academic Press, New York, 359-384.

9. Parlett, B.N. and Saad, Y. (1987) Complex Shift and Invert Strategies for Real Matrices. Linear Algebra and Its Applications, 88-89, 575-595. https://doi.org/10.1016/0024-3795(87)90126-1

10. Meerbergen, K. and Roose, D. (1996) Matrix Transformations for Computing Rightmost Eigenvalues of Large Sparse Non-Symmetric Eigenvalue Problems. IMA Journal of Numerical Analysis, 16, 297-346. https://doi.org/10.1093/imanum/16.3.297

11. Deuflhard, P. (2004) Newton Methods for Nonlinear Problems. Springer, Heidelberg, 174-175.

12. Grcar, J. (1989) Operator Coefficient Methods for Linear Equations. Technical Report SAND89-8691, Sandia National Laboratories, Albuquerque, New Mexico, Appendix 2.

13. Nachtigal, N.M., Reichel, L. and Trefethen, L. (1992) A Hybrid GMRES Algorithm for Nonsymmetric Linear Systems. SIAM Journal on Matrix Analysis and Applications, 13, 796-825. https://doi.org/10.1137/0613050

14. Boisvert, R., Pozo, R., Remington, K., Miller, B. and Lipman, R. Matrix Market. http://math.nist.gov/MatrixMarket/