Applied Mathematics
Vol. 9, No. 6 (2018), Article ID: 85751, 15 pages
DOI: 10.4236/am.2018.96052

A Poisson Solver Based on Iterations on a Sylvester System

Michael B. Franklin, Ali Nadim

Institute of Mathematical Sciences, Claremont Graduate University, Claremont, CA, USA

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: June 1, 2018; Accepted: June 26, 2018; Published: June 29, 2018

ABSTRACT

We present an iterative scheme for solving Poisson's equation in 2D. Using finite differences, we discretize the equation into a Sylvester system, $AU + UB = F$, involving tridiagonal matrices A and B. The iterations are carried out directly on this Sylvester system, after introducing a deflation-type parameter that enables optimized convergence. Analytical bounds are obtained on the spectral radii of the iteration matrices. Our method is comparable to Successive Over-Relaxation (SOR) and amenable to compact programming via vector/array operations. It can also be implemented within a multigrid framework with considerable improvement in performance, as shown herein.

Keywords:

Poisson’s Equation, Sylvester System, Multigrid

1. Introduction

Poisson's equation $\nabla^2 u = f$, an elliptic partial differential equation [1], was first published in 1813 in the Bulletin de la Société Philomatique by Siméon-Denis Poisson. The equation has since found wide utility in applications such as electrostatics [2], fluid dynamics [3], theoretical physics [4], and engineering [5]. Owing to its expansive applicability in the natural sciences, analytic and efficient approximate solution methods have been sought for nearly two centuries. Analytic solutions to Poisson's equation are rarely available in scientific applications because the forcing or boundary conditions on the system cannot, in general, be represented by means of elementary functions. For this reason, numerical approximations have been developed, dating back to the Jacobi method in 1845. The linear systems arising from these numerical approximations are solved either directly, using methods like Gaussian elimination, or iteratively. To this day, some applications are handled by direct solvers and others by iterative solvers, depending largely on the structure and size of the matrices involved in the computation. The 1950s and 1960s saw an enormous interest in relaxation-type methods, prompted by studies on optimal relaxation and the work of Young, Varga, Southwell, Frankel and others. The books by Varga [6] and Young [7] give a comprehensive guide to iterative methods used in the 1960s and 1970s, and have remained the handbooks used by academics and practitioners alike [8].

The Problem Description

The Poisson equation on a rectangular domain is given by

$\nabla^2 u = f \quad \text{in } \Omega = \{x, y \,|\, 0 \le x \le a,\ 0 \le y \le b\}$ (1)

where $u = u(x, y)$ is to be solved in the 2D domain $\Omega$, and $f(x, y)$ is the forcing function. Typical boundary conditions for this equation are either Dirichlet, where the value of u is specified on the boundary, or Neumann, where the value of the normal derivative is specified on the boundary. These are given mathematically as,

$u = g_D \quad \text{or} \quad \partial u/\partial n \equiv \hat{n} \cdot \nabla u = g_N \quad \text{on } \partial\Omega,$ (2)

where $\hat{n}$ is the outward unit normal along $\partial\Omega$ and $g_D$ and $g_N$ are the function values specified by Dirichlet or Neumann boundary conditions. It is also possible to have mixed boundary conditions, where some edges have Dirichlet and some have Neumann conditions, so long as the problem is well-posed. Furthermore, edges could be subject to Robin boundary conditions of the form $c_1 u + c_2\, \partial u/\partial n = g_R$. Any numerical scheme designed to solve the Poisson equation should be robust in its ability to incorporate any form of boundary condition into the solver. A detailed discussion of boundary condition implementation is given in the Appendix. Discretizing (1) using central differences with equal grid size $\Delta x = \Delta y = h$ leads to an $M \times N$ rectangular array of unknowns U, such that $U_{i,j} \approx u(x_i, y_j)$ (assuming that a and b are both integer multiples of h). This discretization leads to a linear system of the form $AU + UB = F$, the Sylvester equation, which can be solved either directly or iteratively. The direct method utilizes the Kronecker product approach [9], given by

$K\mathbf{u} = \mathbf{f} \quad \text{where} \quad K = \mathrm{kron}(I, A) + \mathrm{kron}(B^T, I),$ (3)

where $\mathbf{u}$ and $\mathbf{f}$ are appropriately ordered $MN \times 1$ column vectors obtained from the $M \times N$ arrays U and F, and K is a sparse $MN \times MN$ matrix. The Kronecker product $\mathrm{kron}(P, Q)$ of any two matrices P and Q is a partitioned matrix whose ijth partition contains the matrix Q multiplied by component $p_{ij}$ of P. Due to the potentially large size of the system given in (3), direct solvers are not the preferred solution approach. Specifically addressed here is an iterative approach to solving the Sylvester equation,

$A U_{\text{int}} + U_{\text{int}} B = F_{\text{int}},$ (4)

where $U_{\text{int}}$ is the $m \times n$ array of interior unknowns (not including the known boundary values when Dirichlet boundary conditions are given) with $m = M - 2$ and $n = N - 2$. Operator matrices A and B are given by the $O(h^2)$ finite difference approximation to the second derivative,

$\frac{1}{h^2}\begin{bmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & 1 \\ & & 1 & -2 \end{bmatrix}.$ (5)

For an array of unknowns $U_{\text{int}} \in \mathbb{R}^{m \times n}$, the operator matrices are of dimension $A_{m \times m}$ and $B_{n \times n}$. The array U is oriented such that the first index i corresponds to the x-direction, and the second index j corresponds to the y-direction. With this orientation, multiplying $U_{\text{int}}$ by the matrix A on the left approximates the second derivative in the x-direction, and multiplying $U_{\text{int}}$ by the matrix B on the right approximates the second derivative in the y-direction.
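To make the discretization concrete, here is a minimal MATLAB sketch (our own illustration, not code from the paper) that builds the operators (5), checks the orientation convention on the test function $u = \sin(\pi x)\sin(\pi y)$, and performs the direct Kronecker solve (3); the grid size is an example value, and we assume the unit square with homogeneous Dirichlet data.

```matlab
% Build the 1/h^2-scaled second-difference operators (5); m = n here.
m = 63; n = 63; h = 1/(m+1);                 % unit square, dx = dy = h
e = ones(m,1);
A = spdiags([e -2*e e], -1:1, m, m)/h^2;     % multiplies on the left:  d2/dx2
B = spdiags([e -2*e e], -1:1, n, n)/h^2;     % multiplies on the right: d2/dy2
x = (1:m)'*h;  y = (1:n)'*h;                 % interior grid points
Uex = sin(pi*x) * sin(pi*y)';                % U(i,j) ~ u(x_i,y_j), zero on boundary
F = A*Uex + Uex*B;                           % discrete Laplacian of Uex
disp(norm(F + 2*pi^2*Uex,'fro')/norm(2*pi^2*Uex,'fro'))  % O(h^2) truncation error
% Direct solve via the Kronecker form (3): K*vec(U) = vec(F).
K = kron(speye(n), A) + kron(B.', speye(m)); % sparse (mn)-by-(mn)
U = reshape(K \ F(:), m, n);                 % recovers Uex to round-off
disp(norm(U - Uex,'fro')/norm(Uex,'fro'))
```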

2. Sylvester Iterative Scheme

Examining (4), it might seem natural to move one term to the right-hand side of the equation to obtain an iterative scheme such as:

$AU + UB = F \;\Rightarrow\; AU = F - UB$

$U^{k+1} = A^{-1}F - A^{-1}U^k B,$ (6)

However, this scheme diverges, and an alternative approach is required to iterate on the Sylvester system. An appropriate method is to break the iteration into two "half-steps" as follows:

1) First half-step: $U^k \to U^*$

$AU = F - UB \;\Rightarrow\; (A - \alpha I + \alpha I)U = F - UB$

$(A - \alpha I)U^* = F - U^k(B + \alpha I)$ (7)

2) Second half-step: $U^* \to U^{k+1}$

$UB = F - AU \;\Rightarrow\; U(B - \beta I + \beta I) = F - AU$

$U^{k+1}(B - \beta I) = F - (A + \beta I)U^*$ (8)

where $U^*$ is some intermediate solution between steps k and $k+1$. Rearranging, this leads to

$\left(I - \frac{1}{\alpha}A\right)U^* = U^k\left(I + \frac{1}{\alpha}B\right) - \frac{1}{\alpha}F, \qquad U^{k+1}\left(I - \frac{1}{\beta}B\right) = \left(I + \frac{1}{\beta}A\right)U^* - \frac{1}{\beta}F.$ (9)

The iterative scheme (9) is similar to the Alternating Direction Implicit (ADI) formulation [10], where Poisson's equation is reformulated to have a pseudo-time dependency,

$\frac{dU}{dt} = AU + UB - F,$ (10)

which achieves the solution of Equation (4) when it reaches steady state. The method is separated into two half-steps: the first goes from time $k$ to $k + 1/2$, treating the x-direction implicitly and the y-direction explicitly; the second goes from time $k + 1/2$ to $k + 1$, treating the y-direction implicitly and the x-direction explicitly. The two half-steps are,

$\frac{U^{k+1/2} - U^k}{\Delta t/2} = AU^{k+1/2} + U^k B - F, \qquad \frac{U^{k+1} - U^{k+1/2}}{\Delta t/2} = AU^{k+1/2} + U^{k+1} B - F,$ (11)

which leads to

$\left(I - \frac{\Delta t}{2}A\right)U^{k+1/2} = U^k\left(I + \frac{\Delta t}{2}B\right) - \frac{\Delta t}{2}F, \qquad U^{k+1}\left(I - \frac{\Delta t}{2}B\right) = \left(I + \frac{\Delta t}{2}A\right)U^{k+1/2} - \frac{\Delta t}{2}F.$ (12)

This iteration procedure looks nearly identical to our Sylvester iterations given in (9), with $\Delta t/2$ replaced by the unknown parameters $1/\alpha$ and $1/\beta$. However, in our formulation there is no pseudo-time dependency introduced. Instead, the eigenvalues of our operator matrices A and B are deflated to produce an iterative scheme that converges optimally, and finding the values of the parameters $\alpha$ and $\beta$ becomes an optimization problem.
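As a minimal sketch of how (9) might be implemented (the function name and the choice to pre-form the shifted operators are ours, and $\alpha = \beta = a$ is taken fixed, anticipating the next sections):

```matlab
% One call performs niter Sylvester sweeps (9) with alpha = beta = a.
% A, B, F are as constructed earlier; the tridiagonal structure keeps
% each sweep at O(m*n) work.
function U = sylvester_sweep(A, B, F, a, U, niter)
  m = size(A,1);  n = size(B,1);
  ML = speye(m) - A/a;                 % implicit operator, first half-step
  MR = speye(n) - B/a;                 % implicit operator, second half-step
  for k = 1:niter
    Ustar = ML \ ( U*(speye(n) + B/a) - F/a );       % first half-step
    U     = ( (speye(m) + A/a)*Ustar - F/a ) / MR;   % second half-step
  end
end
```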

Convergence

After the Sylvester Equation (4) is modified into the iterative system (9), the iterative scheme can be written as a single step by substituting the expression for the intermediate solution $U^*$ into the second step of the iterative process; this yields the single update equation for $U^{k+1}$ given by

$U^{k+1} = \left(I + \frac{1}{\beta}A\right)\left(I - \frac{1}{\alpha}A\right)^{-1} U^k \left(I + \frac{1}{\alpha}B\right)\left(I - \frac{1}{\beta}B\right)^{-1} - \left[\frac{1}{\alpha}\left(I + \frac{1}{\beta}A\right)\left(I - \frac{1}{\alpha}A\right)^{-1} + \frac{1}{\beta}I\right]F\left(I - \frac{1}{\beta}B\right)^{-1}.$ (13)
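A quick numerical sanity check (our own sketch, on generic random data of small example size) confirms that the composed update (13) reproduces the two half-steps (9):

```matlab
% Verify (13) against (9) for arbitrary alpha and beta on generic data.
m = 7; n = 5; al = 2.0; be = 3.0;            % example sizes and parameters
A = randn(m); B = randn(n); F = randn(m,n); U = randn(m,n);
Ustar = (eye(m) - A/al) \ (U*(eye(n) + B/al) - F/al);      % first half-step
U1    = ((eye(m) + A/be)*Ustar - F/be) / (eye(n) - B/be);  % second half-step
PA = (eye(m) + A/be) / (eye(m) - A/al);                    % left factor in (13)
U2 = PA*U*((eye(n) + B/al)/(eye(n) - B/be)) ...
     - (PA/al + eye(m)/be) * F / (eye(n) - B/be);          % single-step (13)
disp(norm(U1 - U2,'fro'))          % ~ machine precision, generically
```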

Assuming that an exact solution $U_{\text{exact}}$ exists that exactly satisfies the linear system (4), i.e. $AU_{\text{exact}} + U_{\text{exact}}B = F$, we define the error between the kth iterate and the exact solution as

$E^k \equiv U^k - U_{\text{exact}}.$ (14)

An update equation for the error follows by substituting $U^k = E^k + U_{\text{exact}}$ into the update Equation (13); the expressions involving the forcing F cancel, and we arrive at

$E^{k+1} = PE^kQ,$ (15)

where the matrices P and Q are given by

$P = \left(I + \frac{1}{\beta}A\right)\left(I - \frac{1}{\alpha}A\right)^{-1}, \qquad Q = \left(I + \frac{1}{\alpha}B\right)\left(I - \frac{1}{\beta}B\right)^{-1}.$ (16)

Denoting the m eigenvalues of $A_{m \times m}$ by $\lambda_k^A$, and the n eigenvalues of $B_{n \times n}$ by $\lambda_k^B$, the corresponding deflated eigenvalues of the iteration matrices P and Q are

$\lambda_k^P = \frac{1 + (\lambda_k^A/\beta)}{1 - (\lambda_k^A/\alpha)} = \frac{\alpha}{\beta}\left(\frac{\beta + \lambda_k^A}{\alpha - \lambda_k^A}\right), \quad k = 1, 2, \ldots, m, \qquad \lambda_k^Q = \frac{1 + (\lambda_k^B/\alpha)}{1 - (\lambda_k^B/\beta)} = \frac{\beta}{\alpha}\left(\frac{\alpha + \lambda_k^B}{\beta - \lambda_k^B}\right), \quad k = 1, 2, \ldots, n.$ (17)

A sufficient condition for convergence of the iterative process is that the spectral radii of both iteration matrices P and Q are less than one,

$\rho(P) \equiv \max_k|\lambda_k^P| < 1 \quad \text{and} \quad \rho(Q) \equiv \max_k|\lambda_k^Q| < 1.$ (18)

The error at each consecutive iteration is decreased (asymptotically) by the product of $\rho(P)$ and $\rho(Q)$,

$E^{k+1} = PE^kQ \;\Rightarrow\; \|E^{k+1}\| \approx \rho(P)\rho(Q)\,\|E^k\| \;\Rightarrow\; \|E^k\| \approx \left(\rho(P)\rho(Q)\right)^k\|E^0\|,$ (19)

where $E^0$ is the initial error. Often in practical applications the exact solution is not known, so the error $E^k$ cannot be computed directly. In this case, the preferred measure in iterative schemes is the residual, which measures the difference between the left- and right-hand sides of the linear system being solved. This will be discussed further in the Results section.
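The contraction estimate (19) can also be checked numerically on a small example (our own sketch; the parameter here is the geometric mean of the extreme eigenvalue magnitudes, anticipating Equation (24) of the next section):

```matlab
% Build dense P and Q from (16) with alpha = beta and compare the predicted
% contraction factor rho(P)*rho(Q) with the observed one-step reduction.
% Since A and B are symmetric, the observed ratio should not exceed it.
m = 30; n = 20; h = 1/(m+1);
e = ones(m,1); A = full(spdiags([e -2*e e], -1:1, m, m))/h^2;
e = ones(n,1); B = full(spdiags([e -2*e e], -1:1, n, n))/h^2;
a = sqrt( max(abs(eig(A))) * min(abs(eig(A))) );  % geometric mean; l = m here
P = (eye(m) + A/a) / (eye(m) - A/a);              % iteration matrix P of (16)
Q = (eye(n) + B/a) / (eye(n) - B/a);              % iteration matrix Q of (16)
rho = max(abs(eig(P))) * max(abs(eig(Q)));        % predicted factor
E = rand(m,n);                                    % arbitrary initial error
fprintf('predicted %.3f, observed %.3f\n', rho, norm(P*E*Q,'fro')/norm(E,'fro'));
```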

3. Finding Optimal Parameters α and β

Finding α and β is an optimization problem for achieving the fastest convergence rate of the Sylvester iterative scheme (9). Given the operator matrices A and B and their respective eigenvalues $\lambda_k^A$ and $\lambda_k^B$, it is feasible to find optimal values of α and β that minimize the spectral radii of the iteration matrices P and Q given in Equations (17). From (15), the error $E^{k+1}$ is found by multiplying by P on the left and Q on the right; thus the convergence is governed by the spectral radii of both P and Q.

Figure 1. Eigenvalues $\lambda^P(\lambda^A; \alpha, \beta)$ and $\lambda^Q(\lambda^B; \alpha, \beta)$ vs. $\lambda^A$ and $\lambda^B$ for given constant α and β. In this figure α > β, so $\rho(P) = \alpha/\beta > 1$, and the scheme will diverge.

Figure 1 shows the eigenvalues of P and Q for arbitrary m and n, given by Equation (17), plotted vs. the eigenvalues of A and B for some fixed parameters α and β. It can be seen that as $\lambda^A$ or $\lambda^B$ get large in magnitude, the values of $\lambda^P$ or $\lambda^Q$ approach α/β and β/α in magnitude, respectively. This implies that if $\alpha \neq \beta$, the high-frequency eigenvalues of P or Q will be greater than one in magnitude, and convergence condition (18) will not be satisfied. This provides the restriction for convergence that,

$\alpha = \beta.$ (20)

This optimal value of $\alpha = \beta$ will henceforth be called $\alpha^*$. It is important to note that whichever of the operators $A_{m \times m}$ or $B_{n \times n}$ has the larger dimension

$l \equiv \max(m, n),$ (21)

also has the larger range of eigenvalues. Figure 1 shows $m > n$ (i.e. $l = m$), so it can be seen that $\min(\lambda^A) < \min(\lambda^B)$ and $\max(\lambda^A) > \max(\lambda^B)$. This property of the eigenvalues is important when calculating an expression for the optimal parameter, which will soon prove highly useful for an adaptive approach to smoothing. Letting $\alpha = \beta = \alpha^*$ in (17) gives the following expression for the eigenvalues of P and Q:

$\lambda_k^P = \frac{\alpha^* + \lambda_k^A}{\alpha^* - \lambda_k^A}, \quad k = 1, 2, \ldots, m, \qquad \lambda_k^Q = \frac{\alpha^* + \lambda_k^B}{\alpha^* - \lambda_k^B}, \quad k = 1, 2, \ldots, n.$ (22)

Finding the optimal parameter $\alpha^*$ is done by considering the error reduction of Sylvester iterations on an arbitrary initial condition $U^0$. Assume that $U^0$ can be decomposed into its constituent error (Fourier) modes, ranging from low-frequency (smooth) to high-frequency (oscillatory) modes. Given that $U^0$ contains error modes of all frequencies, the most conservative method is to choose $\alpha^*$ such that the spectral radii $\rho(P)$ and $\rho(Q)$ are minimized over the full range of frequencies. This ensures that all modes of error are efficiently relaxed, and convergence is governed by the product of the spectral radii.

Figure 2. Eigenvalues $\lambda^P(\lambda^A; \alpha)$ and $\lambda^Q(\lambda^B; \alpha)$ illustrating the quantities involved in computing the optimal parameters $\alpha^*$ and $\alpha_{\text{mg}}$, and their respective optimal smoothing regions $R^*_{\text{smooth}}$ and $R^{\text{mg}}_{\text{smooth}}$. Note that the upper curve illustrates the method of determining the parameter in the multigrid formulation, while the lower curve is for conservative Sylvester iterations.

Referring to the lower curve in Figure 2, the conservative method of determining $\alpha^*$ is to set $L^*_{\min} = L^*_{\max}$, which according to (22) means

$-\left(\frac{\alpha^* + \lambda_{\min}^A}{\alpha^* - \lambda_{\min}^A}\right) = \frac{\alpha^* + \lambda_{\max}^A}{\alpha^* - \lambda_{\max}^A}.$ (23)

Noting that the eigenvalues for all dimensions collapse onto the curves shown in Figure 2, this conservative approach "locks in" the value of the larger operator's spectral radius, thus providing an upper bound for convergence. Figure 2 shows $m > n$, so $\rho(P) > \rho(Q)$, and convergence will be limited by $\rho(P)$. Equation (23) can then be solved for $\alpha^*$, giving

$\alpha^* = \sqrt{\left|\min(\lambda_{\min}^A, \lambda_{\min}^B)\right| \times \left|\max(\lambda_{\max}^A, \lambda_{\max}^B)\right|},$ (24)

where absolute values are introduced as a reminder that the $\lambda^A$, $\lambda^B$ are negative. This value of the parameter $\alpha^*$ most uniformly smooths all frequencies for any arbitrary $U^0$ containing all frequency modes of error. It can be seen that the spectral radii of P and Q shown in Figure 2 occur at either endpoint, and the minimum amplitude occurs near the intersection of the curve with the axis. Varying the parameter $\alpha^*$ in (22) controls the intersection point, and thus creates an effective "optimal smoothing region", denoted $R_{\text{smooth}}$. Modes of error associated with this optimal smoothing region will be damped fastest, which makes Sylvester iterations highly adaptive in nature. This adaptive nature lends itself nicely to a multigrid formulation.

The Sylvester multigrid formulation is based on the philosophy that most iterative schemes, including Sylvester iterations, relax high-frequency modes fastest, leaving low-frequency components relatively unchanged [11]. On all grids traversed by a multigrid V-cycle, the high-frequency modes are eliminated fastest by finding the optimal parameter value $\alpha_{\text{mg}}$ such that $L^{\text{mg}}_{\min} = L^{\text{mg}}_{\text{mid}}$, as shown in the upper curve of Figure 2. The height $L^{\text{mg}}_{\text{mid}}$ is essentially the distance above the axis associated with the approximate "middle" eigenvalue in the range of $\lambda^A$ or $\lambda^B$. This equality gives the following optimal parameter value for the Sylvester multigrid method

$\alpha_{\text{mg}} = \sqrt{\left|\min(\lambda_{\min}^A, \lambda_{\min}^B)\right| \times \left|\min(\lambda_{\text{mid}}^A, \lambda_{\text{mid}}^B)\right|},$ (25)

where, if $m \neq n$, the minimum (i.e., most negative) middle eigenvalue $\lambda_{\text{mid}} \approx (\lambda_{\min} + \lambda_{\max})/2$ is chosen, to shrink the optimal smoothing region $R^{\text{mg}}_{\text{smooth}}$ such that high frequencies are smoothed most effectively. This choice of optimal parameter can be observed in Figure 2 to drastically decrease the magnitude of the $\lambda^P$ and $\lambda^Q$ associated with high frequencies, which significantly enhances relaxation in accordance with the multigrid philosophy.

To find analytical expressions for $\alpha^*$ and $\alpha_{\text{mg}}$, it is necessary to have values for $\lambda^A$ and $\lambda^B$. For Dirichlet boundary conditions, analytical expressions for $\lambda^A$ and $\lambda^B$ are derived below, but for Neumann boundary conditions numerical approaches are necessary to find $\lambda^A$ and $\lambda^B$. The operator matrices A and B are each of tridiagonal form,

$\begin{bmatrix} d_0 & d_1 & & \\ d_1 & d_0 & d_1 & \\ & d_1 & d_0 & d_1 \\ & & d_1 & d_0 \end{bmatrix}_{p \times p}.$ (26)

Tridiagonal matrices with constant diagonals, such as A and B for Dirichlet boundary conditions, have analytical expressions for their eigenvalues given by

$\lambda_k = d_0 + 2d_1\cos\left(\frac{k\pi}{p+1}\right), \quad k = 1, 2, \ldots, p,$ (27)

where p is the arbitrary dimension of the matrix [12]. Neumann boundary conditions alter entries on the upper and lower diagonals of A or B (see the Appendix), so there is no such analytical form for the eigenvalues in the Neumann case. Using (5) and (27) gives the following analytic form of the eigenvalues of the tridiagonal matrices A and B,

$\lambda_k = -\frac{2}{h^2}\left(1 - \cos\left(\frac{k\pi}{p+1}\right)\right), \quad k = 1, 2, \ldots, p,$ (28)

which achieves minimum and maximum values given by

$\lambda_{\min} = -\frac{2}{h^2}\left(1 - \cos\left(\frac{p\pi}{p+1}\right)\right) \quad \text{and} \quad \lambda_{\max} = -\frac{2}{h^2}\left(1 - \cos\left(\frac{\pi}{p+1}\right)\right),$ (29)

respectively. Using (24), (25), and (29) the analytic expressions for optimal parameters for both conservative and multigrid approaches are given by

$\alpha^* = \frac{2}{h^2}\sqrt{\left(1 - \cos\left(\frac{l\pi}{l+1}\right)\right)\left(1 - \cos\left(\frac{\pi}{l+1}\right)\right)}, \qquad \alpha_{\text{mg}} = \frac{\sqrt{2}}{h^2}\sqrt{\left(1 - \cos\left(\frac{l\pi}{l+1}\right)\right)\left(2 - \cos\left(\frac{\pi}{l+1}\right) - \cos\left(\frac{l\pi}{l+1}\right)\right)},$ (30)

where again $l \equiv \max(m, n)$. Having expressions for $\alpha^*$ and $\alpha_{\text{mg}}$ allows $\lambda^P$ and $\lambda^Q$ to be found analytically using (22), which subsequently allows the spectral radii of the iteration matrices P and Q to be calculated. Knowing the spectral radii of P and Q is highly advantageous, as it allows for an a priori analysis of the Sylvester iterative scheme.
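A small MATLAB helper evaluating (30) might read as follows (a sketch under the Dirichlet assumption; the function name is our own):

```matlab
% Closed-form parameter choices (30) for the Dirichlet operators (5).
function [astar, amg] = optimal_alphas(m, n, h)
  l  = max(m, n);                 % larger operator dimension, Equation (21)
  c1 = 1 - cos(pi/(l+1));         % |lambda_max| * h^2 / 2, from (29)
  cl = 1 - cos(l*pi/(l+1));       % |lambda_min| * h^2 / 2, from (29)
  astar = (2/h^2) * sqrt(cl*c1);            % conservative parameter, cf. (24)
  amg   = (sqrt(2)/h^2) * sqrt(cl*(c1+cl)); % multigrid parameter, cf. (25)
end
```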

4. Analysis

The analysis of standard Sylvester iterations describes the error reduction with each consecutive iteration using (19). Having the optimal parameters given by (30) and the eigenvalues of P and Q in (22), the spectral radii can be calculated to be

$\rho(P) = \frac{\left(1 - \cos\left(\frac{m\pi}{m+1}\right)\right) - \sqrt{\left(1 - \cos\left(\frac{l\pi}{l+1}\right)\right)\left(1 - \cos\left(\frac{\pi}{l+1}\right)\right)}}{\left(1 - \cos\left(\frac{m\pi}{m+1}\right)\right) + \sqrt{\left(1 - \cos\left(\frac{l\pi}{l+1}\right)\right)\left(1 - \cos\left(\frac{\pi}{l+1}\right)\right)}}, \qquad \rho(Q) = \frac{\left(1 - \cos\left(\frac{n\pi}{n+1}\right)\right) - \sqrt{\left(1 - \cos\left(\frac{l\pi}{l+1}\right)\right)\left(1 - \cos\left(\frac{\pi}{l+1}\right)\right)}}{\left(1 - \cos\left(\frac{n\pi}{n+1}\right)\right) + \sqrt{\left(1 - \cos\left(\frac{l\pi}{l+1}\right)\right)\left(1 - \cos\left(\frac{\pi}{l+1}\right)\right)}}.$ (31)

Rewriting the last expression of (19), we see that

$\frac{\|E^k\|}{\|E^0\|} \sim \left(\rho(P)\rho(Q)\right)^k.$ (32)

If we want to reduce the error to $\|E^k\| \sim \epsilon\|E^0\|$ and wish to know how many iterations this will take, using (32) we set $(\rho(P)\rho(Q))^k \sim \epsilon$; solving for k, we find it will take

$k \sim \frac{\log(\epsilon)}{\log(\rho(P)\rho(Q))}$ (33)

iterations to reduce the error by the factor $\epsilon$. Here log can be taken with respect to any base, as long as the same base is used in both the numerator and denominator; e.g., the natural log can be used. Recall that the exact solution $U_{\text{exact}}$ of (4) is only an approximate solution of the differential Equation (1) we are actually solving. Due to this, we can only expect accuracy at the level of the truncation error of the approximation. With an $O(h^2)$ method, $U^{\text{exact}}_{i,j}$ differs from $u(x_i, y_j)$ on the order of $h^2$, so we cannot achieve better accuracy than this no matter how well we solve the linear system. Thus, it is practical to take $\epsilon$ proportional to the expected global error, e.g. $\epsilon = Ch^2$ for some fixed C [12].

To calculate the order of work required asymptotically as $h \to 0$ (i.e. $m \to \infty$), using (33) and our choice for $\epsilon$, we see that

$k \sim \frac{\log(C) + 2\log(h)}{\log(\rho(P)\rho(Q))}.$ (34)

The expressions for ρ ( P ) and ρ ( Q ) in (31) contain several cosine terms which can be Taylor expanded about different values. Cosines with arguments like π x can be expanded about x = 1 or x = 0 depending on the form of x, namely

$\cos(\pi x) \sim -1 + \frac{\pi^2}{2}(x-1)^2 + O\left((x-1)^3\right) \quad \text{for } x \to 1, \qquad \cos(\pi x) \sim 1 - \frac{\pi^2}{2}x^2 + O\left(x^3\right) \quad \text{for } x \to 0,$ (35)

where, from (31), the form of x is something like $m/(m+1)$ or $1/(m+1)$, which clearly approach one or zero, respectively, in the limit that $m \to \infty$. Using these expansions, along with the fact that $1/(1-x) \sim 1 + x + O(x^2)$ for $x \ll 1$, to simplify the spectral radii, we arrive at the following

$\rho(P) \sim \rho(Q) \sim 1 - \frac{\pi}{l+1} + \frac{1}{4}\left(\frac{\pi}{l+1}\right)^2,$ (36)

when $m, n \gg 1$. Since $h = 1/(m+1)$, (34) combined with (36) gives the following order of work needed for convergence to within $\epsilon \sim Ch^2$:

$k \sim \frac{-2\log(m+1)}{2\log\left(1 - \frac{\pi}{l+1}\right)} \sim \frac{l}{\pi}\log(m),$ (37)

where only the linear term from (36) is used, and the latter simplified expression follows from the property that $\log(1+x) \sim x + O(x^2)$ for $x \ll 1$. Note that when $m = n$, the order of work for Sylvester iterations is $k \sim (m/\pi)\log(m)$, which is comparable to the work necessary for the Successive Over-Relaxation (SOR) algorithm to solve Poisson's equation [12]. This will be our basis for comparison in the Results section for standard Sylvester iterations.
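For instance, a short script (our own illustration, taking $m = n$ and $C = 1$) tabulates the prediction of (31) and (33) against the asymptotic estimate (37):

```matlab
% Predicted iterations to reach eps ~ h^2, assuming m = n (so l = m).
for m = [32 64 128 256]
  l = m; h = 1/(m+1);
  s   = sqrt((1-cos(l*pi/(l+1))) * (1-cos(pi/(l+1))));
  rho = ((1-cos(m*pi/(m+1))) - s) / ((1-cos(m*pi/(m+1))) + s);  % (31)
  k   = log(h^2) / log(rho^2);               % from (33) with eps = h^2
  fprintf('m = %4d: rho = %.4f, k ~ %5.0f, (m/pi)log(m) = %5.0f\n', ...
          m, rho, k, m/pi*log(m));
end
```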

5. Results

Problems solved by Sylvester iterations can, in general, be written in shorthand as $LU = F$, where L is a linear operator; in the case of Poisson's equation, L is the Laplacian operator. As an error measure, the discrete 2-norm of the residual, $r \equiv F - LU$, can be computed at each iteration. This number provides the stopping criterion for our iterative schemes, namely the iterations are run until

$\|r^{(k)}\| = \|F - LU^{(k)}\| < \text{tol} \times \|r^{(0)}\|,$ (38)

where $r^{(0)}$ is the initial residual and tol is the tolerance. The tolerance is set near machine precision, $\text{tol} \sim 10^{-16}$, to illustrate the asymptotic convergence rate,

$q^{(k)} = \frac{\|r^{(k)}\|}{\|r^{(k-1)}\|};$ (39)

in practice, however, the discretization error $O(h^2)$ is the best accuracy that can be expected. These numerical results were obtained using MATLAB on a 1.5 GHz Mac PowerPC G4. The model problem that is solved is given by

$\nabla^2 u = -2\left[y^2(1 - 6x^2)(1 - y^2) + x^2(1 - 6y^2)(1 - x^2)\right] \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega,$ (40)

where $\Omega = \{x, y \,|\, 0 \le x \le 1,\ 0 \le y \le 1\}$, and whose exact solution

$u_{\text{exact}}(x, y) = (x^2 - x^4)(y^4 - y^2),$ (41)

is known, so errors can be computed [11]. This model problem is used to show the performance of both standard and multigrid Sylvester iterations. In all cases, the initial guess $U^{(0)}$ of the iterative scheme is a normalized random array and can be assumed to contain all modes of error.
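An end-to-end sketch for this model problem (again our own illustrative code, with example grid size and tolerance) using standard Sylvester iterations and the stopping rule (38) could look as follows:

```matlab
% Solve the model problem (40)-(41) with standard Sylvester iterations.
m = 63; n = 63; h = 1/(m+1);                 % interior grid on the unit square
[x, y] = ndgrid((1:m)*h, (1:n)*h);           % x indexes rows, y indexes columns
F = -2*( y.^2.*(1-6*x.^2).*(1-y.^2) + x.^2.*(1-6*y.^2).*(1-x.^2) );
e = ones(m,1); A = spdiags([e -2*e e], -1:1, m, m)/h^2;
e = ones(n,1); B = spdiags([e -2*e e], -1:1, n, n)/h^2;
l = max(m,n);
astar = (2/h^2)*sqrt((1-cos(l*pi/(l+1)))*(1-cos(pi/(l+1))));   % from (30)
U = rand(m,n); U = U/norm(U,'fro');          % normalized random initial guess
r0 = norm(F - (A*U + U*B),'fro');  tol = 1e-10;
ML = speye(m) - A/astar;  MR = speye(n) - B/astar;
for k = 1:20000
  Ustar = ML \ (U*(speye(n) + B/astar) - F/astar);   % first half-step of (9)
  U     = ((speye(m) + A/astar)*Ustar - F/astar) / MR;   % second half-step
  if norm(F - (A*U + U*B),'fro') < tol*r0, break; end    % stopping rule (38)
end
Uex = (x.^2 - x.^4).*(y.^4 - y.^2);          % exact solution (41)
fprintf('iterations %d, max error %.2e\n', k, max(abs(U(:) - Uex(:))));
```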

5.1. Standard Sylvester Iterations

For comparison, standard Sylvester iterations were tested against Successive Over-Relaxation (SOR) with Chebyshev acceleration (see e.g. [13]). In SOR with Chebyshev acceleration, one uses odd-even ordering of the grid and changes the relaxation parameter $\omega$ at each half-step so that it converges to the optimal relaxation parameter. The results are shown in Table 1. It is important to note that SOR iterations involve no matrix inversions whereas Sylvester iterations do, so the CPU time measure may not be an entirely fair gauge for this particular comparison. From these results, it is clear that standard Sylvester iterations are comparable to SOR and, in most cases, converge to within tolerance in fewer iterations than SOR. An unfortunate artifact of standard iterative schemes is that as the system size increases, so does the spectral radius governing the convergence. This can be observed in Table 1, as the asymptotic convergence rates $q_{\text{sylvester}}$ and $q_{\text{sor}}$ of each method steadily increase, thus requiring greater numbers of iterations to solve within tolerance. The number of Sylvester iterations required to converge is consistent with the predicted number of iterations given by Equation (33) with $\epsilon = \text{tol} = 10^{-16}$.

Table 1. Sylvester iterations vs. Successive Over-Relaxation (SOR).

5.2. Multigrid Sylvester Iterations

In multigrid Sylvester iterations, the performance of the $V(\nu_1, \nu_2)$-cycle using Sylvester smoothing is compared to that using traditional Gauss-Seidel (GS) smoothing. The parameter $\nu_1$ represents the number of smoothing iterations done on each level of the downward branch of the V-cycle, while $\nu_2$ represents the number done on the upward branch. In practice, common choices satisfy $\nu = \nu_1 + \nu_2 \le 3$, so our performance is based on the $V(2,1)$-cycle [14]. In each case, the V-cycle descends to the coarsest grid having gridwidth $h_0 = 1/2$, and in the Sylvester implementation the value of $\alpha_{\text{mg}}$ is calculated to smooth high frequencies most effectively on each grid traversed by the cycle. The results are shown in Table 2. It can be seen that the asymptotic convergence rates $q^{\text{mg}}_{\text{sylvester}}$ and $q^{\text{mg}}_{\text{gs}}$ reach steady values independent of the gridwidth h. This is characteristic of multigrid methods and is what enables their optimality. Comparing the CPU times of the Sylvester multigrid formulation in Table 2 with those of standard Sylvester iterations in Table 1, the multigrid framework is substantially faster (e.g., 30 times faster than standard iterations for a grid of size $256 \times 256$). It can also be seen that the asymptotic convergence rates satisfy $q^{\text{mg}}_{\text{sylvester}} < q^{\text{mg}}_{\text{gs}}$, thus convergence is met in fewer $V(2,1)$-cycles using Sylvester smoothing versus Gauss-Seidel smoothing.

Table 2. Multigrid Sylvester iterations vs. multigrid Gauss-Seidel (GS) iterations.
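One possible compact realization of the $V(\nu_1, \nu_2)$-cycle with Sylvester smoothing is sketched below (our own code, not the authors'); it assumes a unit square with $m = n = 2^p - 1$ interior points so that repeated coarsening reaches $h_0 = 1/2$, and uses linear interpolation with full-weighting restriction:

```matlab
% Recursive V(nu1,nu2)-cycle for A*U + U*B = F with Sylvester smoothing.
% Typical usage: U = zeros(m,n); for c = 1:10, U = vcycle(U,F,h,2,1); end
function U = vcycle(U, F, h, nu1, nu2)
  [m, n] = size(U);
  e = ones(m,1); A = spdiags([e -2*e e], -1:1, m, m)/h^2;
  e = ones(n,1); B = spdiags([e -2*e e], -1:1, n, n)/h^2;
  l = max(m,n);                                % multigrid parameter (30)
  amg = (sqrt(2)/h^2)*sqrt((1-cos(l*pi/(l+1)))*(2-cos(pi/(l+1))-cos(l*pi/(l+1))));
  U = smooth(U, F, A, B, amg, nu1);            % pre-smoothing
  if m > 1 && n > 1
    r  = F - (A*U + U*B);                      % fine-grid residual
    Px = prolong1d((m-1)/2);  Py = prolong1d((n-1)/2);
    rc = (0.5*Px') * r * (0.5*Py);             % full-weighting restriction
    ec = vcycle(zeros((m-1)/2,(n-1)/2), rc, 2*h, nu1, nu2);  % coarse-grid error
    U  = U + Px*ec*Py';                        % prolongate and correct
    U  = smooth(U, F, A, B, amg, nu2);         % post-smoothing
  end
end

function U = smooth(U, F, A, B, a, nu)
  % nu Sylvester sweeps (9) with alpha = beta = a.
  m = size(A,1); n = size(B,1);
  ML = speye(m) - A/a;  MR = speye(n) - B/a;
  for k = 1:nu
    Us = ML \ (U*(speye(n) + B/a) - F/a);
    U  = ((speye(m) + A/a)*Us - F/a) / MR;
  end
end

function P = prolong1d(mc)
  % 1D linear interpolation from mc coarse to 2*mc+1 fine interior points.
  P = zeros(2*mc+1, mc);
  for j = 1:mc
    P(2*j-1, j) = 0.5;  P(2*j, j) = 1;  P(2*j+1, j) = 0.5;
  end
end
```

On the $1 \times 1$ coarsest grid the smoother is exact (the deflated eigenvalues (22) vanish there), so this sketch needs no separate direct coarse-grid solve.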

6. Conclusion

Sylvester iterations provide an alternative iterative scheme for solving Poisson's equation that is comparable to SOR in the number of iterations necessary to converge, namely converging to discretization accuracy within $k \sim (m/\pi)\log(m)$ iterations. The true benefit of Sylvester iterations, however, comes from their adaptive ability to smooth any range of error frequencies, making them a natural candidate for smoothing in a multigrid framework. Multigrid $V(2,1)$-cycles using Sylvester smoothing have an asymptotic convergence rate of $q^{\text{mg}}_{\text{sylvester}} = 0.069$ (versus $q^{\text{mg}}_{\text{gs}} = 0.083$ for Gauss-Seidel smoothing) and indicate significant improvement in efficiency over standard Sylvester iterations.

Cite this paper

Franklin, M.B. and Nadim, A. (2018) A Poisson Solver Based on Iterations on a Sylvester System. Applied Mathematics, 9, 749-763. https://doi.org/10.4236/am.2018.96052

References

1. Douglas, C.C., Haase, G. and Langer, U. (2003) A Tutorial on Elliptic PDE Solvers and Their Parallelization. Society for Industrial and Applied Mathematics, Philadelphia. https://doi.org/10.1137/1.9780898718171

2. Feig, M., Onufriev, A., Lee, M.S., Im, W., Case, D.A. and Brooks III, C.L. (2003) Performance Comparison of Generalized Born and Poisson Methods in the Calculation of Electrostatic Solvation Energies for Protein Structures. Journal of Computational Chemistry, 25, 265-284. https://doi.org/10.1002/jcc.10378

3. Ravoux, J.F., Nadim, A. and Haj-Hariri, H. (2003) An Embedding Method for Bluff Body Flows: Interactions of Two Side-by-Side Cylinder Wakes. Theoretical and Computational Fluid Dynamics, 16, 433-466. https://doi.org/10.1007/s00162-003-0090-4

4. Trellakis, A., Galick, A.T., Pacelli, A. and Ravaioli, U. (1997) Iteration Scheme for the Solution of the Two-Dimensional Schrödinger-Poisson Equations in Quantum Structures. Journal of Applied Physics, 81, 7880-7884. https://doi.org/10.1063/1.365396

5. Saraniti, M., Rein, A., Zandler, G., Vogl, P. and Lugli, P. (1996) An Efficient Multigrid Poisson Solver for Device Simulations. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 15, 141-150. https://doi.org/10.1109/43.486661

6. Varga, R.S. (1962) Matrix Iterative Analysis. Prentice-Hall, New Jersey.

7. Young, D.M. (1971) Iterative Solution of Large Linear Systems. Academic Press, New York.

8. Brezinski, C. and Wuytack, L. (2001) Numerical Analysis: Historical Developments in the 20th Century. Elsevier Science B.V., Netherlands.

9. Van Loan, C.F. (2000) The Ubiquitous Kronecker Product. Journal of Computational and Applied Mathematics, 123, 85-100. https://doi.org/10.1016/S0377-0427(00)00393-9

10. Peaceman, D.W. and Rachford, H.H. (1955) The Numerical Solution of Parabolic and Elliptic Differential Equations. Journal of the Society for Industrial and Applied Mathematics, 3, 28-41. https://doi.org/10.1137/0103003

11. Briggs, W.L., Henson, V.E. and McCormick, S.F. (2000) A Multigrid Tutorial. 2nd Edition, Society for Industrial and Applied Mathematics, Philadelphia. https://doi.org/10.1137/1.9780898719505

12. LeVeque, R.J. (2007) Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems. Society for Industrial and Applied Mathematics, Philadelphia. https://doi.org/10.1137/1.9780898717839

13. Press, W., Vetterling, W., Teukolsky, S. and Flannery, B. (1992) Numerical Recipes in Fortran. 2nd Edition, Cambridge University Press, New York.

14. Trottenberg, U., Oosterlee, C. and Schuller, A. (2001) Multigrid. Elsevier Academic Press, London.

Appendix

1) Boundary condition implementation

Solving the Poisson equation using Sylvester iterations lends itself nicely to boundary condition implementation. Dirichlet boundary conditions of the form

$u(0, y) = u_1(y), \quad u(a, y) = u_2(y), \quad u(x, 0) = u_3(x), \quad u(x, b) = u_4(x),$ (1)

where $u_1$, $u_2$, $u_3$ and $u_4$ are functions describing the edges of U, can be implemented as follows. The unknown values in the Sylvester system given in Equation (4) are the $m \times n$ array of interior values, where $m = M - 2$, $n = N - 2$. It is possible to incorporate Dirichlet boundary conditions directly into this interior system by examining the partitioned matrix product, for example AU, given by

$AU = \begin{bmatrix} A_L & A_{\text{int}} & A_R \end{bmatrix}\begin{bmatrix} u_1^T \\ U_{\text{int}} \\ u_2^T \end{bmatrix} = A_L u_1^T + A_{\text{int}} U_{\text{int}} + A_R u_2^T,$ (2)

where $A_L$ and $A_R$ are the first and last columns of the full operator A and $u_1^T$, $u_2^T$ are the known boundary rows, with UB taking an analogous partitioned form. Multiplying through by the $h^2$ associated with the operator matrices A and B, the partitioned Sylvester system for the internal unknowns gives

$A_L u_1^T + A_{\text{int}} U_{\text{int}} + A_R u_2^T + u_3 B_T^T + U_{\text{int}} B_{\text{int}} + u_4 B_B^T = h^2 F_{\text{int}},$ (3)

where all the matrix-vector products involving boundary data are $m \times n$ outer products. Note that the product AU incorporates Dirichlet boundary conditions in the x-direction, and UB incorporates Dirichlet boundary conditions in the y-direction. Combining the partitioned systems incorporating both A and B matrix multiplications and moving the known boundary terms to the right-hand side yields

$A_{\text{int}} U_{\text{int}} + U_{\text{int}} B_{\text{int}} = h^2 F_{\text{int}} - \left(A_L u_1^T + A_R u_2^T\right) - \left(u_3 B_T^T + u_4 B_B^T\right),$ (4)

which is an $m \times n$ linear system for $U_{\text{int}}$.
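In practice, folding the Dirichlet data into the right-hand side amounts to a few array updates; a sketch (our own, using the $1/h^2$-scaled operators of (5) rather than the $h^2$-multiplied form above, with $u_1$, $u_2$ as n-vectors on $x = 0, a$ and $u_3$, $u_4$ as m-vectors on $y = 0, b$, all assumed given) is:

```matlab
% Fold Dirichlet boundary data into F, per Equation (4) of this appendix.
F(1,:) = F(1,:) - u1.'/h^2;    % known neighbor at x = 0 enters row i = 1
F(m,:) = F(m,:) - u2.'/h^2;    % known neighbor at x = a enters row i = m
F(:,1) = F(:,1) - u3 /h^2;     % known neighbor at y = 0 enters column j = 1
F(:,n) = F(:,n) - u4 /h^2;     % known neighbor at y = b enters column j = n
```

Corner points receive two such contributions, one from each adjacent edge, which the four updates above handle automatically.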

For Neumann boundary conditions, the edge at which the condition is imposed becomes part of the internal unknowns in the Sylvester system. As an example, consider a Neumann boundary condition given by

$\frac{\partial u}{\partial x} = g(y) \quad \text{on } x = 0.$ (5)

Staying within the finite difference formulation of derivatives and letting $g(y_j) \equiv g_j$, this condition can be discretized and approximated with the $O(h^2)$ central difference approximation, which yields

$\left(\frac{\partial u}{\partial x}\right)_{i,j} \approx \frac{U_{i+1,j} - U_{i-1,j}}{2h} = g_j \quad \text{for } i = 0, \quad 0 \le j \le N.$ (6)

For a Neumann condition along the edge $x = 0$, the row vector $u_1^T$ described in (2) becomes a part of the internal array of unknowns $U_{\text{int}}$. In order to implement this finite difference on the edge, we need to introduce a ghost layer with index $i = -1$, and pair Equation (6) with the second derivative operator in AU for $i = 0$. This gives

$\frac{U_{1,j} - U_{-1,j}}{2h} = g_j \;\Rightarrow\; U_{-1,j} = U_{1,j} - 2hg_j, \qquad \left(\frac{\partial^2 u}{\partial x^2}\right)_{0,j} \approx \frac{(U_{1,j} - 2hg_j) - 2U_{0,j} + U_{1,j}}{h^2} = \frac{2U_{1,j} - 2U_{0,j}}{h^2} - \frac{2g_j}{h},$ (7)

which leads to the following partitioned form of AU,

$AU = \frac{1}{h^2}\begin{bmatrix} -2 & \mathbf{2} & & \\ 1 & -2 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 \end{bmatrix}_{(m+1)\times(m+1)} U_{\text{int}} + A_R u_2^T,$ (8)

where the additional term $-2g_j/h$ has been taken to the right-hand side of the Sylvester system such that $F_{0,j}^{\text{int}} \mapsto F_{0,j}^{\text{int}} + 2g_j/h$ for $0 < j < N$. Comparing (8) to (2), the size of $U_{\text{int}}$ changes from $m \times n$ to $(m+1) \times n$, the entry $A_{0,1}$ is changed from 1 to 2 (shown boldface in (8)), and the right-hand side is slightly modified along that edge. Similarly, any edge with a Neumann condition can be handled in this fashion. It is clear that both Dirichlet and Neumann boundary conditions are very simple to implement in the Sylvester iteration method, and only slightly modify the structure of the arrays involved.
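A corresponding sketch of the Neumann modification (again our own illustration; g is the n-vector of normal-derivative data $g(y_j)$ along $x = 0$ and f0 the n-vector of forcing values $f(0, y_j)$, both assumed given) is:

```matlab
% Enlarge the system so the edge x = 0 joins the unknowns, per (7)-(8).
mN = m + 1;                              % interior unknowns plus the edge row
e  = ones(mN,1);
A  = spdiags([e -2*e e], -1:1, mN, mN)/h^2;
A(1,2) = 2/h^2;                          % ghost-layer reflection: 1 -> 2
F  = [f0.'; F];                          % prepend the forcing on the edge row
F(1,:) = F(1,:) + 2*g.'/h;               % ghost-point term moved to the RHS
```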