
A time-spectral method for solution of initial value partial differential equations is outlined. Multivariate Chebyshev series are used to represent all temporal, spatial and physical parameter domains in this generalized weighted residual method (GWRM). The approximate solutions obtained are thus analytical, finite order multivariate polynomials. The method avoids time step limitations. To determine the spectral coefficients, a system of algebraic equations is solved iteratively. A root solver, with excellent global convergence properties, has been developed. Accuracy and efficiency are controlled by the number of included Chebyshev modes and by use of temporal and spatial subdomains. As examples of advanced application, stability problems within ideal and resistive magnetohydrodynamics (MHD) are solved. To introduce the method, solutions to a stiff ordinary differential equation are demonstrated and discussed. Subsequently, the GWRM is applied to the Burger and forced wave equations. Comparisons with the explicit Lax-Wendroff and implicit Crank-Nicolson finite difference methods show that the method is accurate and efficient. Thus the method shows potential for advanced initial value problems in fluid mechanics and MHD.

Initial-value problems are traditionally solved numerically, using finite steps for the temporal domain. The time steps of explicit time advance methods are chosen sufficiently small to be compatible with constraints such as the Courant-Friedrichs-Lewy (CFL) condition [

We here outline a fully spectral method for computing semi-analytical solutions of initial-value partial differential equations [

The GWRM approach consistently employs Chebyshev polynomials for all temporal, spatial and parameter domains for linear and non-linear initial-value problems with arbitrary initial and periodic or non-periodic boundary conditions. As such it appears not to have been extensively pursued earlier. The idea of employing orthogonal sets of basis functions to globally minimize the residuals of spatial spectral expansions (weighted residual methods, WRM) is, however, far from new [5,6]. The GWRM is indeed a Galerkin WRM, using the weak formulation of the differential equation, as in finite element methods (FEM). An important difference from FEM is the use of more advanced trial functions, valid for larger domains. This is of particular importance for “smearing out” small scale fluctuations.

A number of authors have investigated various aspects of spectral methods in time. Some early ideas were not developed further in [

Chebyshev polynomials are used here for the spectral representation in the GWRM. These have several desirable qualities. They converge rapidly to the approximated function, they are real and can easily be converted to ordinary polynomials and vice versa, their minimax property guarantees that they are the most economical polynomial representation, they can be used for nonperiodic boundary conditions (being problematic for Fourier representations) and they are particularly apt for representing boundary layers where their extrema are locally dense [13,14]. The GWRM is fully spectral; all calculations are carried out in spectral space. The powerful minimax property of the Chebyshev polynomials [

The method is acausal, since the time dependence is calculated by a global minimization procedure acting on the time integrated problem. Recall that in standard WRM methods, initial value problems are transformed into a set of coupled ordinary, linear or non-linear, differential equations for the time-dependent expansion coefficients. These are solved using finite differencing techniques.

The structure of the paper is as follows. First, the GWRM is outlined mathematically. We show, in sections 2-4, how integration, differentiation, nonlinearities, as well as initial and boundary conditions are handled in multivariate Chebyshev spectral space for arbitrary solution intervals. Having transformed the initial-value problem, a set of algebraic equations for the coefficients of the Chebyshev expansions results. These are solved using a new, efficient global nonlinear equation solver which is briefly described in section 5. The introduction of temporal and spatial subdomains, for increasing efficiency, is discussed in section 6. As an introductory example, we solve a stiff, time-dependent ordinary differential equation representing flame propagation. For studying accuracy, the nonlinear Burger equation and its exact solution are used. Detailed comparisons with the time differencing Lax-Wendroff (explicit) and Crank-Nicolson (semi-implicit) methods are presented in section 7. A forced wave equation is subsequently used for studying strongly separated time scales. Finally, we demonstrate application of the GWRM to stability problems formulated within the linearised ideal and resistive magnetohydrodynamic (MHD) models. The paper ends with a summary and conclusions.

Consider a system of parabolic or hyperbolic initial-value partial differential equations, symbolically written as

where u is the solution vector, D is a linear or nonlinear matrix operator and f is an explicitly given source (or forcing) term. Note that D may depend on both physical variables (t, x and u) and physical parameters (denoted p) and that f is assumed arbitrary but independent of u. Initial u(t_{0}, x; p) as well as (Dirichlet, Neumann or Robin) boundary u(t, x_{B}; p) conditions are assumed known.

Our aim is to determine a spectral solution of Equation (1), using Chebyshev polynomials [

The solution is approximated by finite, multivariate first kind Chebyshev polynomial series. Recall that Chebyshev polynomials of the first kind (henceforth simply referred to as Chebyshev polynomials) are defined by T_{n}(x) = cos(n arccos x). These functions can be written as real, simple polynomials of degree n and are orthogonal in the interval [−1, 1] over the weight (1 − x²)^{−1/2}. Thus T_{0}(x) = 1, T_{1}(x) = x, T_{2}(x) = 2x² − 1, etc. For simplicity, we will now restrict the discussion to a single equation with one spatial dimension x and one physical parameter p. Schemes for several coupled equations and higher dimensions may subsequently be straightforwardly obtained. Thus,

with

and σ_{z} = (2z − z_{0} − z_{1})/(z_{1} − z_{0}). Here z = t, x or p. Indices “0” and “1” denote lower and upper computational domain boundaries, respectively. The basic Chebyshev polynomials have the limited range of variation [−1, 1] and they require arguments within the same range. The variables τ, ξ and P used here allow for arbitrary, finite computational domains. We note that, at the spatial boundaries, ξ = ±1 and T_{l}(±1) = (±1)^{l}. Primes on summation signs in Equation (3) indicate that each occurrence of a zero coefficient index should render a multiplicative factor of 1/2.
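These properties are simple to verify numerically. The sketch below is a minimal illustration, assuming numpy is available; the coefficients and the domain [0, 10] are invented for the example. It checks the trigonometric definition, the orthogonality under Gauss-Chebyshev quadrature, and the affine map from an arbitrary domain onto [−1, 1]:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# T_n(x) = cos(n*arccos x) equals the polynomial forms T_0 = 1, T_1 = x, T_2 = 2x^2 - 1:
x = np.linspace(-1.0, 1.0, 201)
T = lambda n, z: np.cos(n * np.arccos(z))
assert np.allclose(T(2, x), 2 * x**2 - 1)

# Orthogonality over the weight (1 - x^2)^(-1/2), via Gauss-Chebyshev quadrature
# (nodes cos(pi*(j+1/2)/N), equal weights pi/N):
N = 64
xj = np.cos(np.pi * (np.arange(N) + 0.5) / N)
assert abs((np.pi / N) * np.sum(T(3, xj) * T(5, xj))) < 1e-12       # distinct orders -> 0
assert abs((np.pi / N) * np.sum(T(4, xj)**2) - np.pi / 2) < 1e-12   # same order -> pi/2

# Arbitrary domain [x0, x1]: the affine map sigma(z) = (2z - x0 - x1)/(x1 - x0)
# supplies arguments in the admissible range [-1, 1]:
x0, x1 = 0.0, 10.0
sigma = lambda z: (2 * z - x0 - x1) / (x1 - x0)
a = np.array([1.0, 0.5, -0.25])                  # illustrative coefficients
u = lambda z: C.chebval(sigma(z), a)
assert sigma(x0) == -1.0 and sigma(x1) == 1.0
# At the boundaries the series collapses to sums of a_l*(+/-1)^l:
assert np.isclose(u(x1), a[0] + a[1] + a[2]) and np.isclose(u(x0), a[0] - a[1] + a[2])
```

Note that numpy's Chebyshev routines carry the zeroth coefficient in full, without the primed-sum factor 1/2 of Equation (3); coefficients must be rescaled when moving between the two conventions.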

Next, we use a weighted residual formulation in order to determine the unknown coefficients a_{klm} of the solution ansatz (3).

The weighted residual method (WRM) is based on the idea that the residual, obtained when using the ansatz (3) in Equation (2), is to be minimized globally. The residual is integrated over the entire solution domain in order to produce a set of equations for the coefficients of Equation (3). In the Galerkin WRM approach, the simplifying continuous orthogonality properties of the basis functions are employed by first multiplying the partial differential equation by basis functions of the same kind as those of the ansatz. Weight functions may also be included. Thus, as in FEM, the weak formulation of the pde is used. The Galerkin WRM of the present method, providing the equations for the coefficients, is

where the residual R is defined as

with


The solution ansatz (3) is now inserted into Equation (5). The separation of variables inherent in the Galerkin WRM approach enables the integrations to be performed separately. Consequently

We have used the variable transformation to the standard interval and introduced the Kronecker delta δ_{ik} (being 1 if i = k and 0 otherwise). The integrals over the spatial and parameter domains may be computed likewise, and the first term of Equation (5) becomes

where the indices obey 0 ≤ q ≤ K, 0 ≤ r ≤ L, 0 ≤ s ≤ M. For the second term of Equation (5) the initial condition is expanded as

with

where has been used. We find

Next, we apply the expansions

which yield

and similarly for the coefficients. The final expression for the GWRM coefficients of Equation (3) becomes simply, using Equations (5), (7), (10) and (12),

defined for all 0 ≤ q ≤ K + 1, 0 ≤ r < L_{BC}, 0 ≤ s ≤ M. Note that the number of modes in the temporal domain is extended to K + 1 due to the time integration, that the initial conditions are included at this stage and that the high end spatial modes with r ≥ L_{BC} are saved for implementation of boundary conditions. Since the solution ansatz (3) extends to K temporal modes only, the K + 1 mode needs special attention, as discussed below.

It may also be noted that Equation (13), the equation for the Chebyshev coefficients, is the same as that which would obtain if the residual R, using Equations (3), (8) and (11), is set identically to zero. How can the global WRM solution of Equation (13) correspond to a “local” Chebyshev approximation? The explanation is that Chebyshev approximations are not local in contrast to, for example, Taylor series expansions of ordinary polynomials. They are always defined on an interval (see Equations (3) and (4)). Due to the uniqueness given by their minimax property, “local” or “global” Chebyshev approximations are identical once a domain is defined.

Whether solving a single differential equation or a system of coupled linear/nonlinear differential equations, Equation (13) will constitute a set of coupled linear/nonlinear algebraic equations. Here the coefficients are themselves functions of the coefficients, whereas the coefficients of the forcing term are uniquely determined from f. For example, the case where f equals a constant C is easily worked out using Equation (11). Thus Equation (13) specifies a complete, implicit relation for the coefficients of the solution together with the boundary conditions, which will be discussed in the next section. Methods for efficient iterative solution of Equation (13) will be introduced in section 5. Convergence is controlled by requiring that the sum of the absolute values of the first few coefficients of the solution (3) deviates by less than some small tolerance from one iteration to the next.

By also letting

we may employ the Chebyshev representation of time integration:

If the operator D contains spatial derivatives, the following formulas are used [

a_{l}^{(1)} = (2/c_{l}) [2/(x_{1} − x_{0})] Σ_{n=l+1, n+l odd}^{L} n a_{n} (16)

a_{l}^{(2)} = (1/c_{l}) [2/(x_{1} − x_{0})]² Σ_{n=l+2, n+l even}^{L} n(n² − l²) a_{n} (17)

where c_{0} = 2 and c_{l} = 1 for l > 0.

Note that the coefficient sets a_{l}^{(1)} and a_{l}^{(2)} are valid for the index ranges 0 ≤ l ≤ L − 1 and 0 ≤ l ≤ L − 2, respectively.
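The recurrences behind Equations (16), (17) are implemented, for the standard interval, in common libraries. A minimal sketch (assuming numpy; the interval and coefficients are illustrative) shows the interval scaling factor and the loss of one usable mode per differentiation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Spatial derivatives in Chebyshev space: chebder implements the standard
# recurrence, and the affine map to [-1, 1] contributes the chain-rule factor
# 2/(x1 - x0) per differentiation (passed via scl).
x0, x1 = 0.0, 3.0
scl = 2.0 / (x1 - x0)                  # d(sigma)/dx for the affine map
a = np.array([0.3, -1.2, 0.7, 0.1])    # L = 3, invented values

b1 = C.chebder(a, 1, scl=scl)          # first-derivative coefficients, L -> L-1
b2 = C.chebder(a, 2, scl=scl)          # second-derivative coefficients, L -> L-2
assert len(b1) == len(a) - 1 and len(b2) == len(a) - 2

# Spot check against a central finite difference at an interior point:
s = lambda z: (2 * z - x0 - x1) / (x1 - x0)
xp, h = 0.7, 1e-6
fd = (C.chebval(s(xp + h), a) - C.chebval(s(xp - h), a)) / (2 * h)
assert np.isclose(C.chebval(s(xp), b1), fd, atol=1e-5)
```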

The finite number of spectral terms introduces some subtleties. Although Equation (13) may be solved to order K + 1, the solution ansatz (3) is limited to order K. Assuming that K is the highest temporal mode number used in the computation, the K + 1 term in the sum of Equation (15) must still be retained so that the K order coefficient is correctly calculated for a true K order minimax approximation. Since the integrals in Equation (11) are subsequently truncated to order K, the initial condition will not be exactly reproduced when setting t = t_{0} in the solution Equation (3), because the integrals become non-zero (to order K + 1 they are indeed zero). There is, however, a remedy that saves both minimax accuracy and true representation of the initial condition. By truncating to order K everywhere except on the left hand side of Equation (13), a solution is obtained which exactly reproduces the initial condition for t = t_{0}. We do not give a formal proof here; rather we conjecture (after having studied a few cases) that this solution exactly coincides with the WRM solution produced directly from the original differential form Equation (1). This is not overly surprising, since the information used by both the differential and the integral formulations becomes the same in this case. Both of the procedures discussed are used in this work.
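For the temporal direction, the integration relation can be exercised in the same way. The sketch below (numpy again, with invented coefficients) shows the extension of the spectrum from K to K + 1 modes, the vanishing of the integral at t = t_{0}, and agreement with direct quadrature:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Time integration in Chebyshev space: chebint implements the standard relation
# A_k = (a_{k-1} - a_{k+1})/(2k); integrating from t0 corresponds to lbnd = -1
# in the mapped variable, and the interval contributes the factor (t1 - t0)/2.
t0, t1 = 0.0, 5.0
a = np.array([1.0, 0.4, -0.2])                        # K = 2, invented values
A = C.chebint(a, 1, lbnd=-1.0, scl=(t1 - t0) / 2)     # extended to K + 1 modes
assert len(A) == len(a) + 1

# The integral vanishes at the lower bound t = t0 (mapped to -1) ...
assert np.isclose(C.chebval(-1.0, A), 0.0)
# ... and matches a trapezoidal quadrature over the full interval:
s = np.linspace(-1.0, 1.0, 20001)
fs = C.chebval(s, a)
trap = np.sum(0.5 * (fs[1:] + fs[:-1]) * np.diff(s)) * (t1 - t0) / 2
assert np.isclose(C.chebval(1.0, A), trap, atol=1e-6)
```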

All computations are here performed using the computer mathematics programme Maple, since editing, compilation, linking, execution, plotting and debugging are conveniently performed within the same environment. For some computations, such as when solving Equation (13), analytic differentiation and analytic simplification of expressions, being easily carried out in Maple, are desirable. The GWRM is, however, easily coded in numerically efficient languages like Matlab or Fortran. The computational speed per se is not important for the benchmarking and comparisons with other methods given in section 7; rather it is here more important that all comparisons are carried out within the same computational environment.

We now turn to a discussion of implementation of initial and boundary conditions in the GWRM. Their number depends on the number of equations in (1) and on the order of the spatial derivatives. It has already been shown that initial conditions enter directly into Equation (13).

Boundary conditions should be applied to coefficients at the upper end of the spatial mode spectrum. This can be seen in several ways. From Equations (16), (17) it is clear that the Chebyshev representation of functions differentiated l times is only valid up to order L − l. Thus the coefficient Equations (13) do not apply for higher spatial mode numbers.

Furthermore, it is instructive to consider the flow of information in Chebyshev space, associated with temporal integration and spatial differentiation during iteration of Equation (13); see

whereas for integration, the Chebyshev coefficient at modal point k samples information from modal points both at k − 1 and k + 1. Modes that contribute to the values of the integral or derivatives are marked. Modes outside the computational domain (dashed region and beyond) are defined to give zero contribution. The spatial domain behaviour is consistent with the solution (13) being defined only for spatial orders less than L_{BC}. Thus L − L_{BC} is the number of boundary conditions that should be imposed for all k and m.

In the diagram, modal points used for two boundary conditions are shown (filled squares). It is seen that any error occurring at high spatial mode numbers is amplified through the multiplicative terms in Equations (16), (17), and numerical instability could result. Since Chebyshev coefficients usually converge rapidly with mode numbers and since the boundary conditions are considered known, numerical stability is usually not compromised by this effect.

The initial condition is imposed at k = 0 for all modes with 0 ≤ l < L_{BC} ≤ L and 0 ≤ m ≤ M; these are marked by empty squares. The initial condition may be chosen arbitrarily. If the initial condition requires many, or all, temporal modes for sufficient resolution, care must be taken not to conflict with the boundary conditions applied at high l values. Preferably, initial conditions are chosen so that they satisfy the boundary conditions.

The boundary conditions are implemented into the GWRM in the following way. We choose here to describe the case of Dirichlet boundary conditions; one at each end of a 1D spatial interval. Other types of boundary conditions may straightforwardly be implemented once this case is understood. For systems of equations with many boundary equations, subroutines for handling this are preferably programmed. The boundary conditions are Chebyshev expanded as

We choose to apply discrete Chebyshev interpolation both for initial and boundary conditions since this procedure has precisely the same effect as taking the partial sum of a Chebyshev series expansion and is easily computed [

We thus approximate a function with the finite Chebyshev series

with coefficients

where

The Chebyshev approximation given by Equations (19), (20) can be shown to be, under rather mild conditions, an accurate polynomial approximation of [
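A minimal sketch of the discrete Chebyshev interpolation of Equations (19), (20) may be written as follows (assuming numpy; the function e^x is purely illustrative):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Discrete Chebyshev interpolation: sample f at the Gauss-Chebyshev points
# x_j = cos(pi*(j + 1/2)/N) and project,
#     a_l = (2/N) * sum_j f(x_j) * T_l(x_j),
# with the l = 0 coefficient halved under the primed-sum convention.
def cheb_coeffs(f, N):
    j = np.arange(N)
    xj = np.cos(np.pi * (j + 0.5) / N)
    fj = f(xj)
    return np.array([2.0 / N * np.sum(fj * np.cos(l * np.pi * (j + 0.5) / N))
                     for l in range(N)]), xj

f = lambda x: np.exp(x)               # illustrative smooth function
a, xj = cheb_coeffs(f, 12)

# The partial sum (with a_0 halved) reproduces f exactly at the sample points ...
c = a.copy(); c[0] *= 0.5
assert np.allclose(C.chebval(xj, c), f(xj), atol=1e-12)
# ... and the coefficients decay rapidly for smooth f:
assert abs(a[-1]) < 1e-9
```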

Since the spatial boundaries correspond to ξ = ∓1, we use T_{l}(−1) = (−1)^{l} and T_{l}(1) = 1 to implement the two boundary conditions at the highest modal numbers of the spatial Chebyshev coefficients;

for L being even (upper sign) or odd (lower sign), respectively, with

The Chebyshev coefficients in Equations (8) and (13), for the initial condition expansion, are computed by using the analytical form in a two-dimensional formulation of Equations (19), (20) in physical and parameter space.

It should be noted that a useful simplification occurs for periodic boundary conditions, for which case u(t, x_{0}; p) = u(t, x_{1}; p). This relation is only satisfied for even Chebyshev polynomials. Considerable computation time is thus saved by only computing coefficients with even values of l.
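This can be checked directly from the boundary values of the polynomials:

```python
import numpy as np

# T_l(1) = 1 and T_l(-1) = (-1)^l, so T_l takes equal values at the two
# boundaries only for even l, the simplification used for periodic
# boundary conditions:
for l in range(8):
    Tl = np.cos(l * np.arccos(np.array([-1.0, 1.0])))
    assert (abs(Tl[0] - Tl[1]) < 1e-12) == (l % 2 == 0)
```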

In summary, initial and boundary conditions are initially transformed into Chebyshev space by use of Equations (19), (20) in suitable dimensional forms. All subsequent computations are performed in Chebyshev space, using Equations (13) and equations for the boundary conditions of the form (23). When sufficient accuracy in the coefficients is obtained, the solution Equation (3) is obtained in physical variables. For periodic boundary conditions, coefficients with l odd can be neglected.
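The Dirichlet case can be sketched concretely. In the minimal example below (assuming numpy; the truncation L, the interior coefficients and the boundary values are all invented, and numpy carries a_{0} in full rather than halved) the two highest spatial coefficients are solved for from the boundary values, as described above:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Two Dirichlet conditions imposed on the two highest spatial coefficients,
# using T_l(1) = 1 and T_l(-1) = (-1)^l. Hypothetical truncation L = 5; the
# lower coefficients stand in for values produced by iterating Eq. (13).
L = 5
a = np.zeros(L + 1)
a[:L - 1] = [0.9, -0.3, 0.12, -0.04]   # placeholder interior-iteration values
uL, uR = 0.0, 1.0                      # u(x_0) = 0, u(x_1) = 1 (illustrative)

# Boundary residuals carried by the known (lower) coefficients:
rL = uL - C.chebval(-1.0, a)
rR = uR - C.chebval(+1.0, a)
# Solve a_{L-1}*(-1)^(L-1) + a_L*(-1)^L = rL  and  a_{L-1} + a_L = rR:
Mb = np.array([[(-1.0) ** (L - 1), (-1.0) ** L], [1.0, 1.0]])
a[L - 1], a[L] = np.linalg.solve(Mb, [rL, rR])

assert np.isclose(C.chebval(-1.0, a), uL)
assert np.isclose(C.chebval(+1.0, a), uR)
```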

Nonlinear terms of the operator D are treated fully spectrally in this method, in contrast to in pseudo-spectral schemes [

For the sake of simplicity, we now discuss the handling of a second order nonlinearity in one-dimensional Chebyshev spectral space. Higher dimension cases are easily obtained from that of one dimension. We also omit the arguments of the Chebyshev functions, which are assumed identical.

Thus we wish to determine the coefficients in

A basic and useful relationship is the identity T_{m}T_{n} = (T_{m+n} + T_{|m−n|})/2, which “linearizes” expressions containing simple products of Chebyshev polynomials. Since all variable expansions have the same number of modes within the same space (temporal, physical or parameter space), we may assume a common truncation order M in Equation (24). After some algebra, the following exact expression is determined:

being valid for 0 ≤ k ≤ 2M. Here negative indices are interpreted through the symmetry b_{−j} = b_{j}, and coefficients with index exceeding M are zero. The prime on the summation sign denotes that all occurrences of a zero index of a and of b should render a multiplying factor of 1/2. Note that only the coefficients for the employed spectral space are computed (we thus compute for 0 ≤ k ≤ M); other terms are truncated. The computation is best facilitated by creating a procedure that can be repeatedly called also when computing coefficients for higher order nonlinearities.

The GWRM solution to an initial-value problem is found when the Chebyshev coefficients of Equation (13) are determined to sufficient accuracy. For a linear problem, the coefficients can be obtained by a simple Gaussian elimination procedure. Nonlinear differential equations, however, lead to nonlinear algebraic equations and these may be difficult to solve numerically [

The GWRM is well adapted for solution using iterative methods for two reasons. First, Equation (13) can be immediately cast in the fixed-point iterative form

where the solution vector x here contains the Chebyshev coefficients to be determined, and the vector function reflects the functional forms of the operator D and the source f. Second, all iterative methods require an initial estimate of the solution vector, and if this deviates too much from the solution to be determined, numerical instability results. For the GWRM, the coefficients that correspond to the solution for the entire time domain (the roots of the equations) may deviate strongly from the coefficients of the initial state. One of the simplest and most frequently used solvers, Newton’s method, features a fairly limited domain of convergence [15,17,18], however. Because the initial guess in the case of the GWRM is precisely the initial condition, there always remains the possibility to reduce the solution time interval, for example by using subdomains as described below, so that the solution Chebyshev coefficients become sufficiently close to the initial Chebyshev coefficients. This, incidentally, shows that a GWRM formulation of a well posed initial-value problem in principle always will lead to a solution, although we do not prove this rigorously at present.

Newton’s method is usually globally improved by the addition of line-search methods, in which the iteration step size is decided from the minima of the function, the roots of which are to be determined. Unfortunately, these methods may land on spurious solutions, corresponding to local minima rather than to true zeroes of the function. We have thus developed the semi-implicit root solver (SIR), an iterative method for globally convergent solution of nonlinear equations and systems of nonlinear equations. By “global” is here meant that correct global solutions are usually (but not always) found, with the new feature that they are never local, non-zero minima. It is shown in [

Details of SIR are given in [

or, in matrix form

The system (28) has the same solutions as the original system, but contains free parameters in the form of the components of the matrix A. These parameters are determined by specifying the gradients of the hypersurfaces. The latter gradients control global, quasi-monotonous and superlinear convergence. In SIR, the off-diagonal gradients are set to zero for all m ≠ n, whereas each diagonal gradient is finite and is chosen to produce limited step lengths and quasi-monotonous convergence; it usually approaches zero after some initial iterations. Since Newton’s method is a limiting case of the present method, namely when all gradients vanish, rapid second order convergence is generally approached after some iteration steps. The relationship to Newton’s method fortunately leads to approximately similar numerical work, essentially that of solving a Jacobian matrix equation at each iteration step.
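The scheme can be caricatured as follows. This is a deliberately simplified sketch, not the published SIR algorithm: the parameter alpha plays the role of the diagonal gradient and is here just relaxed toward zero, whereas SIR chooses it adaptively to enforce quasi-monotonous convergence; alpha = 0 recovers Newton's method.

```python
import numpy as np

# Semi-implicit root iteration in the spirit of SIR (simplified sketch):
# step x <- x + (alpha - 1) * J^{-1} f(x); alpha = 0 is the full Newton step,
# finite alpha limits the step length during the first iterations.
def sir_solve(f, jac, x0, alpha0=0.9, iters=60, tol=1e-12):
    x = np.asarray(x0, float)
    alpha = alpha0
    for _ in range(iters):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        step = np.linalg.solve(jac(x), fx)   # one Jacobian solve per iteration
        x = x + (alpha - 1.0) * step
        alpha *= 0.5                         # crude relaxation toward Newton
    return x

# Illustration on a small quadratic system, as arise from Eq. (13):
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
jac = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
root = sir_solve(f, jac, [1.0, 1.0])
assert np.allclose(f(root), 0.0, atol=1e-8)
```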

There are two aspects of the GWRM that are of particular importance for the root solver. First, the algebraic equations to be solved are polynomials of the same order as the nonlinearity of the original differential equations. For example, second order nonlinear pde’s lead to the solution of a system of second order polynomial equations by SIR. Since a large class of problems in physics, formulated as pde’s, feature second (or third) order nonlinearities, there is a potential to devise more efficient versions of SIR where this fact is utilized. Second, most of the computational time in SIR, when applied to the GWRM, is not spent on matrix equation solution, but rather on function evaluation. If the functions are formulated and evaluated more economically, computational efficiency may be improved.

We conclude this section by stating that the SIR algorithm has turned out to be robust and well suited for all GWRM applications tried to date. Further development would focus on the possibility to enhance SIR efficiency by economizing the handling and evaluation of the polynomial Equation (13).

The number of arithmetical operations due to matrix inversion typically features a cubic dependence on the number of unknowns. The root solver, applied to Equation (13), may thus dominate computational time. For the ansatz (3), straightforward application of the GWRM and SIR would require about [(K + 1)(L + 1)(M + 1)]³ operations for each iteration when solving Equation (13). Using LU decomposition rather than matrix inversion, the number of operations is reduced to [

In more complex calculations, efficiency requires the temporal and spatial domains to be separated into subdomains. This enables a linear rather than a cubic dependence of efficiency on, for example, the number of spatial modes applied to the entire domain, given that the number of subdomains is proportional to L. Assume that the temporal and spatial domains are divided into N_{t} and N_{x} subdomains, respectively. This reduces operations to

operations when solving a particular problem, assuming that the same total number of modes is sufficient in both cases. As an example, for K = L = 11, M = 2 and N_{t} = N_{x} = 3 there would be a reduction from about 2.7 × 10^{7} to 3.3 × 10^{5} operations. Additionally, it should be noted that the functions in SIR become substantially less complex when subdomains are used, with resulting reduced computational effort.
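The quoted figures follow from the estimate N³/3 for LU decomposition of an N-unknown system (a back-of-the-envelope check, not a measured operation count):

```python
# Operation-count estimate for the example in the text: K = L = 11, M = 2,
# LU cost ~ N^3/3 for N unknowns, and N_t = N_x = 3 subdomains.
K, L, M = 11, 11, 2
Nt = Nx = 3

N_full = (K + 1) * (L + 1) * (M + 1)               # 432 unknowns, full domain
ops_full = N_full**3 / 3                           # ~2.7e7 operations

N_sub = ((K + 1)//Nt) * ((L + 1)//Nx) * (M + 1)    # 48 unknowns per subdomain
ops_sub = Nt * Nx * N_sub**3 / 3                   # ~3.3e5 operations in total

assert round(ops_full / 1e7, 1) == 2.7
assert round(ops_sub / 1e5, 1) == 3.3
```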

Temporal and spatial subdomains must be implemented differently. For the temporal domain the procedure can be more straightforward. As initial condition for each domain, we here simply use the end state of the previous one. It should be recalled, however, that a GWRM (as well as any WRM) solution is not per se a Chebyshev approximation of the true solution, but rather stems from a minimization of the residual, including information concerning the differential formulation of the problem, over the solution domain. Simple averaging (by using a few modes) over regions with strong temporal gradients is thus likely to produce large errors, due to the poorly approximated differential character of the problem. As will be shown, a preferred solution is to use an adaptive scheme, which uses few modes by default in each subdomain, but increases this number whenever accuracy so requires. Furthermore, the use of temporal subdomains is beneficial for SIR convergence, since the initial condition for each domain will be closer to the final solution than what would be the case using a single temporal domain instead.

Spatial subdomains must be treated in another fashion. The reason is that boundary conditions are usually only known at the exterior, rather than at interior, boundaries. A computation is not conveniently progressed successively through a sequence of spatial subdomains, as for temporal subdomains. Instead the boundary conditions are imposed on the outermost spatial subdomains, and the subdomains are connected at interior boundaries through continuity conditions. The functions themselves and their first derivatives are continuous across each subdomain (interior) boundary. All spatial subdomains are updated in parallel at each solution iteration. Computationally, the choice of procedure is a nontrivial task. Due to the large coefficients appearing in higher order derivatives (see Equations (16), (17)), derivative matching is sensitive to small errors and numerical instability may result. Instead we have found that a “handshaking procedure”, where the functions are allowed to overlap into neighbouring domains and are doubly connected, yields improved stability over derivative matching.

Subdomains may also be introduced into the physical parameter domain, if desired. The final set of piece-wise solutions (3) may easily be displayed using Heaviside functions as global solutions. Should a single global, semi-analytic solution be desired, a Chebyshev approximation covering all subdomains may be carried out at the end of the computation. Concluding, the use of temporal, spatial and parameter subdomains offers substantial potential for reducing GWRM computational time, because of the possible transition from cubic to near linear dependence on the number of modes. Details of subdomain applications will be published separately.

We now turn to the important questions of accuracy and efficiency. In this section, the GWRM is compared by example to other methods for solving partial differential equations, that use time discretization in the form of finite differencing. Even though the GWRM generates semi-analytic solutions, it must be comparable to these standard methods with regards to accuracy and efficiency to be of practical use.

To study performance when applied to nonlinear problems, a stiff ordinary differential equation is first solved. Adaptive, temporal subdomains are here shown to provide high accuracy and efficiency.

As a second example the nonlinear, 1D viscous Burger equation is solved. It features a shock-like structure near the boundary. It is shown that GWRM accuracy is comparable to that of the (explicit) Lax-Wendroff and (implicit) Crank-Nicolson schemes for a similar number of floating-point operations.

Next we study a problem with two strongly separated time scales. For the wave equation without and with a forcing term, the GWRM turns out to be considerably more efficient than both the Lax-Wendroff and the Crank-Nicolson solution methods when tracing the dynamics of the slower time scale.

The GWRM is finally applied to the demanding problems of solving the linearized systems of ideal and resistive magnetohydrodynamic (MHD) equations. Similar problems are of key importance when studying the stability of magnetically confined plasmas for purposes of controlled thermonuclear fusion.

When a match is lighted, the flame grows rapidly until the oxygen it consumes is balanced by the oxygen that comes through the surface of the ball of flame. A simple model for the flame propagation in terms of the ball radius y(t) is

dy/dt = y² − y³ (29)

with

y(0) = δ, 0 ≤ t ≤ 2/δ. (30)

For small values of the initial radius δ this problem becomes very stiff through the presence of a ramp at t ≈ 1/δ, representing the explosive growth of the ball towards its steady state size [ ]. Here δ = 10^{−4} is used. The GWRM solution is compared with the exact solution to Equations (29), (30);

y(t) = 1/(W(a e^{a − t}) + 1) (31)

where a = 1/δ − 1 and W is the Lambert W function.
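The exact solution is easy to evaluate, with the caveat that the argument of W overflows double precision for δ = 10^{−4}; the sketch below therefore uses the milder, illustrative value δ = 0.01 (log-space evaluation would be needed for the value in the text) and a hand-rolled Newton iteration for W:

```python
import math

# Exact flame-model solution via the Lambert W function (a sketch; W is
# computed by a plain Newton iteration on w*e^w = z).
def lambert_w(z):
    w = math.log(z) if z > math.e else 1.0   # adequate start for the z used here
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (w + 1.0))
    return w

delta = 0.01                                 # illustrative; text uses 1e-4
a = 1.0 / delta - 1.0                        # a = 1/delta - 1

def y(t):
    return 1.0 / (lambert_w(a * math.exp(a - t)) + 1.0)

# The initial condition is reproduced, y(0) = delta ...
assert abs(y(0.0) - delta) < 1e-9
# ... and the ODE y' = y^2 - y^3 is satisfied, checked by central differences
# inside the ramp region t ~ 1/delta:
t, h = 90.0, 1e-6
dy = (y(t + h) - y(t - h)) / (2 * h)
assert abs(dy - (y(t)**2 - y(t)**3)) < 1e-6
```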

Clearly, for this small value of δ the ramp is very distinct and hard to resolve. Consequently, explicit finite difference methods will need extremely small time steps to resolve this problem. An optimised Matlab solution to the problem uses implicit methods that may reduce the computational effort to about 100 time steps, taking a few seconds on a tabletop computer. The GWRM solution in

In performing the computation, a default of 10 time subdomains is assumed and K = 6 is used throughout. If the accuracy criterion is satisfied, the subdomain length is doubled for the next domain; if not, it is halved. In the latter case, the calculation is repeated for the same subdomain until the accuracy criterion is satisfied. This goes on as the calculation proceeds in time until near the endpoint, where the subdomain length is adjusted to land exactly on the predefined upper time limit. Due to the stiffness of the problem near t = 10^{4}, the subdomain length may there be as small as about 2 time units. The automatic extension of the subdomain length in smoother regions saves considerable computational time; at the end of the calculation the subdomain length is several thousand time units.
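The doubling/halving logic can be sketched generically. In the skeleton below (not the authors' code) a classical RK4 step with step-doubling error estimation stands in for the GWRM subdomain solve and its accuracy criterion; the model is the flame equation y′ = y² − y³ with the illustrative value δ = 0.01:

```python
# Adaptive temporal-subdomain driver (sketch): halve and redo on failure,
# double the next length on success, and land exactly on the endpoint.
def rk4(f, y, t, dt, n):
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + dt/2, y + dt/2*k1)
        k3 = f(t + dt/2, y + dt/2*k2); k4 = f(t + dt, y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4); t += dt
    return y

def adaptive_subdomains(f, y0, t0, t1, dt0, tol=1e-8):
    t, y, dt = t0, y0, dt0
    while t < t1 - 1e-12:
        dt = min(dt, t1 - t)              # land exactly on the upper time limit
        coarse = rk4(f, y, t, dt, 1)
        fine = rk4(f, y, t, dt/2, 2)      # step-doubling accuracy estimate
        if abs(fine - coarse) > tol:
            dt /= 2                       # criterion failed: halve and redo
            continue
        y, t = fine, t + dt
        dt *= 2                           # criterion met: try a doubled length
    return y

delta = 0.01                              # illustrative; text uses 1e-4
y_end = adaptive_subdomains(lambda t, y: y*y - y**3, delta, 0.0, 2.0/delta, 1.0)
assert 0.99 < y_end < 1.01                # steady-state ball radius approaches 1
```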

The essential information provided by the computations is the behaviour of the solution in the transition region near t = 10^{4}. Perhaps we are willing to sacrifice accurate details of the transition region, and would be satisfied with a global GWRM solution that only approximately models the transition, using only a few modes. As discussed earlier, GWRM solutions are not identical to Chebyshev approximations of the true solutions, but also mirror effects resulting from the differential formulation of the problem. In other words, an implicit formulation of a function as a differential equation plus initial and boundary conditions will render approximate solutions that are, in some sense, imprints of the formulation. This imprint will, of course, diminish as the true solution is approached. We note that in Equation (29) there are quadratic as well as cubic nonlinearities. As a result a global, low mode approximation of the solution is not trivially obtainable. The transition region needs a certain amount of resolution to “tie” with the solutions at lower and higher times t. This is an interesting topic for future studies.

In summary, a stiff ordinary differential equation has been solved to high accuracy using the GWRM. Due to use of automatic domain length adaption, high efficiency is also obtained, comparing well with highly optimised Matlab routines for implicit finite difference methods.

Burger’s equation is of particular interest since it is nonlinear and contains two time scales as a result of the competition between convection and diffusion. The one-dimensional Burger partial differential equation

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x² (32)

thus contains essential physics, such as convective nonlinearities and dissipation, expected to be encountered also in more complex problems of fluid mechanics and MHD. Here ν denotes (kinematic) viscosity. Since this equation has an analytical solution, it provides excellent benchmarking possibilities.

The exact solution to Equation (32) is found by first introducing the Cole-Hopf transformation [

u = −2ν (∂φ/∂x)/φ (33)

to produce a standard diffusion equation in the transformed variable, and then by using the Fourier method. The result, for the given boundary conditions, is

with coefficients

where. As an example, the initial condition

results in

It should be noted that the sums of the exact solution Equation (34) may need to be carried out over a large number of terms for sufficient accuracy, because of the poor convergence at low viscosity. At least 100 terms are required to compute a reasonably accurate solution near t = 0. Furthermore, in contrast to ordinary or Chebyshev polynomials, the exponential and trigonometric functions of Equation (34) are costly to evaluate numerically. This is one example of an exact solution that is of limited practical use.
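To illustrate the point, the Cole-Hopf series can be evaluated directly. The sketch below assumes the commonly used benchmark data u(x, 0) = sin(πx) with u(0, t) = u(1, t) = 0 (the paper's initial condition is elided in this copy), for which the standard transformation u = −2ν φ_x/φ yields a Fourier series with modified Bessel function coefficients:

```python
import numpy as np
from scipy.special import iv  # modified Bessel functions I_n

def burger_exact(x, t, nu, n_terms=200):
    """Cole-Hopf/Fourier series solution of u_t + u u_x = nu * u_xx
    for u(x,0) = sin(pi*x), u(0,t) = u(1,t) = 0 (assumed benchmark
    data; the paper's initial condition is elided in this copy)."""
    z = 1.0 / (2.0 * np.pi * nu)
    n = np.arange(1, n_terms + 1)
    decay = np.exp(-n**2 * np.pi**2 * nu * t)
    a = iv(n, z) * decay
    num = 4.0 * np.pi * nu * np.sum(n * a * np.sin(n * np.pi * x))
    den = iv(0, z) + 2.0 * np.sum(a * np.cos(n * np.pi * x))
    return num / den
```

The number of significant terms grows with 1/(2πν), and for small ν the factors I_n(1/(2πν)) eventually overflow double precision unless the exponentially scaled `ive` is used, directly exhibiting the poor practicality of the exact solution noted above.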

The most challenging aspect of the Burger equation, from the modelling point of view, is the shock-like structure that evolves for weak dissipation. The associated gradients are often difficult to resolve in spectral representations. Highly accurate modelling may require a large number of Chebyshev modes. The case we study develops a strong gradient near the boundary x = 1, and is representative of the gradients in, for example, edge pressure or temperature encountered in magnetohydrodynamic computations in fusion plasma physics modelling. The structure may also appear when modelling localized resistive instabilities in tokamak and reversed-field pinch magnetic fusion configurations. It is desired that the GWRM should be able to resolve these structures for moderate mode numbers. To see the difference as compared to standard modelling, we make comparisons with solutions obtained from the standard explicit Lax-Wendroff and implicit Crank-Nicolson finite difference schemes for partial differential equations [

In

To enable comparisons with explicit and implicit finite difference partial differential equation solvers, we now fix the viscosity and compute the solutions as functions of t and x. The Burger equation, defined as above but now using t_{1} = 10, is solved using the GWRM as well as the explicit Lax-Wendroff and implicit Crank-Nicolson methods [

For the GWRM solution, two spatial subdomains with internal boundary at x = 0.75 are used. A similar result would be obtained using only one spatial domain with a slightly higher number of spatial modes. With mode numbers K = 9, L = 7 an absolute global accuracy of 0.001 is obtained after 10 iterations, with a tolerance of 1.0 × 10^{−6} for the coefficient values. Results are displayed in Figures 3(b) and (c). The peak near t = 0 in

We now turn to solution of the Burger equation using finite difference methods. Accurate solutions are not straightforwardly obtained because of the strong edge gradient. Let us estimate the spatial step length required for a global error δ = 0.001. A second order estimate of the mid-point error resulting from finite spatial differencing with spacing Δx is

δ ≈ (Δx^{2}/6)|u'''| (38)

where a prime denotes spatial differentiation. From the exact solution it is found that |u'''| = 20.3 at (t, x) = (2.05, 0.94). A maximum global error of δ = 0.001 thus requires Δx < 0.02.

The Lax-Wendroff finite difference scheme is widely used because of its reliability and because it is accurate to second order in both time and space. Since it is explicit, however, the maximum time step is limited. A von Neumann analysis of the Lax-Wendroff method applied to the Burger Equation (32) features the limiting cases of strong convection and strong diffusion. When convection dominates, the CFL condition Δt ≤ Δx/|u| results, where u is a characteristic fluid velocity. This condition reflects the causality required on the solution grid for hyperbolic problems. When the diffusion term dominates, the problem is parabolic and causality limits the time step to Δt ≤ Δx^{2}/(2ν), where ν is the viscosity. Computations show that the latter criterion is the more relevant one for the present Burger problem.
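A minimal numerical check of which limit binds: Δx = 1/70 follows from the resolution estimate, while ν = 0.01 and the characteristic velocity are assumed values for illustration (the paper's viscosity is elided in this copy).

```python
# Illustrative check of which von Neumann limit binds for Lax-Wendroff.
# dx = 1/70 from the resolution estimate; nu = 0.01 and u_c = 1.0 are
# assumed here for illustration (the paper's values are elided above).
dx = 1.0 / 70.0
nu = 0.01
u_c = 1.0

dt_cfl = dx / u_c              # hyperbolic (convection) limit: dt <= dx/|u|
dt_diff = dx**2 / (2.0 * nu)   # parabolic (diffusion) limit: dt <= dx^2/(2 nu)

dt_max = min(dt_cfl, dt_diff)
n_steps = int(10.0 / dt_max) + 1   # t_1 = 10 from the text
```

Under these assumptions the diffusion limit is the smaller of the two, and roughly a thousand steps are needed over t_{1} = 10, consistent with the step count quoted in the text.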

Recall that accuracy requires Δx < 0.02 according to Equation (38). This is in reasonable agreement with the value Δx ≤ 1/70 that was found numerically. The number of time steps then becomes 1000 for the given accuracy. The error of a Lax-Wendroff computation is shown in

Next, we solve the Burger equation using the Crank-Nicolson method. This scheme allows for arbitrarily large time steps by using an implicit approach, where the functional values are determined both at the previous and present time steps. On the spatial scale, the resolution Δx ≤ 1/70 is, however, still needed to obtain a global accuracy of 0.001. To avoid costly matrix inversion at each time step, due to the implicit finite difference formulation, a tridiagonal matrix solution procedure has been developed [
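The tridiagonal solution procedure referred to here is, in essence, the standard Thomas algorithm; a minimal sketch (not the paper's code):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(N) operations: a = sub-,
    b = main-, c = super-diagonal, d = right-hand side (1-D arrays;
    a[0] and c[-1] are unused). This is the standard Thomas
    algorithm, used to avoid full matrix inversion at each implicit
    Crank-Nicolson time step."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the diagonally dominant systems produced by Crank-Nicolson discretizations of the diffusion term, no pivoting is needed.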

For a time step Δt = 1/500 and with Δx = 1/70, an accuracy of 0.001 was achieved for the Burger equation, as shown in

Interestingly, we have found that the analytical GWRM solution of Burger’s equation is more economical than the “exact” Cole-Hopf solution for use in applications at low viscosity for given accuracy, due to the poor convergence of the latter. Although the GWRM is primarily intended for computing the long time behaviour of complex problems with several time scales, it can thus also be used for accurate solution of stiff problems. For the case of Burger’s equation, the GWRM was nearly as efficient as the Lax-Wendroff and Crank-Nicolson schemes for given accuracy. For nonlinear problems, where all terms must be advanced implicitly, the Crank-Nicolson method is expected to compare less favourably with the GWRM, either due to reduced efficiency when solving a nonlinear system of equations at each time step or due to reduced accuracy if nonlinear terms are time linearized. Improved GWRM efficiency is also expected for problems with

periodic boundary conditions. The GWRM has the additional advantage of providing approximate, analytic solutions.

Problems in physics often feature multiple time scales, whereas it may be of main interest to follow the dynamics of the slowest time scale. Efficient partial differential equation solvers must therefore be able to employ long time steps while retaining stability and sufficient accuracy. By omitting resolution of the finer time scales, improved efficiency and the possibility to study complex systems are expected. As a test problem, we choose a wave equation with a forcing (source) term, also called the inhomogeneous wave equation:

with boundary and initial conditions

Here A, n, , and are free parameters, and is the forcing function. The exact solution is

for, with m an integer. This problem has the separate system and forcing function time scales and. Using the parameter value, , , and, the ratio of these time scales becomes. Thus the forcing term in (40) has here introduced a time scale much longer than that of the “unperturbed” system.

We now wish to solve Equation (39) using the GWRM, Lax-Wendroff and Crank-Nicolson methods. The problem is thus posed as a set of two first order partial differential equations:

with boundary and initial conditions corresponding to those of Equation (39). The GWRM solution, using one spatial domain with K = 6 and L = 8, is rapidly obtained within a single iteration with a tolerance of 1.0 × 10^{−6} for the coefficient values. It is displayed in

We now turn to finite difference solution of Equation (39) using the Lax-Wendroff method. Being an explicit method, it must obey the CFL condition, which for this case becomes Δt ≤ Δx. We find that sufficient accuracy is obtained for Δx ≤ 1/30. Thus the maximum allowed time step is 1/30, and the number of time steps becomes 900 for the domain. The calculation requires about ten times more computer time than the GWRM. It can be seen in

The Crank-Nicolson method, being implicit, has no time step restriction and no amplitude dissipation, and would perhaps intuitively be well suited for the present problem. Additionally, to avoid time-consuming large matrix equations, the so-called Generalized Thomas algorithm [ is employed, whereby a reduction from O((MN)^{3}/3) operations to O(5M^{3}N/3) operations is possible, that is, the speed gain factor becomes N^{2}/5. Here the number of equations M = 2 and the number of spatial nodes N = 30. The handling of a number of sub-matrix equations, required at each time step, still limits performance, however. With Δx = 1/30, temporal resolution requires at least 50 time steps. Using matrix inversion, the corresponding computation is about three times slower than Lax-Wendroff and thus about 30 times slower than that of the GWRM. A speed gain of a factor of three is expected by solving the sub-matrix equations rather than determining inverse matrices, but the GWRM remains considerably faster. It should be noted that the sub-matrices used in the Thomas algorithm are here only 2 × 2 in size. Thus negligible time is spent in matrix inversion; it is rather the extensive use of matrix manipulations in the algorithm that affects efficiency. The solution is shown in
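A generalized (block) Thomas elimination along these lines can be sketched as follows; the O(M^{3}N) cost comes from the N small M × M solves, as opposed to O((MN)^{3}) for a dense solve. This is an illustrative reconstruction, not the referenced implementation.

```python
import numpy as np

def block_thomas(A, B, C, D):
    """Block-tridiagonal solve (generalized Thomas algorithm sketch).
    A[i], B[i], C[i] are the MxM sub-, main- and super-diagonal blocks
    of block row i (A[0] and C[-1] are unused); D[i] is the length-M
    right-hand side. Cost is O(M^3 N) versus O((M N)^3) for a dense
    solve of the assembled system."""
    n = len(B)
    Cp, Dp = [None] * n, [None] * n
    Cp[0] = np.linalg.solve(B[0], C[0])
    Dp[0] = np.linalg.solve(B[0], D[0])
    for i in range(1, n):                      # forward block elimination
        Mi = B[i] - A[i] @ Cp[i - 1]
        if i < n - 1:
            Cp[i] = np.linalg.solve(Mi, C[i])
        Dp[i] = np.linalg.solve(Mi, D[i] - A[i] @ Dp[i - 1])
    X = [None] * n
    X[-1] = Dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        X[i] = Dp[i] - Cp[i] @ X[i + 1]
    return X
```

Note that each step solves small M × M systems rather than inverting them, which is the factor-of-three saving over explicit inverses mentioned in the text.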

facilitate a comparison with the exact solution. The Crank-Nicolson solution strives to follow the exact solution, and does not accurately average over the fast time scale.

It was found that the GWRM is well suited for long time scale solution of the wave equation test problem, which features both a slow and a fast time scale. For suitable mode parameters, it traces the slower dynamics using substantially less computational time than the LaxWendroff and Crank-Nicolson schemes. An important factor, contributing to the efficiency, is that whereas the Lax-Wendroff and Crank-Nicolson schemes must solve two first order Equations (41) representing the second order wave equation, the GWRM integrates both these equations formally in spectral space into one equation before the coefficient solver is launched. If results are sought for longer times, temporal subdomains are preferably used for the GRWM, to guarantee constant computational effort per problem time unit. For problems with wider separation of the time scales, the GWRM will be an increasingly advantageous method as compared to the Lax-Wendroff scheme since the latter must follow the faster time scale. It may also be noted that the GWRM averages more accurately over the fast time scale oscillations than the finite difference methods. This is a subject that deserves further attention.

This forced wave equation example featured an imposed, periodic, time scale that was longer than the system time scale. How will the GWRM perform when the imposed time scale is shorter than that of the system? At present it appears difficult to adequately handle such problems using the GWRM. A major complication is that, for efficiency, the number of modes used in the calculation would not adequately resolve the forcing function, so that the problem would not be well defined for the GWRM. This is a subject for further investigation. As seen in the case of the Burger equation, and as seen in the resistive MHD computation below, multiple time scales may also be inherent in the systems we are modelling.

We now turn to applications of the GWRM to more advanced research problems featuring large sets of coupled pde’s. In fusion plasma physics research, the stability of magnetically confined plasmas against small perturbations is of considerable importance for plasma confinement. Stability can be studied using a combined set of nonlinear fluid and Maxwell equations, magnetohydrodynamics (MHD). The configuration must be arranged so that the plasma is completely stable to perturbations on the fastest MHD time scale, the so-called Alfvén time, being of the order of fractions of a microsecond. If plasma resistivity is included in the MHD model, further instabilities (on the millisecond time scale) become accessible to the plasma even if it is stable on the faster time scale, and remedies should be sought also for these.

We will study the linear stability of two plasma configurations within the traditional set of MHD equations, both without (ideal MHD) and with resistivity included. By “stability” is here meant the absence of exponentially growing solutions in time. For simplicity the plasma is assumed to be surrounded by a close-fitting, ideally conducting wall. It can be shown that such a wall provides maximal stability. The ideal MHD plasma equations are the continuity and force equations, Ohm’s law and the energy equation, supplemented with Faraday’s and Ampère’s laws, respectively:
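The cited equations are elided in this copy of the text; a standard form of the ideal MHD system, matching the names listed above, reads:

```latex
% Standard ideal MHD system (continuity, force, Ohm's law, energy,
% Faraday's law, Ampere's law); a reconstruction of the elided set (42).
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho \mathbf{u}) = 0, \\
&\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + \mathbf{u}\cdot\nabla\mathbf{u}\right) = \mathbf{j}\times\mathbf{B} - \nabla p, \\
&\mathbf{E} + \mathbf{u}\times\mathbf{B} = 0, \\
&\frac{d}{dt}\!\left(p\,\rho^{-\gamma}\right) = 0, \\
&\nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla\times\mathbf{B} = \mu_0\,\mathbf{j}.
\end{aligned}
```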

whereas for resistive MHD, Ohm’s law instead becomes E + u × B = ηj, with η being the resistivity. Here E and B are the electric and magnetic fields respectively, u is the fluid velocity, j is the current density, p is the kinetic pressure, ρ is the mass density, γ = 5/3 is the ratio of specific heats and μ_{0} is the permeability in vacuum. Furthermore,. To determine the temporal evolution of these equations in circular cylinder geometry, they are linearized and Fourier-decomposed in the periodic coordinates θ and z. All dependent variables q of Equation (42) are assumed to be superpositions of an equilibrium term q_{0} and a (small) perturbation term q_{1}. Perturbations are taken to be proportional to exp[i(kz + mθ)], where k and m denote axial and azimuthal perturbation mode numbers, respectively. Next, a non-dimensionalization is carried out using the characteristic values (denoted with index “c”) of plasma radius a, Alfvén velocity, with, , and. The resulting non-dimensional equations become identical to those obtained from Equation (42) with μ_{0} = 1. We wish to solve for the time evolution of the perturbed terms for a given equilibrium and a specified perturbation (m, k). If these feature exponential growth, the plasma is unstable to small perturbations for the assumed equilibrium.

The equilibrium is here taken to be that of a simple screw pinch with constant axial magnetic field and current density and constant mass density:

After eliminating E and j in Equation (42), seven complex-valued coupled partial differential equations result for u_{1}, B_{1} and p_{1} as functions of the independent variables time t and the cylindrical variable r. They are all written in component form, where q_{1i} denotes the perturbed variables, to conform with the GWRM formulation of Equation (1). The seven dependent variables were all Chebyshev expanded in t, r and in resistivity (which was here set equal to a constant). Since the GWRM is (so far) developed for solution of real valued equations, u_{1}, B_{1} and p_{1} are finally split into real and imaginary parts, resulting in a system of 14 simultaneous equations to be solved by the GWRM.

Let us now discuss boundary and initial conditions. It can be shown that, in circular cylinder geometry, the following conditions must hold for m = 1 perturbations near the internal boundary r = 0:

We have chosen to study m = 1 perturbations because they are often the most critical ones with respect to stability. At the outer, ideally conducting, boundary r = 1 the normal components of the fluid velocity and magnetic fields, u_{1r} and b_{1r}, must vanish.

The relation between the initial conditions can be chosen somewhat arbitrarily. The reason, found from studies of the corresponding system of eigen-equations, is that, for unstable behaviour, a competition between modes with different numbers of radial nodes takes place until the fastest growing mode (with zero radial nodes) dominates the behaviour. The memory of the initial perturbation is then lost. For consistency with the boundary conditions we however choose the following set of initial conditions:

The 14 coupled pde’s we are about to solve are linearized, but nevertheless contain nonlinear terms with products between equilibrium and perturbed variables. As an example, a term proportional to should be computed. Some care is needed in order to obtain an optimal spectral representation of this and similar terms. Due to the finiteness of the spatial spectral space, nonlinear products should always be carried out in spectral space before the “division by r problem” is treated using the procedure described below. Otherwise the lowest order spectral coefficients may be inadvertently eliminated.

Further complications thus include apparent singularities near r = 0 in terms that are divided by r. These arise due to the choice of coordinate system and must be handled carefully. For example, terms of the type turn up when writing the perturbed Equation (42) in component form. Clearly, for m = 1 this term is finite at r = 0 due to the boundary condition. But it must be ensured that the latter relation holds exactly to avoid a singularity at r = 0. Furthermore, the equation controlling the pressure evolution contains the term. For the case m = 0 this is not problematic, since then the internal boundary condition is at r = 0, but for other values of m the term requires special treatment. Similarly, the term causes difficulties for finite resistivity.

The following procedure was found convenient to deal with the abovementioned terms. A separate study is carried out where firstly all equations are expanded to low order as ordinary polynomials. Secondly, all internal boundary conditions (like) are imposed. Thirdly, conditions as (corresponding to finite compressibility) are imposed at r = 0. Fourthly, the resulting expansions near r = 0 are studied to determine whether all singularities imposed by the cylindrical coordinate system have vanished. This is indeed the case for each of the 14 MHD component equations. Thus all apparent singularities may be safely removed from the code before the coefficient Equation (13) is solved.
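Steps like these operate on coefficient representations. The division by r of a term known to vanish at r = 0 can be sketched in coefficient space as follows, using NumPy's Chebyshev utilities; this assumes the expansion variable is zero at r = 0, and the paper's own conversion procedure and domain mapping may differ.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def divide_by_r(cheb_coeffs, tol=1e-12):
    """Divide a Chebyshev series f(r) by r in coefficient space,
    assuming f(0) = 0 (e.g. enforced by an internal boundary
    condition), so the quotient is again a polynomial. A sketch of
    the 'washing' step described in the text, assuming the domain
    is mapped so that r = 0 corresponds to expansion variable 0."""
    p = C.cheb2poly(cheb_coeffs)   # Chebyshev -> ordinary coefficients
    assert abs(p[0]) < tol, "term does not vanish at r = 0"
    p[0] = 0.0                     # remove round-off residue exactly
    q = p[1:]                      # shift coefficients: divide by r
    return C.poly2cheb(q)          # back to Chebyshev coefficients
```

Carrying out nonlinear products in spectral space before this division, as the text prescribes, keeps the lowest order coefficients from being inadvertently eliminated.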

Removal of Chebyshev coefficients for physical terms, belonging to either of the two cases discussed above, is conveniently performed using the following procedure. The Chebyshev coefficients of the physical term to be ’washed’ are converted to coefficients of ordinary polynomials, using a procedure found in [

The screw-pinch stability problem defined by Equations (42)-(45) is now solved for the perturbation (m, k) = (1, 10) using the GWRM. Parameters are K = 5, L = 5, M = 0 and five equidistant temporal domains were used. A single SIR iteration is again sufficient due to linearity in u_{1}, b_{1} and p_{1}. We will not specifically compare efficiency with other methods here, but merely note that for this case, SIR solved only 372 coupled equations for the coefficients (13). This is considerably less than the 14 × 6 × 6 = 504 equations that obtain before the boundary conditions are applied. The Chebyshev coefficients corresponding to the boundary conditions are functions of the coefficients that are solved for in SIR.

Plots of and vs t and r are given in Figures 5(a) and (b). It is seen that the equilibrium (43) results in an unstable configuration. For comparison, we have also solved the same problem using an eigenvalue approach where time dependence has been eliminated through the asymptotic assumption [

Of particular interest in MHD stability analysis is the value of the obtained growth rate. Assuming an exponential time behaviour for the GWRM solution during the last 10% of the temporal evolution, the normalized growth rate = 0.83 is obtained in this highly unstable case (recall that a normalization to the Alfvén time, being less than a microsecond, is used).
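The growth rate extraction described here amounts to a linear fit of the logarithm of the amplitude over the last 10% of the time window; a sketch (the function and its sampling are illustrative, not the paper's code):

```python
import numpy as np

def growth_rate(u, t):
    """Estimate the exponential growth rate from samples of a
    perturbed amplitude u(t) over the last 10% of the time window,
    assuming u ~ exp(gamma * t) there (as assumed for the GWRM
    solution in the text). Fits log|u| linearly in t."""
    i0 = int(0.9 * len(t))
    coeffs = np.polyfit(t[i0:], np.log(np.abs(u[i0:])), 1)
    return coeffs[0]   # slope of the fit = growth rate
```

With the GWRM this quantity is available analytically from the polynomial solution; the sampled fit above is the discrete analogue.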

The computed growth rate exactly coincides with that obtained from the eigenvalue code. Also for other perturbations (m, k) very good agreement is obtained. Considering that the two methods are radically different in idea and implementation, these results certainly confirm the applicability of the GWRM to complex systems of initial value partial differential equations. The equilibrium used in this example is easily changed within the existing computer code to more realistic cases without changing the basic GWRM performance demonstrated here.

If Ohm’s law is modified to include resistivity, E + u × B = ηj, the field lines are allowed to break up and stable,

infinitely conducting (ideal) MHD equilibria may become resistively unstable. The finite resistivity relation is now used rather than the less realistic assumption of ideal MHD. It is easily shown that Equation (42) must now be supplemented by two new boundary conditions at r = 1, because the tangential component of the electric field vanishes there. From Ohm’s law we find that and and thus

For equilibrium, we choose

characteristic for the so-called reversed-field pinch (RFP) configuration. This equilibrium is marginally stable to ideal current and pressure driven modes. Stability is accomplished by the strong radial variation of the axial magnetic field, which provides “magnetic shear”. We further assume the perturbation (m, k) = (1, −2), which has the implication that there is a so-called resonance at r = 0.41, near which the plasma is particularly vulnerable to local instability (the stabilising magnetic field line bending is at a minimum there). The Lundquist number is in the range 10^{−4} - 10^{−3} in this example. The number of spectral modes is (K, L, M) = (4, 5, 1). We use 5 time intervals, a total time of 1.2 × 10^{3} and a single spatial domain.

Results from GWRM computations are shown in

are shown. The GWRM solution also immediately enables the displayed plots of the perturbed radial magnetic field and pressure as functions of plasma radius and resistivity. The growth rate of the instability (in inverse Alfvén times, assuming exponential dependence) at the end of the computation is easily obtained analytically from the temporal and parametric dependence of the perturbed radial magnetic field as

and is plotted in

of the equilibrium. In ongoing work, we introduce spatial subdomains in order to provide improved resolution of both the equilibrium and the perturbations near the resonance.

We have applied the GWRM to basic linear and nonlinear initial value problems in the form of ordinary and partial differential equations. Accuracy and efficiency have been studied by comparisons with exact solutions. Improved performance from using temporal and spatial subdomains was discussed. Comparisons with standard explicit and implicit finite difference methods showed positive results with regard to computational efficiency. Finally, we have also solved advanced fusion plasma stability problems, formulated within ideal and resistive magnetohydrodynamics as 14 simultaneous initial value partial differential equations. Very good agreement was found with results from established eigenvalue methods.

Although faster computational environments than Maple exist, exact comparisons of efficiency are not essential at this stage. The examples we have given show that the efficiency and accuracy of the GWRM are comparable to those of both explicit and implicit finite difference schemes in a given environment. Further optimisation of both the GWRM and the finite difference codes could increase efficiency, but our ambition has been to determine whether time-spectral methods for solution of initial-value pde’s are of interest for general use, and for computations of problems in magnetohydrodynamics and fluid mechanics in particular.

A fully spectral method for solution of initial-value ordinary or partial differential equations has been outlined. The time and parameter generalized weighted residual method, GWRM, represents all temporal, spatial and physical parameter domains by Chebyshev series. Thus semi-analytical solutions are obtained, explicitly showing the dependence on these variables. The essence of the GWRM is its ability to transform the implicit dependencies inherent in physical laws, formulated as differential equations, into solutions of explicit, semi-analytical form.

The method is global and avoids time step limitations due to its acausal nature. The characteristic form of the problem (hyperbolic, elliptic or parabolic) is thus unimportant. This fact makes the method potentially applicable to a large class of problems. The spectral coefficients are determined by iterative solution of a linear or nonlinear system of algebraic equations, for which a new and efficient semi-implicit root solver (SIR) has been developed. Accuracy is explicitly controlled by the number of modes used and the number of iterations. The use of subdomains further increases efficiency and accuracy. In a separate publication, details of the use of temporal and spatial subdomains for further enhancing efficiency are given [

It should be remarked that the focus of this paper is on introduction of the method and on example applications. The theory presented is minimal. Future theoretical work should in particular consider criteria for convergence of strongly non-linear problems with well separated time scales.

The GWRM is shown by example to be accurate and efficient, and to have potential for applications in fluid mechanics and in MHD. A simple model example shows that the method averages over rapid time scale phenomena and follows long time scale phenomena. The GWRM was indeed developed with the class of nonlinear problems with widely separated time scales in mind, since these are both frequent and important in fusion plasma physics modelling. The time scale separation of these problems demands the use of extremely many time steps in the finite difference methods that are presently used.

Special thanks go to Mr. Daniel Lundin and Mr. David Jackson for several constructive comments.