In recent decades, numerical analysis has reduced the time and cost of oil extraction operations, and its adoption has grown to the point that it is now used to study a large number of phenomena that occur during the extraction of hydrocarbons, not only making operations more economical but also shedding light on the underlying physics and improving our understanding of it. Nevertheless, despite the success of these applications, their optimization is still an open problem: the usual discretization methods for the domain and the model rely on meshes, which often require modifying the physics of the problem so that it matches the numerical method, rather than the other way around.
To establish a broad state of the art and a good understanding of the subject, it is necessary to outline the main techniques used to analyse, simulate and exploit oil reservoirs, and to explain what they consist of, briefly reporting the most important development points. The aim of this review is to present a summary of the numerical simulations used in the oil extraction industry, helping to select the solution scheme that offers the best chance of improvement in terms of processing time and computational resources.
Two main goals can be identified: history matching and the forecast of future scenarios (consequent to the exploitation of the reservoir). History matching is usually done to check the reliability of a model and to evaluate the sustainability level in retrospect. It is the analysis of an exploitation history according to the data log up to the present time, or during a particular time interval. It also allows the numerical model to be checked in a “feedback loop” and calibrated or adapted to what is reported in the historical data (in order to improve the forecast of future scenarios). In practice, this is done by assigning the temperature and mass flow rate of both the extracted and the re-injected geofluid, together with any other useful recorded historical data that can be translated into thermo-physical parameters or boundary/initial conditions. Some phenomena affecting the behaviour of field utilization could not be introduced into a simulation properly without knowing about them from the history (natural recharge, natural changes in the circulation pathways within the rock formations, pressure losses, etc.). If a model is considered reliable, it can be used to study how the durability of the resource changes with different extraction rates, reinjection temperatures, well siting, and fractures. A production strategy can then be based on the results of a model simulation. This holds for both power plants and direct thermal uses. Some general and macroscopic aspects of the application of numerical simulation to geothermal energy studies are listed here:
o Equations describing the phenomena considered (circulation, energy/mass transport).
o Estimation of the main thermo-physical parameters.
o Boundary conditions (BC).
o Geothermal potential assessment.
o Coupling between the power/thermal utilization and reservoir.
The final goal of the potential assessment is to predict the sustainable utilization of the resource. It involves the complete characterization of the field: energy stored, maximum fluid rate, useful temperatures and chemical parameters of the fluid (to determine the minimum temperature for reinjection). This evaluation is important for every kind of water-dominated geothermal field.
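As a back-of-the-envelope illustration of the stored-energy part of such an assessment, a volumetric “heat in place” estimate can be sketched as follows; all reservoir values (volume, porosity, densities, heat capacities, temperatures) are illustrative assumptions, not data from the text.

```python
# Volumetric "heat in place" estimate for a liquid-dominated reservoir.
# Every parameter value below is an illustrative assumption, not field data.

def heat_in_place(volume_m3, porosity, T_res, T_ref,
                  rho_rock=2700.0, c_rock=900.0,     # kg/m^3, J/(kg K), assumed
                  rho_fluid=900.0, c_fluid=4200.0):  # hot water, assumed
    """Stored thermal energy (J) above the reference (reinjection) temperature."""
    vol_heat_cap = (1.0 - porosity) * rho_rock * c_rock \
                 + porosity * rho_fluid * c_fluid      # J/(m^3 K) of saturated rock
    return volume_m3 * vol_heat_cap * (T_res - T_ref)

E = heat_in_place(volume_m3=2e9, porosity=0.1, T_res=230.0, T_ref=80.0)
print(f"stored heat: {E/1e18:.2f} EJ")
```

Only a fraction of this stored energy is recoverable in practice; recovery factors are site-specific and are precisely what the reservoir simulation is meant to constrain.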
During plant operation, productivity (and well deliverability) can show remarkable variations (in terms of flow rate, fluid chemistry and specific enthalpy of the geofluid). These changes can be attributed to reservoir pressure decline caused by excessive fluid production. Reservoir fluid pressure is another important parameter, linked both to the productivity of the well and to scaling phenomena: high pressure in the geofluid pipes can keep scaling rates under control.
When reinjection is practiced, geothermal energy is used as a renewable energy source. Reinjection avoids temperature and pressure decline in a geothermal field. For binary power plants it is a basic approach to resource management and appears to be compulsory. The task is the optimization and enhancement of the durability of the resource [
The objectives of this method are the efficient restitution of the geofluid to the reservoir (to optimize the recharge in terms of enthalpy and flow rate) and the correct siting and depth of the wells (to guarantee a sustainable use of the resource). Defining the optimum reinjection strategy is a rather complex task that strongly depends on the type of geothermal system (Kaya et al. [
Numerical simulation of geothermal reservoirs makes it possible to understand the hydrogeological behaviour and heat transport in the reservoir under a given utilization rate. It is possible to study a geothermal reservoir by solving the balance equations of mass, momentum and energy in the particular volume in which the hydrothermal circulation of fluid occurs. The constitutive laws are peculiar to each kind of reservoir, and the numerical analysis issues are also important in order to achieve a reliable solution. The hydrogeological issues linked to geothermal exploitation (knowledge of the geological structures and of the groundwater system) must be connected with the engineering tasks of designing and optimizing the energy conversion system. The development of the numerical model itself has to follow two main directions:
a. The unperturbed (or undisturbed) natural state.
b. The utilization scenarios (during the exploitation).
Different phases in the development of the model can be identified. A first “block-structure” has to be built together with the parameter data that best fit what is expected from the conceptual model. This first-step model should reproduce:
a. Geological structure of the reservoir.
b. Geometrical features of wells and fractures.
c. Hydraulic, Thermal, Mechanic and Chemical conditions (HTMC).
The conceptual scheme for the realization and simulation of a geothermal reservoir model is represented in
The model should then pass a further step of calibration and refinement. It is an iterative process ruled by the conceptual and physical model, in which the parameters and boundary conditions should be adjusted. Some
thermo-physical properties (permeability, thermal conductivity, porosity, specific heat capacity) change with siting, depth and hydrothermal alteration. Their value can be adjusted and the mesh can be refined at this step.
A first model of the unperturbed state is the result of executing these steps. The simulated time interval should be very large, between 80 - 100 years, in order to verify the model's stability and convergence. Exploitation and energy utilization scenarios can then be run using the unperturbed-state simulation as the initial condition. In particular, temperature and pressure should be kept as stable as possible in the reservoir.
If chemical properties are known, scaling and chemical deposition phenomena can be also introduced in the calculations, in a multi-physics simulation environment. The models can couple different transport equations, referring to mass, heat and chemicals. From the simulation, some useful data for the plant design must be extracted.
It is important to remark that building a numerical model of a geothermal reservoir is not easy: a reservoir cannot be practically measured in all its features. Its evolution can be studied, and a model can serve as a means of predicting behaviour only if it is based on reliable data. In
Once the geometry and mesh have been set up, the equation parameters (and constitutive laws) must be considered. Each constitutive law (e.g. Darcy's law) is typical of a particular phenomenon, and the conceptual model must arrange and properly describe the transport phenomena occurring in the geological system. Thermo-physical parameters should then be put into the simulation.
A single value of a significant parameter such as porosity or permeability is assigned to each layer or element. It is often not possible to assign tensors or directional parameters (e.g. preferential circulation directions), as these derive from very accurate measurements and interpretations. Calibration is a good tool, as it allows parameter values to be adjusted to match real temperature and pressure data; however, it is only possible when reliable measured data are available. The main parameters to be assigned are essentially:
o Porosity (ϕ, defined as the ratio between the voids and the total volume considered) and effective porosity (taking into account all the “interconnected” voids, allowing then the fluid circulation).
o Permeability (k), being a property of both the rock and the circulating fluid, giving an idea of the productivity of the rock formation.
o Density (ρ, its distribution derives from accurate measurements and usually it is approximated with a single value for each rock formation).
o Thermal conductivity (of both rock and fluid).
o Specific thermal capacity (of both rock and fluid).
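As a small illustration of how such parameters combine, the sketch below mixes the rock and fluid thermal conductivities with a porosity-weighted geometric mean, one common choice among several mixture rules; the layer names and values are invented for illustration.

```python
# Porosity-weighted (geometric-mean) effective thermal conductivity of a
# saturated rock layer. Layer names and values are illustrative assumptions.

layers = {
    "caprock":   {"phi": 0.05, "lam_rock": 2.5, "lam_fluid": 0.65},  # W/(m K)
    "reservoir": {"phi": 0.15, "lam_rock": 3.0, "lam_fluid": 0.65},
}

def lam_eff(phi, lam_rock, lam_fluid):
    """Geometric-mean effective conductivity: lam_rock^(1-phi) * lam_fluid^phi."""
    return lam_rock ** (1.0 - phi) * lam_fluid ** phi

for name, p in layers.items():
    print(name, round(lam_eff(p["phi"], p["lam_rock"], p["lam_fluid"]), 2))
```

A calibration step would adjust such effective values layer by layer until the simulated temperature and pressure match the measured logs.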
The mathematical, engineering and physical background is a fundamental part of this very complex process, but it is not always necessary to develop a new algorithm; it is better to try a well-known algorithm first. A proper time scale should be identified, considering both the economic lifetime (20 - 50 years for new fields and 80 - 100 years for recovery operations) and the natural (or induced) renewability of the resource itself.
Different strategies can be adopted, in order to keep the utilization sustainable. It is very important to define the size of the installation (power plant) only after the simulation and complete resource assessment. Oversizing or resource fast depletion (reservoir cooling, wrong reinjection) can be some consequences of incorrect sizing.
From a mathematical point of view, in transport studies the task is to understand the response of a porous or fractured system when a hydraulic gradient is applied. The hypotheses and the real field situation should be compared very carefully. The basic calculations executed by numerical simulation software substantially deal with the resolution of flow in porous and/or fractured media. The mathematical tools used by the software are constitutive laws and conservation equations. The constitutive laws deal with the particular phenomenon involved, while the conservation equations usually solved are those of mass, momentum, energy, and the concentration of dissolved pollutants (or chemicals).
Fluid flow in geothermal reservoirs can generally be studied according to Darcy's law for porous media; the Darcy fluid velocity q is defined as (Saeid et al. [
where k is the intrinsic permeability, being proportional to the hydraulic conductivity K according to the definition (in [
in which ρ is the density, g is the acceleration of gravity, μ is the viscosity, p is the pressure and z is the vertical coordinate. In the case of anisotropy, a hydraulic conductivity tensor K can be defined:
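A minimal numerical sketch of the relations above, with Darcy flux q = -(k/μ)(∇p + ρg∇z) and hydraulic conductivity K = kρg/μ for an anisotropic permeability tensor; every fluid and rock value is an assumption chosen for illustration.

```python
import numpy as np

# Darcy flux with an anisotropic (diagonal) permeability tensor k.
# All numbers are illustrative assumptions, not values from the text.

k = np.diag([1e-13, 1e-13, 1e-14])   # m^2, weaker vertical permeability (assumed)
mu, rho, g = 3e-4, 960.0, 9.81       # Pa.s, kg/m^3, m/s^2 (hot water, assumed)

grad_p = np.array([0.0, 0.0, -9000.0])   # Pa/m, slightly under-hydrostatic
grad_z = np.array([0.0, 0.0, 1.0])       # vertical unit gradient

q = -(k / mu) @ (grad_p + rho * g * grad_z)   # Darcy velocity, m/s
K = k * rho * g / mu                          # hydraulic conductivity tensor, m/s
print(q, np.diag(K))
```

With the gradient chosen slightly below hydrostatic, the resulting flux is purely vertical and downward, and the vertical conductivity is ten times smaller than the horizontal one, as encoded in the tensor.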
This element is important in order to account for the complexity of the phenomena involved. Average scalar values of the hydraulic conductivity or permeability are often assigned, but the real behaviour of rock formations can differ greatly from an averaged parameter. For example, porosity also changes with pressure and depth, so it is important to understand which assumptions must be made when assigning an average value to an extended rock body (with, for example, an unknown fracture field). Fractures are particularly difficult to reproduce in a numerical model; their accurate description is important when their spatial extension is comparable to the rock formation size, otherwise average properties can be used. In the most common reservoir simulation software, such as TOUGH2 [
o Rock matrix is homogeneous and isotropic, in particular with respect to porosity, permeability and thermal conductivity (assumed to be independent of temperature).
o Incompressible fluid, with density (ρ) and kinematic viscosity (ν) dependent on temperature according to the laws:
with ρ0, ν0, α, β and T0 being suitable constants, while s(T) is a function of T. Pressure work and viscous dissipation can be neglected, the internal energies of the liquid (l) and the solid matrix (r) being
with cl,vol and cr,vol the specific volumetric heat capacities of the liquid and rock. Under these assumptions the balance equations of mass, momentum and energy can be written respectively as (according to [
A two-phase system, in which the vapour phase is also considered, is described by similar equations [
o Porosity ϕ only depends on local pressure of fluid.
o Liquid and vapour phases are in local thermal equilibrium.
The equation of conservation (respectively) of mass, momentum (liquid and vapour) and energy can be then written as follows:
where E is the internal energy of the liquid-vapour mixture, k is the permeability tensor and Ri is the relative permeability of the i-th phase (i = l, g for liquid or vapour). Some fundamental differences between flow in porous, coherent media and flow in fractured media must then be taken into account ( [
a. Permeability induced by fracturing is generally higher than average formation permeability.
b. Fracturing permeability is usually anisotropic (it depends on fracturing preferential direction).
c. The permeability due to fracturing is considerably more dependent on pressure and tension field in the rock with respect to rock matrix permeability.
In the recent paper by Zeng et al. [
1. Discrete fracture network method (fracture orientation, spacing and mechanical properties).
2. Equivalent continuous porous media method (the fracture system is seen as an equivalent continuous media, with similar methodologies as for double porosity and double permeability).
In Fox et al. [
Hot Dry Rock geothermal energy extraction [
Fluid flow rates in fractures are strongly dependent on the fracture aperture, typically the fluid velocity in response to a given pressure gradient is proportional to the square of the fracture aperture. This relation is commonly expressed in terms of the parallel plate laminar flow solution (Reynolds equation), using a hydraulic aperture related to the mean mechanical fracture aperture. The hydraulic aperture is a non-linear function of effective normal stress, the fracture morphology, material properties and its history. Shear displacement of the fracture wall, caused by engineering interventions such as high-pressure fluid injection (“stimulation”), or chemical deposition/solution may alter the hydraulic aperture irreversibly and change the reservoir properties. HDR systems, therefore, respond non-linearly, and sometimes irreversibly, to changes in operational parameters.
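The aperture scaling described above can be made concrete with a small sketch of the parallel-plate (cubic law) solution, in which velocity scales with the square of the aperture and volumetric flow with its cube; the aperture, width, viscosity, and pressure gradient are illustrative assumptions.

```python
# Parallel-plate ("cubic law") estimate of laminar flow in a single fracture.
# All numbers are illustrative assumptions, not measured values.

def fracture_flow(aperture_m, width_m, dp_dx, mu=1e-3):
    """Volumetric flow rate (m^3/s): Q = -w^3 * W * (dp/dx) / (12 mu)."""
    return -(aperture_m ** 3) * width_m * dp_dx / (12.0 * mu)

q1 = fracture_flow(1e-4, 1.0, -1e4)   # 0.1 mm hydraulic aperture
q2 = fracture_flow(2e-4, 1.0, -1e4)   # doubling the aperture
print(q2 / q1)                        # flow grows by a factor of 2^3 = 8
```

The strong sensitivity shown here (doubling the aperture multiplies the flow by eight) is why irreversible aperture changes from shear or chemical effects can alter reservoir properties so drastically.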
HDR field experiments (reservoir stimulation and circulation) are expensive to execute and cannot be repeated indefinitely from the same borehole, due to progressive irreversible changes in the system under study. Numerical experimentation provides a means to demonstrate understanding of the available experimental results and to extend that understanding to other conditions. This leads to a predictive capability, supporting design and operational planning.
The HDR modelling task is, in some aspects, more demanding than that faced by the underground storage or hydrocarbon industries. This stems firstly from the strong dependence of reservoir flow properties on the fluid pressures encountered during high pressure injection and secondly from the lack of field data sufficient to adequately characterise the rock mass. Modelling for underground radioactive waste isolation is aided by detailed measurements made in in situ rock laboratories, whilst hydrocarbon reservoir modelling benefits from rock mass imaging using seismic and the possibility of direct measurement of the critical parameters of interest (porosity, permeability, wettability) from core recovery, often from many wells. In contrast, within the HDR community there is little opportunity for direct measurement of many of the parameters of critical interest, such as in situ fracture compliance, shear behaviour or fracture length distributions. Furthermore, in the absence of extensive operational experience of HDR systems, modelling plays a critical role in assessing design and performance expectations. In order to carry out these tasks effectively the HDR community needs to achieve a level of confidence in its modelling methods and also predictive capabilities consistent with the investment needed for commercial prototypes.
The objectives of HDR models are to address one or more of the following, although at present no model manages to achieve all five objectives.
a. To help understand the performance of actual field experimental systems and to help design measures to improve inadequate system performance (e.g. re-drilling, secondary stimulations, sealant treatments).
b. To predict/simulate the effects of different reservoir creation strategies; mainly high pressure fluid injections and proppant placement (stimulation models).
c. To predict the thermal performance with time of an engineered reservoir under a given operating strategy (thermal drawdown curve).
d. To predict the hydraulic performance of an engineered reservoir under a given operating strategy (impedance, water loss).
e. To optimise well placement and operating strategy in relation to both reservoir creation and circulation.
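As a toy illustration of objective (c), the sketch below is a lumped-parameter drawdown model that treats the accessible rock as a single well-mixed block with no recharge; every number is an assumed, illustrative value, and real HDR models are far more detailed.

```python
import math

# Lumped-parameter thermal drawdown: one well-mixed rock block exchanging
# heat with the circulated fluid. All values are illustrative assumptions.

M_rock = 2.7e11        # kg of accessible rock (~1e8 m^3), assumed
c_rock = 900.0         # J/(kg K), assumed
q_m    = 50.0          # kg/s circulation rate, assumed
c_w    = 4200.0        # J/(kg K) water
T0, T_inj = 200.0, 70.0   # deg C: initial rock and reinjection temperature

tau = M_rock * c_rock / (q_m * c_w)   # drawdown time constant, s

def T_prod(t_s):
    """Production temperature after time t (perfect mixing, no recharge)."""
    return T_inj + (T0 - T_inj) * math.exp(-t_s / tau)

year = 365.25 * 24 * 3600.0
print(f"tau = {tau / year:.1f} years, T after 20 y = {T_prod(20 * year):.1f} C")
```

The exponential drawdown curve produced by such a model is the simplest possible answer to objective (c); fracture-network and continuum models refine it by resolving where in the reservoir the heat actually comes from.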
HDR reservoir models can be examined in the light of their treatment of the coupling of fluid mechanical, rock mechanical, thermal, chemical and hydraulic processes within the reservoir [
The commonly observed distinction between models of stimulation and circulation is a natural one. There is a change from rapidly evolving fluid-rock mechanical coupling on the scale of hours to days during stimulation, to thermal-mechanical-chemical coupling over a time scale measured in years during circulation. HDR field experiments to date indicate that circulation through a virgin rock mass does not appear viable, therefore there is a necessity for models of heat extraction (i.e. circulation) to combine with, or use the results of, stimulation models, to show how the required starting conditions for potentially economic reservoirs may be engineered.
Rock stress and joint mechanical properties: Knowledge of in-situ stress and fluid pressure are critical for modelling reservoir behaviour. The interaction between stress, pressure, fracture orientation and fracture friction angles determines the fluid pressure that has to be applied to start the stimulation processes. The region activated by hydraulic stimulation often appears to be an ellipsoid with the major axis almost coinciding with the direction of maximum compressive stress. The vertical effective stress gradient determines the migration direction of the stimulation (upward or downward) [
Flow within fractures: A Corey-type relative permeability is usually assumed. When the reservoir is highly fractured, the overall permeability depends on the characteristics and spatial distribution of the fractures. Niibori and Chida [
Heat transfer: It is common for models to assume that the mean temperature of the fluid within a fracture element is equal to the mean temperature of the fracture surface. However, this assumption may not be appropriate when the flow rate is large or fluid flow is channelled. An experimental study [
The heat transfer effect of flow channelling within fractures needs further consideration in HDR reservoir models. In most reservoir-scale models, the individual fractures are modelled as uniform aperture discs or polygons. Within models suitable for PCs further small-scale resolution of flow channelling is not possible on computational grounds. Therefore, there is a need for simplified representation of the effect. Channelling on a scale of less than a few meters probably does not have to be spatially modelled provided local time-smearing on the scale of a few months is accepted. This time resolution is quite adequate for modelling long-term heat production. A simple lumped-parameter model should be sought to cover the heat transfer effects of channelling for use at such a geometric scale. Channelling makes it harder for heat to leave the rock mass. The effect could be modelled by the introduction of a layer of “virtual rock” lining each fracture. Such a layer of virtual rock would mimic the effective reduced heat transfer area near the fracture, but would be given no thermal mass. Effectively it would become a large-scale heat transfer coefficient.
Drilling noise measurement and acoustic emission (AE) reflection techniques can give information on reflecting layers that seem to be closely related to the larger subsurface structures. These can be used for modelling in the first stages of exploitation, i.e. before hydraulic stimulation.
Models to estimate the local transmissivity distribution within the reservoir by using AE magnitude through the usual approach of estimating the slip area and magnitude of slip movement have been proposed [
Almost always aseismic regions are found within the AE cloud detected during hydraulic stimulation. It is not known whether these regions are accepting fluid or are mostly mechanically and hydraulically inactive. In order to overcome this difficulty, new approaches to acquiring information on the flow paths are needed. The AE tomography method [
Tracers can be used to indirectly assess the state of a reservoir. Although tracers have been used for a long time, their full potential for understanding HDR/HWR reservoirs has not yet been realized. Reactive tracers (those that thermally degrade in a controlled way and those that attach themselves to the crack surface) could be used to monitor the cooling of the reservoir and the surface area of the reservoir. By using tracers repeatedly during operation, a measure of the dynamic changes occurring in a reservoir could be obtained. It may even be possible to use controlled diameter particles to measure joint opening during reservoir operations.
Models must be developed to ensure that they reflect the reservoir’s heterogeneity as revealed by increasingly sophisticated field data processing, whilst at the same time recognizing that interpretational ambiguities remain.
The modelling of a reservoir, including all interactions between fracture/channel flow paths and the host rock, all geometric irregularities and all changes in the flow paths during stimulation, development and operation, is a challenging problem. This is, of course, the original problem, which was posed more than 20 years ago.
Willis-Richards and Wallroth (1995) [
Lumped-parameter models: Lumped-parameter models [
Continuum modelling approach: In continuum modelling, using either finite difference or finite element approaches, fractures are represented as lying between volumes of matrix rock. These rock masses may be permeable, thermally conducting and/or elastic according to the situation under examination. A variant on this approach has recently been proposed by Vychytil and Horii [
Fracture network modelling: Computer-generated fracture networks conditioned by field data can be used to simulate the operation of the whole reservoir. However, some problems remain to be solved. For example, the active and stimulated networks in an HDR reservoir do not correspond exactly to the pre-stimulation (i.e. undisturbed) fracture network. The mechanical and hydraulic processes of stimulation determine which network members can become activated through shear movement and fracture dilation, while the additional thermal, chemical and long-term mechanical effects during operation determine their further development. Thus, an accurate representation of the initial conditions and good representations of the relevant mechanisms are required. Furthermore, whether a statistically accurate realization is sufficient for most purposes has not yet been adequately explored. It is also difficult to introduce rock-fracture interactions into a fracture network model without modelling the host rock matrix in at least the same degree of mechanical detail, with considerable computational burden ensuing. It is possible to constrain the generated network to fit the observations in the near field of the access wells; the far field is necessarily treated statistically.
In recent years, it has become apparent that fracture-size distributions are fractal [
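A fractal (power-law) length distribution of the kind referred to here can be sampled with inverse-CDF draws from a truncated Pareto law, giving many short fractures and few long ones; the exponent and cutoffs below are illustrative assumptions.

```python
import random

# Sample fracture lengths from a truncated power law N(>l) ~ l^-D.
# Exponent D and the length cutoffs are illustrative assumptions.

def sample_lengths(n, l_min=1.0, l_max=1000.0, D=2.0, seed=1):
    """Draw n lengths from a truncated Pareto distribution via inverse CDF."""
    rng = random.Random(seed)
    a, b = l_min ** -D, l_max ** -D
    # Invert F(l) = (a - l^-D) / (a - b) for u uniform in [0, 1).
    return [(a - rng.random() * (a - b)) ** (-1.0 / D) for _ in range(n)]

lengths = sample_lengths(10000)
print(min(lengths), max(lengths))   # all within [l_min, l_max]
```

A stochastic network generator would attach orientations and apertures to such lengths and then condition the realization on near-well observations, as described above.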
Many simulation codes have been developed and each has its own merit. It is helpful to summarize, classify and show which code is applicable to each of the four stages of exploration, stimulation, circulation and heat extraction.
Program code | Method | Applicable stages/processes | Coupling of processes | Remarks |
---|---|---|---|---|
SFV (Willis-Richards, 1995) | Semi-analytical | Stimulation/elastic, joint stiffness | H-M, M-H | Penny-shaped crack aggregate |
FFD (Willis-Richards, 1995) | Semi-analytical | Heat extraction/hydraulic, thermal | H-T, T-H | Penny-shaped crack aggregate |
FRIP (Cundall et al., 1978; Markland, 1991) | DEM | Stimulation/hydraulic, elastic, joint stiffness, heat transfer | H-M, M-H, H-T, T-M | |
Rock Flow (Kolditz, 1995a, 1995b) | FEM | Heat extraction/hydraulic, thermal | H-T, T-H | |
Vychytil and Horii (1999) | FEM | Stimulation/hydraulic, elastic, joint stiffness | H-M, M-H | Micromechanics-based continuum modelling |
FEHM (Zyvoloski, 1983; Zyvoloski et al., 1998) | FEM | Heat extraction/hydraulic, thermal | H-T, T-H | Double porosity |
GEOTH3D (Yamamoto et al., 1995, 1996) | FDM | Heat extraction/hydraulic, thermal | H-T, T-H | Porous medium, local porosity estimated from AE |
GEOCRACK (Du Teau et al., 1994; Swenson et al., 1995) | FEM | Stimulation, heat extraction/hydraulic, elastic, joint stiffness | H-M, T-M, M-H, H-T, T-H | Semi-regular grid |
FRACTure (Kohl et al., 1992, 1993, 1995, 1997; Kohl and Hopkirk, 1995; Hopkirk et al., 1981; Evans et al., 1992) | FEM | Stimulation, heat extraction/hydraulic, thermal, poroelastic, thermoelastic, joint stiffness | H-C | Continuum |
Watanabe and Takahashi (1995) | FDM | Heat extraction/hydraulic, thermal | H-T, T-H | Fractal length distribution |
Willis-Richards et al. (1996) | FDM | Stimulation, heat extraction/hydraulic, thermal, elastic, joint stiffness | H-M, T-M, M-H, H-T, T-H | Fractal length distribution |
FRACAS (Bruel, 1993, 1995; Bruel et al., 1994; Centre d'Information Geologique, 1996) | BIM | Heat extraction/hydraulic, thermal, elastic | H-T, T-H, H-M, M-H | Flow channels along stochastically generated fractures |
Foam is a mix of gas, water and a foamer, and consists of liquid films/lamellae and Plateau borders (Schramm and Wassmuth, [
Ransohoff and Radke [
Roof [
The leave-behind mechanism also occurs during invasion of a gas phase into a porous medium saturated with a liquid phase. Foams generated solely by leave-behind give approximately a five-fold reduction in steady-state gas permeability (Ransohoff and Radke, [
The number of lamellae or bubbles can be increased by the lamella division mechanism when mobile foam bubbles already exist in the porous medium. When a moving lamella train encounters a branch in the flow path, it may split into two, one in each branch of the path (Tanzil et al., [
Foam in porous media can be in a stable state at a certain value of the capillary pressure, called the limiting capillary pressure (Pc*) (Khatib et al., [
Flow regimes of foam in porous media (Osterloh and Jante [
Experimental studies investigating the factors that affect foam dynamics in porous media have been reviewed and their important conclusions highlighted. The studies investigated different types of surfactant (anionic, non-ionic, cationic, and amphoteric) with porous media consisting of steel wool packs, sand packs, crushed cores, and sandstone and carbonate cores. The methodologies used included static and flow tests (pre-prepared foam injection, co-injection of surfactant solution and gas, and SAG injection) at conditions ranging from room to reservoir conditions, in both the absence and presence of oil. The progress of the foam front was monitored by measuring the differential pressure. To identify the liquid saturation and the propagation of foam in porous media, some studies used X-ray computed tomography, e.g. Apaydin and Kovscek [
The surfactant plays an important role in the generation and stability of foam in porous media. It affects the interfacial forces between the gas and the liquid, which in turn affect the value of Pc*. A proper surfactant should have the following properties: it should be capable of generating ample, lasting foam at reservoir conditions; it should have low adsorption and decomposition losses; it should increase the sweep efficiency and the oil recovery; and it should be commercially available and inexpensive (Casteel and Djabbarah, [
In experiments on unconsolidated sand cores at temperatures ranging from 50˚C (122˚F) to 150˚C (302˚F), it was found that surfactant adsorption decreases with increasing temperature, and increases in the presence of clays in the core (Novosad et al., [
The injection flow rate strongly affects foam dynamics. A faster flow rate produces foam with smaller and more uniform bubble sizes, and foam formed at higher pressure is more stable (Friedmann and Jensen, [
Since foam is readily formed during drainage process whenever the porous medium is pre-saturated with a surfactant solution (Chou, [
The foam propagation rate in porous media is significantly affected by rock permeability (Friedmann and Jensen, [
media varies with the permeability (Rossen et al., [
For simplicity, most of the experimental studies have been conducted in the absence of oil (Sayegh and Girard, [
The Smoothed Particle Hydrodynamics (SPH) method will be explained and developed in its original form, based on an updated Lagrangian formalism. SPH is a relatively new numerical technique for the approximate integration of partial differential equations. It is a meshless Lagrangian method that uses a pseudo-particle interpolation method to compute smooth field variables. Each pseudo-particle has a mass, a Lagrangian position, a Lagrangian velocity, and an internal energy; other quantities are derived by interpolation or from constitutive relations.
The advantage of the meshless approach is its ability to solve problems that cannot be effectively solved using other numerical techniques. It does not suffer from the mesh distortion problems that limit Lagrangian approaches based on structured mesh when simulating large deformations. As it is a Lagrangian method it naturally tracks material history information, such as damage, without the diffusion that occurs in Eulerian approaches due to advection.
Gingold [
The introduction of material strength highlighted shortcomings in the basic method: accuracy, tensile instability, zero energy modes and artificial viscosity. These shortcomings were identified in a first comprehensive analysis of the SPH method by Swegle [
In spite of these improvements, the crucial issue of convergence in a rigorous mathematical sense and the links with conservation have not been well understood. Encouraging preliminary steps in this direction have already been put forward very recently by Ben Moussa [
The improvements in accuracy and stability achieved by kernel re-normalisation or correction have not, however, come for free; it is now necessary to treat the essential boundary conditions in a rigorous way. The approximations in SPH do not have the property of strict interpolants, so that in general they are not equal to the particle value of the dependent variable, i.e.
Consequently, it does not suffice to impose zero values for u_I at the boundary positions to enforce homogeneous boundary conditions.
The treatment of boundary conditions and contact was neglected in the conventional SPH method. If the imposition of the free surface boundary condition (stress free condition) is simply ignored, then conventional SPH will behave in an approximately correct manner, giving zero pressure for fluids and zero surface stresses for solids, because of the deficiency of particles at the boundary. This is the reason why conventional SPH gives physically reasonable results at free surfaces. Contact between bodies occurs by smoothing over all particles, regardless of material. Although simple, this approach gives physically incorrect results.
Campbell [
Randles et al. [
o Definition of the boundary (surface normal at the vertices).
o Communication of the boundary value of a dependent variable from the boundary to internal particles.
A penalty contact algorithm for SPH was developed at Cranfield by Campbell and Vignjevic [
Another unconventional solution to the SPH tensile instability problem was first proposed by Dyka [
To utilise the best aspects of the FE and SPH methods it was necessary to develop interfaces for the linking of SPH nodes with standard finite element grids [
From the review of the development of meshless methods, given above, the following major problems can be identified: consistency, stability and the treatment of boundary conditions.
The spatial discretisation of the state variables is provided by a set of points. Instead of a grid, SPH uses a kernel interpolation to approximate the field variables at any point in a domain. For instance, an estimate of the value of a function f(x) at the location x is given in a continuous form by an integral of the product of the function and a kernel (weighting) function.
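In continuous form this kernel estimate is commonly written as:

$$\langle f(\mathbf{x}) \rangle = \int_{\Omega} f(\mathbf{x}')\, W(\mathbf{x} - \mathbf{x}', h)\, d\mathbf{x}' \qquad (1)$$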
where the angle brackets denote a kernel approximation, h is a parameter that defines the size of the kernel support, known as the smoothing length, and x′ is the integration variable.
The kernel function usually has the following properties: compact support, which means that it is zero everywhere except on a finite domain inside the range of the smoothing length, 2h:
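In symbols (condition 2 in the numbering used later in the text):

$$W(\mathbf{x} - \mathbf{x}', h) = 0 \quad \text{for} \quad |\mathbf{x} - \mathbf{x}'| \geq 2h \qquad (2)$$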
Normalised:
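That is, the kernel integrates to one over its support (condition 3 in the numbering used later in the text):

$$\int_{\Omega} W(\mathbf{x} - \mathbf{x}', h)\, d\mathbf{x}' = 1 \qquad (3)$$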
These requirements, formulated by Lucy [
Therefore, it follows that:
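The implied limit is the delta-function property of the kernel, which follows from compact support and normalisation:

$$\lim_{h \to 0} W(\mathbf{x} - \mathbf{x}', h) = \delta(\mathbf{x} - \mathbf{x}') \quad \Longrightarrow \quad \lim_{h \to 0} \langle f(\mathbf{x}) \rangle = f(\mathbf{x})$$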
If the function f(x) is only known at N discrete points, the integral of equation 1 can be approximated by a summation:
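In discrete form, with particle j carrying mass m_j and density ρ_j:

$$\langle f(\mathbf{x}_i) \rangle \approx \sum_{j=1}^{N} \frac{m_j}{\rho_j}\, f(\mathbf{x}_j)\, W(\mathbf{x}_i - \mathbf{x}_j, h)$$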
where mj/ρj is the volume associated with the point or particle j. In the SPH literature the term particle is misleading, as these particles are in fact to be thought of as interpolation points rather than mass elements.
The last equation constitutes the basis of the SPH method. The value of a variable at a particle, denoted by a superscript, is calculated by summing the contributions from a set of neighbouring particles.
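As an illustration only (this sketch is not taken from any of the reviewed papers), the summation interpolant can be written in a few lines of Python; the cubic B-spline kernel used here anticipates the kernel introduced later in the text:

```python
def cubic_bspline(r, h):
    # Cubic B-spline kernel in 1D; the factor 2/(3h) makes it integrate to one.
    q = abs(r) / h
    if q < 1.0:
        w = 1.0 - 1.5 * q**2 + 0.75 * q**3
    elif q < 2.0:
        w = 0.25 * (2.0 - q) ** 3
    else:
        w = 0.0
    return (2.0 / (3.0 * h)) * w

def sph_interpolate(x, particles, h):
    # <f(x)> = sum_j (m_j / rho_j) f_j W(x - x_j, h)
    return sum(m / rho * f * cubic_bspline(x - xj, h)
               for xj, f, m, rho in particles)

# Particles on a uniform line carrying the linear field f(x) = 2x + 1;
# each particle represents a volume dx, so m_j = rho * dx.
dx, rho = 0.1, 1.0
particles = [(i * dx, 2.0 * i * dx + 1.0, rho * dx, rho) for i in range(101)]
estimate = sph_interpolate(5.0, particles, h=1.5 * dx)
```

In the interior of the domain the estimate reproduces the linear field to within a fraction of a percent; near the boundaries the particle deficiency discussed above degrades the approximation.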
The conservation equations in Lagrangian framework are given by:
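The equations referred to are the usual continuum-mechanics statements of mass, momentum and energy conservation, with σ the Cauchy stress and ε̇ the strain rate:

$$\frac{d\rho}{dt} = -\rho \frac{\partial v^{\beta}}{\partial x^{\beta}}, \qquad \frac{dv^{\alpha}}{dt} = \frac{1}{\rho} \frac{\partial \sigma^{\alpha\beta}}{\partial x^{\beta}}, \qquad \frac{de}{dt} = \frac{\sigma^{\alpha\beta}}{\rho}\, \dot{\varepsilon}^{\alpha\beta}$$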
The subscripts α and β denote the component. These equations are the forms proposed by Monaghan [
All the equations above contain integrals of the form:
Using a development in a Taylor series about x' = x, it follows:
As W is an even function, the terms containing odd powers of x' - x vanish. Neglecting second and higher order terms, which is consistent with the overall order of the method, gives:
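Written out, using the normalisation of the kernel, the truncated expansion gives the second-order accurate estimate:

$$\langle f(\mathbf{x}) \rangle = f(\mathbf{x}) \int_{\Omega} W(\mathbf{x} - \mathbf{x}', h)\, d\mathbf{x}' + O(h^2) = f(\mathbf{x}) + O(h^2)$$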
Using the last relation:
All equations include kernel approximations of spatial derivatives:
Integrating by part gives:
The first term of the second member can be rewritten:
Using Greens theorem, it follows:
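In symbols, the sequence of steps above is, with n the outward normal on the surface S of the domain:

$$\langle \nabla f(\mathbf{x}) \rangle = \int_{\Omega} \nabla' f(\mathbf{x}')\, W\, d\mathbf{x}' = \int_{\Omega} \nabla' \!\left( f W \right) d\mathbf{x}' - \int_{\Omega} f(\mathbf{x}')\, \nabla' W\, d\mathbf{x}'$$

$$\int_{\Omega} \nabla' \!\left( f W \right) d\mathbf{x}' = \oint_{S} f(\mathbf{x}')\, W\, \mathbf{n}\, dS$$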
The surface integral is zero if the domain of integration is larger than the compact support of W, or if the field variable assumes a zero value on the boundary of the body (free surface). If neither of these conditions is satisfied, modifications must be made to account for the boundary conditions.
One should note that, in the equations above, the spatial derivatives of the field variables are substituted by derivatives of the kernel:
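Using the antisymmetry of the kernel gradient, ∇′W(x − x′, h) = −∇W(x − x′, h), and discarding the surface term, this gives:

$$\langle \nabla f(\mathbf{x}) \rangle = \int_{\Omega} f(\mathbf{x}')\, \nabla W(\mathbf{x} - \mathbf{x}', h)\, d\mathbf{x}' \approx \sum_{j} \frac{m_j}{\rho_j}\, f(\mathbf{x}_j)\, \nabla_i W_{ij}$$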
It follows:
The final step is to convert the continuous volume integrals to sums over discrete interpolation points. Finally, after a few rearrangements to improve the consistency between all equations, the most common form of the SPH discretised conservation equations is obtained:
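A representative symmetrised form, following Monaghan's construction (sign conventions for the stress vary between references, so this should be read as one common variant), is:

$$\frac{d\rho_i}{dt} = \sum_j m_j \left( v_i^{\beta} - v_j^{\beta} \right) \frac{\partial W_{ij}}{\partial x_i^{\beta}}$$

$$\frac{dv_i^{\alpha}}{dt} = \sum_j m_j \left( \frac{\sigma_i^{\alpha\beta}}{\rho_i^2} + \frac{\sigma_j^{\alpha\beta}}{\rho_j^2} \right) \frac{\partial W_{ij}}{\partial x_i^{\beta}}$$

$$\frac{de_i}{dt} = \frac{1}{2} \sum_j m_j \left( \frac{\sigma_i^{\alpha\beta}}{\rho_i^2} + \frac{\sigma_j^{\alpha\beta}}{\rho_j^2} \right) \left( v_j^{\beta} - v_i^{\beta} \right) \frac{\partial W_{ij}}{\partial x_i^{\alpha}}$$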
To complete the discretisation one has to define the kernel function. Numerous possibilities exist; a large number of kernel function types are discussed in the literature, ranging from polynomial to Gaussian. The most common is the B-spline kernel proposed by Monaghan [
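The B-spline kernel referred to here is, in its standard cubic form:

$$W(q, h) = \frac{C}{h^D} \begin{cases} 1 - \dfrac{3}{2} q^2 + \dfrac{3}{4} q^3 & 0 \le q < 1 \\[4pt] \dfrac{1}{4} (2 - q)^3 & 1 \le q < 2 \\[4pt] 0 & q \ge 2 \end{cases}, \qquad q = \frac{|\mathbf{x} - \mathbf{x}'|}{h}$$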
D is the number of dimensions of the problem (i.e. 1, 2 or 3).
C is the scaling factor, which depends on the number of dimensions and ensures that the consistency conditions 2 and 3 are satisfied:
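For the cubic B-spline, the standard scaling factors (the Monaghan and Lattanzio values) are:

$$C = \frac{2}{3} \;\; (D=1), \qquad C = \frac{10}{7\pi} \;\; (D=2), \qquad C = \frac{1}{\pi} \;\; (D=3)$$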
If large deformations occur, particles can become widely separated. If the smoothing length remains constant, the particle spacing can become so large that particles no longer interact. Conversely, in compression, many particles may enter each other's neighbourhood, which can significantly slow down the calculation. In order to avoid these problems, Benz [
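The evolution rule referred to keeps the smoothing length proportional to the local inter-particle spacing and is usually written as:

$$h = h_0 \left( \frac{\rho_0}{\rho} \right)^{1/n}$$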
where h0 and ρ0 are the initial smoothing length and density, and n is the number of dimensions of the problem.
Another frequently used equation of evolution based on conservation of mass is:
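A common form of this evolution equation, consistent with the continuity equation, is:

$$\frac{dh}{dt} = -\frac{1}{n} \frac{h}{\rho} \frac{d\rho}{dt} = \frac{h}{n}\, \mathrm{div}(\mathbf{v})$$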
where div(v) is the divergence of velocity.
An important step in the SPH computation is the neighbour search, a task that can be extremely time consuming. The neighbour search routine lists, at each time step, the particles that lie inside the neighbourhood of each particle. A direct search over every pair of particles is particularly inefficient; a bucket sort algorithm is more efficient. In this method, an underlying grid of cell side 2h is generated and the particles are sorted according to the box in which they are located. For a given particle, candidate neighbours are only those contained in the same box and the surrounding boxes. This allows the computational time to be cut down from a default time proportional to N² for a direct search to N·log(N), where N is the total number of particles.
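The bucket sort described above can be sketched as follows (an illustrative 2D sketch only; it assumes non-negative coordinates for simplicity):

```python
from collections import defaultdict

def build_buckets(points, h):
    # Sort particles into an underlying grid of boxes of side 2h.
    cell = 2.0 * h
    buckets = defaultdict(list)
    for i, (x, y) in enumerate(points):
        buckets[(int(x // cell), int(y // cell))].append(i)
    return buckets

def neighbours(i, points, buckets, h):
    # Candidates come only from the particle's own box and the 8 surrounding
    # boxes; a distance test then keeps those inside the 2h support radius.
    cell = 2.0 * h
    xi, yi = points[i]
    cx, cy = int(xi // cell), int(yi // cell)
    found = []
    for bx in (cx - 1, cx, cx + 1):
        for by in (cy - 1, cy, cy + 1):
            for j in buckets.get((bx, by), ()):
                if j != i:
                    xj, yj = points[j]
                    if (xi - xj) ** 2 + (yi - yj) ** 2 <= (2.0 * h) ** 2:
                        found.append(j)
    return found

pts = [(0.0, 0.0), (0.5, 0.0), (3.0, 3.0), (0.0, 0.4)]
b = build_buckets(pts, h=0.5)
near0 = sorted(neighbours(0, pts, b, h=0.5))
```

Only the two particles within the 2h radius of particle 0 are returned; the distant particle is never even distance-tested, which is the source of the speed-up.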
The basic SPH method has shown several problems when used to model a solid body:
o Consistency.
o Tensile instability.
o Zero-energy modes.
The SPH method in its continuous form is inconsistent within 2 h of the domain boundaries due to the kernel support incompleteness. In its discrete form the method loses its 0th order consistency not only in the vicinity of boundaries but also over the rest of the domain if particles have an irregular distribution. Meglicki [
The first order consistency of the method can be achieved in two ways: first, by correcting the kernel function; second, by correcting the discrete form of the convolution integral of the SPH interpolation. Johnson [
Derivation of normalised corrected gradient SPH formula
The approximation of fields using a Normalised Corrected SPH (NCSPH) interpolation has been published [
An interpolation technique should not affect homogeneity of space. One way of demonstrating this is to prove that the interpolation of the solution space itself is independent of a translation of the coordinate axes. In order to express this statement mathematically one can start by writing the general expression for the SPH interpolation of a vector field:
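In the notation already introduced, this general interpolation of a vector field u reads:

$$\langle \mathbf{u}(\mathbf{x}_i) \rangle = \sum_j \frac{m_j}{\rho_j}\, \mathbf{u}_j\, W_{ij}$$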
If the field to be interpolated is the solution space then:
And the last equation becomes:
In a different, translated coordinate system, this equation is:
where X′ is the coordinate vector in the new coordinate system. If the translation vector by which the origin of the coordinate system was moved is defined as ∆X, then the relationship between X and X′ is:
If the interpolated coordinates of a point are independent of the translation of coordinate axes then the following should hold:
By substituting the last two equations, for both X_i and X′_i, one obtains:
Alternatively:
By comparison between:
It is clear that the discretised space will only be homogeneous if the following condition is satisfied:
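With V_j = m_j/ρ_j, this homogeneity condition is the zeroth-order consistency requirement:

$$\sum_j \frac{m_j}{\rho_j}\, W_{ij} = 1$$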
Similarly, an interpolation technique should not affect isotropy of space. One way of demonstrating this is to prove that the interpolation of the solution space itself is independent of a rotation of the coordinate axes. The same holds for the SPH approximation. The change in coordinates due to a rotation of the coordinate axes is:
where C is the rotation matrix. For small rotations, this can also be written as:
where ∆φ is the rotation vector.
If one wants to ensure that the SPH approximation maintains the isotropy of space, then the approximation has to satisfy the following condition:
This means that the rotation matrix has to be approximated exactly.
In order to develop this equation one can start by rewriting:
where
This means that, for small rotations, the rotation matrix is given by:
The approximation of the rotated coordinates is:
This means that the requirement on the interpolation is:
Expanding this expression leads to:
Therefore, to preserve the isotropy of space, the following condition has to be satisfied:
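With the same notation, the isotropy requirement reduces to the first-moment condition:

$$\sum_j \frac{m_j}{\rho_j}\,(\mathbf{x}_j - \mathbf{x}_i)\, W_{ij} = \mathbf{0}$$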
The form of the normalised kernel function and the approximation of the first order derivatives which provides first order consistency is given in
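A common way to achieve this normalisation is the Shepard correction, shown here as one representative choice (the cited reference may use a different but equivalent form):

$$\tilde{W}_{ij} = \frac{W_{ij}}{\sum_k \dfrac{m_k}{\rho_k}\, W_{ik}}$$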
Using the NCSPH approximations, the conservation equations assume the following form:
| | Space homogeneity | Space isotropy |
|---|---|---|
| Condition | $\sum_j \frac{m_j}{\rho_j} W_{ij} = 1$ | $\sum_j \frac{m_j}{\rho_j} (\mathbf{x}_j - \mathbf{x}_i) W_{ij} = 0$ |
| Normalised-corrected | satisfied | satisfied |

A Von Neumann stability analysis of the SPH method was conducted by Swegle [
The analysis showed that the tensile instability results from an effective stress with a non-physical negative modulus, produced by the interaction between the constitutive relation and the kernel interpolation. In other words, the kernel interpolation used in the spatial discretisation changes the nature of the original partial differential equations.
These changes in the effective stress amplify, rather than reduce, perturbations in the strain. From Swegle’s stability analysis, it emerged that the criterion for stability was that:
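With the sign convention given immediately below, the stability condition is:

$$W''\, \sigma \le 0$$

so that instability occurs whenever $W''\, \sigma > 0$.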
where W'' is the second derivative of W with respect to its argument and σ is the stress, negative in compression and positive in tension.
This criterion states that instability can also occur in compression, not only in tension. Indeed, if the slope of the derivative of the kernel function is positive, the method is unstable in tension and stable in compression and if the slope is negative, it is unstable in compression and stable in tension.
The fact that this instability manifests itself most often in tension can be explained: for the commonly used B-spline kernel, at the typical nearest-neighbour spacing (q ≈ 1) the second derivative W'' is positive, so the instability criterion is met only when the stress is tensile.
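This sign argument can be checked numerically; the function below evaluates the second derivative of the cubic B-spline shape function (the positive scaling constants are omitted, since they do not change the sign):

```python
def w_second_derivative(q):
    # Second derivative of the cubic B-spline shape:
    #   f(q) = 1 - 1.5 q^2 + 0.75 q^3   for 0 <= q < 1
    #   f(q) = 0.25 (2 - q)^3           for 1 <= q < 2
    if q < 1.0:
        return -3.0 + 4.5 * q
    elif q < 2.0:
        return 1.5 * (2.0 - q)
    return 0.0

# Close to the origin W'' < 0, but at the typical nearest-neighbour
# distance q ~ 1 the sign flips to W'' > 0, so with a tensile stress
# (sigma > 0) the product W'' * sigma is positive and the Swegle
# criterion predicts instability.
sign_close = w_second_derivative(0.5)
sign_neighbour = w_second_derivative(1.0)
```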
In order to remedy this problem several solutions have been proposed. Guenther [
The conservative smoothing and the artificial repulsive forces methods have limited applicability and have to be used with caution, because they affect the strength of the material being modelled. At present, the most promising approach is non-collocational spatial discretisation. This problem is the focus of attention of a number of researchers working on meshless methods.
Zero-energy modes are a problem that is not unique to particle methods. These spurious modes, which correspond to modes of deformation characterised by a pattern of nodal displacement that produces zero strain energy, can also be found in the finite difference and finite element methods.
Swegle [
Then one would obtain a zero kernel-interpolated derivative at all points. Hence, this mode cannot be detected and can grow unhindered. This means that this mode could grow to a level where it dominates the solution.
Zero-energy or spurious modes are characterised by a pattern of nodal displacement that is not a rigid-body motion but nevertheless produces zero strain energy.
One of the key ideas to reduce spurious oscillations is to compute derivatives away from the particles where kernel functions have zero derivatives. Randles [
Smoothed particle hydrodynamics (SPH) is a meshfree particle method, which not only uses particles as the computational frame for interpolation or differencing, but also uses the particles to carry the material properties.
In this survey, the smoothed particle hydrodynamics method and its recent developments in numerical algorithms and applications have been reviewed. In terms of methodology and numerical approximation, the basic concepts of kernel and particle approximations, as well as the main techniques for developing SPH formulations, have been addressed. Some smoothing kernel functions have been reviewed, together with the conditions for constructing them.
The emphasis is on the consistency problem, which traditionally limits the accuracy of the SPH method. The general definition of consistency has been surveyed. Several important numerical topics have been investigated, including boundary treatment, solid obstacles, material interface treatment and tensile instability.
The last decades have witnessed the great success of SPH developments in methodology and applications, yet many tasks and challenges remain. To achieve a reliable solution, computational accuracy, consistency, efficiency, stability and convergence need to be incorporated into good SPH algorithms. In general, SPH has great potential for many problems in engineering and science. It has salient advantages over traditional grid-based numerical models in treating large deformations, tracking free surfaces, moving interfaces and deformable boundaries, and resolving moving discontinuities such as cracks and shock waves.
The attraction of the SPH method lies not only in savings of time and computational resources, but also in its diversified applications. One obvious advantage is that the SPH method provides a feasible physical model for non-continuum problems, as it shares features with the classic molecular dynamics method and the dissipative particle dynamics method. Therefore, it is very attractive to apply the SPH method to problems where the object of main concern is a set of discrete physical particles rather than a continuum, e.g. environmental flows with solid and fluid particles such as landslides, mudslides and multiphase flow in an oil reservoir.
L. Castañeda gratefully acknowledges the financial support from the Escuela Superior de Ingeniería Mecánica y Eléctrica Unidad Ticomán, Instituto Politécnico Nacional, through Project no. 20150465. L. Castañeda also thanks Angel Maldonado Austria for useful discussions.