n be very different from an averaged parameter. Porosity, for example, also changes with pressure and depth, so it is important to understand which assumptions must be made when assigning an average value to an extended rock body (one with, for example, an unknown fracture field). Fractures are particularly difficult to reproduce in a numerical model: their accurate description is important if their spatial extension is comparable to the size of the rock formation; otherwise, average properties can be used. In the most common reservoir simulation software, such as TOUGH2  , it is also possible to define different models for the relative permeability (Rl) as a function of the liquid saturation in the mixture (geothermal fluid) (Petrasim Manual  ). Let us now consider a single-phase (liquid) flow in a geothermal system. According to Rybach and Muffler  , the following assumptions have to be made:
o Rock matrix is homogeneous and isotropic, in particular with regard to porosity, permeability and thermal conductivity (assumed to be independent of temperature).
o Incompressible fluid, with density (ρ) and kinematic viscosity (ν) depending on temperature according to the laws:
with ρ0, ν0, α, β and T0 being appropriate constants, while s(T) is a function of T. Pressure work and viscous dissipation can be neglected, the internal energies of the liquid (l) and of the solid matrix (r) being
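The temperature laws referred to above are not reproduced here. A commonly assumed form consistent with the constants listed (a linear, Boussinesq-type density law and a viscosity factorized through s(T)) would be, as a sketch:

```latex
\rho(T) = \rho_0\left[1 - \alpha\,(T - T_0)\right], \qquad
\nu(T)  = \nu_0\, s(T), \quad \text{e.g. } s(T) = e^{-\beta\,(T - T_0)} ,
```

where the exponential form of s(T) is an assumption for illustration; the essential point is that ρ decreases approximately linearly with temperature while ν decreases strongly.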
with cl,vol and cr,vol the specific volumetric heat capacities of liquid and rock. Under these assumptions the balance equations of mass, momentum and energy can be written, respectively, as (according to  ):
λm being the thermal conductivity of the solid-liquid mixture.
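The balance equations themselves are lost in this reproduction. For single-phase, incompressible Darcy flow under the assumptions above, the standard forms (a sketch, with symbols as defined in the text and λm the mixture conductivity; the exact grouping of heat-capacity terms may differ from the original source) are:

```latex
\nabla\cdot\mathbf{v} = 0, \qquad
\mathbf{v} = -\frac{k}{\rho\,\nu}\left(\nabla p - \rho\,\mathbf{g}\right), \qquad
\left[\phi\,c_{l,vol} + (1-\phi)\,c_{r,vol}\right]\frac{\partial T}{\partial t}
  + c_{l,vol}\,\mathbf{v}\cdot\nabla T
  = \nabla\cdot\left(\lambda_m \nabla T\right),
```

where v is the Darcy velocity and φ the porosity.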
A two-phase system, in which the vapour phase is also considered, is described by similar equations  . The following assumptions about porosity must be made in this case:
o Porosity ϕ only depends on local pressure of fluid.
o Liquid and vapour phases are in local thermal equilibrium.
The equation of conservation (respectively) of mass, momentum (liquid and vapour) and energy can be then written as follows:
where E is the internal energy of the liquid-vapour mixture, k is the permeability tensor and Rl is the relative permeability of the i-th phase (i = l, g for liquid or vapour). Some fundamental differences between flow in porous, coherent media and flow in fractured media must then be taken into account (   ):
a. Permeability induced by fracturing is generally higher than average formation permeability.
b. Fracturing permeability is usually anisotropic (it depends on fracturing preferential direction).
c. The permeability due to fracturing depends considerably more on the pressure and stress field in the rock than the rock matrix permeability does.
In the recent paper by Zeng et al.  a discussion about the methods for fractured systems simulation is given. Geothermal fracture systems can be represented essentially in two ways:
1. Discrete fracture network method (fracture orientation, spacing and mechanical properties).
2. Equivalent continuous porous media method (the fracture system is seen as an equivalent continuous media, with similar methodologies as for double porosity and double permeability).
In Fox et al.  an analytical and numerical model for fractured reservoirs (multi-fractured EGS) is presented and simulated. Figure 3 shows an example of a simplified scheme of fluid circulation in the fractures (b being the fracture aperture and xs the mutual spacing of the fractures). It is also possible to couple thermal-flow problems with chemical or geomechanical problems in the same numerical simulation, as in some models presented in the literature, for example the fully-coupled flow and geomechanical model by Hu et al.  .
Figure 3. Conceptual scheme for multi-fracture vertical geothermal system (xs is the spacing between fractures, b is the fracture aperture).
4. Hot Dry Rock (HDR) Reservoirs
Hot Dry Rock geothermal energy extraction  has come to mean the forced circulation of water between injection and recovery boreholes through a naturally fractured rock mass. In such reservoirs, typically developed in granitic or similar crystalline basement, the intact rock usually has very low permeability and porosity. Heat extraction is controlled by conduction through the intact rock blocks towards water-conducting fractures, whose apertures may have been artificially increased, where advection transports thermal energy towards the recovery well(s).
Fluid flow rates in fractures are strongly dependent on the fracture aperture, typically the fluid velocity in response to a given pressure gradient is proportional to the square of the fracture aperture. This relation is commonly expressed in terms of the parallel plate laminar flow solution (Reynolds equation), using a hydraulic aperture related to the mean mechanical fracture aperture. The hydraulic aperture is a non-linear function of effective normal stress, the fracture morphology, material properties and its history. Shear displacement of the fracture wall, caused by engineering interventions such as high-pressure fluid injection (“stimulation”), or chemical deposition/solution may alter the hydraulic aperture irreversibly and change the reservoir properties. HDR systems, therefore, respond non-linearly, and sometimes irreversibly, to changes in operational parameters.
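The parallel-plate (Reynolds, or "cubic law") relation described above can be sketched numerically; all parameter values below are illustrative, not site data:

```python
# Cubic-law (parallel-plate) estimate of volumetric flow in a single fracture:
# q = (w * b**3 / (12 * mu)) * dP/dx for laminar flow between smooth plates.
# Mean velocity q/(w*b) is then proportional to b**2, as stated in the text.

def fracture_flow_rate(aperture_m, width_m, dp_dx_pa_per_m, mu_pa_s=1.0e-3):
    """Volumetric flow rate [m^3/s] from the parallel-plate (cubic) law."""
    return width_m * aperture_m**3 / (12.0 * mu_pa_s) * dp_dx_pa_per_m

q1 = fracture_flow_rate(aperture_m=1e-4, width_m=1.0, dp_dx_pa_per_m=1e4)
q2 = fracture_flow_rate(aperture_m=2e-4, width_m=1.0, dp_dx_pa_per_m=1e4)
# Doubling the aperture multiplies the flow rate by 2**3 = 8.
print(q1, q2 / q1)
```

Because the hydraulic aperture enters as a cube, even small irreversible aperture changes from shear or chemical deposition translate into large changes in flow resistance, which is why HDR reservoirs respond so non-linearly.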
HDR field experiments (reservoir stimulation and circulation) are expensive to execute and cannot be repeated indefinitely from the same borehole, due to progressive irreversible changes in the system under study. Numerical experimentation provides a means to demonstrate understanding of the available experimental results and to extend that understanding to other conditions. This leads to a predictive capability and to support for design and operational planning.
The HDR modelling task is, in some aspects, more demanding than that faced by the underground storage or hydrocarbon industries. This stems firstly from the strong dependence of reservoir flow properties on the fluid pressures encountered during high pressure injection and secondly from the lack of field data sufficient to adequately characterise the rock mass. Modelling for underground radioactive waste isolation is aided by detailed measurements made in in situ rock laboratories, whilst hydrocarbon reservoir modelling benefits from rock mass imaging using seismic and the possibility of direct measurement of the critical parameters of interest (porosity, permeability, wettability) from core recovery, often from many wells. In contrast, within the HDR community there is little opportunity for direct measurement of many of the parameters of critical interest, such as in situ fracture compliance, shear behaviour or fracture length distributions. Furthermore, in the absence of extensive operational experience of HDR systems, modelling plays a critical role in assessing design and performance expectations. In order to carry out these tasks effectively the HDR community needs to achieve a level of confidence in its modelling methods and also predictive capabilities consistent with the investment needed for commercial prototypes.
The objectives of HDR models are to attempt one or more of the following, but at present no model manages to achieve all five objectives.
a. To help understand the performance of actual field experimental systems and to help design measures to improve inadequate system performance (e.g. re-drilling, secondary stimulations, sealant treatments).
b. To predict/simulate the effects of different reservoir creation strategies; mainly high pressure fluid injections and proppant placement (stimulation models).
c. To predict the thermal performance with time of an engineered reservoir under a given operating strategy (thermal drawdown curve).
d. To predict the hydraulic performance of an engineered reservoir under a given operating strategy (impedance, water loss).
e. To optimise well placement and operating strategy in relation to both reservoir creation and circulation.
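Objective (c), the thermal drawdown curve, can be illustrated with an erf-type single-fracture heat-sweep approximation in the spirit of the classical analytical solutions (e.g. Gringarten-type); the exact prefactor depends on geometry, so this is qualitative only, and all parameter values are hypothetical:

```python
import math

# Outlet temperature of water swept across a conductively recharged fracture
# surface. Hypothetical parameters: rock at 200 degC, injection at 60 degC,
# heat-exchange area 1 km^2, 50 kg/s circulation.

def outlet_temperature(t_s, T_rock=200.0, T_inj=60.0, area_m2=1.0e6,
                       mdot_kg_s=50.0, c_w=4200.0, lam_r=3.0, kappa_r=1.2e-6):
    """Production temperature [degC] after circulating for t_s seconds."""
    if t_s <= 0.0:
        return T_rock
    x = lam_r * area_m2 / (mdot_kg_s * c_w * math.sqrt(math.pi * kappa_r * t_s))
    return T_inj + (T_rock - T_inj) * math.erf(x)

year = 365.25 * 24 * 3600.0
drawdown = [outlet_temperature(n * year) for n in (1, 5, 10, 20)]
print(drawdown)  # monotonically declining towards T_inj
```

The shape of this curve (slow initial decline, then accelerating drawdown) is what models under objective (c) attempt to predict for a given operating strategy.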
HDR reservoir models can be examined in the light of their treatment of the coupling of fluid mechanical, rock mechanical, thermal, chemical and hydraulic processes within the reservoir  -  , together with their representation of the geometry of the underground system  -  . The way in which these two aspects, coupling and geometry, are approached will be the result of compromises between the objectives of the model and the resources available. Typically, strong coupling can be placed in geometrically reduced or abstract models, whilst geometrically “realistic” models require excessive computational resources to support strong coupling. Thus, at the present time, it is likely that the best approach to HDR modelling will be made through optimisation of the compromises required, rather than through efforts to simulate with exactitude all processes and the complex geometry of fractured rock within a single model.
The commonly observed distinction between models of stimulation and circulation is a natural one. There is a change from rapidly evolving fluid-rock mechanical coupling on the scale of hours to days during stimulation, to thermal-mechanical-chemical coupling over a time scale measured in years during circulation. HDR field experiments to date indicate that circulation through a virgin rock mass does not appear viable, therefore there is a necessity for models of heat extraction (i.e. circulation) to combine with, or use the results of, stimulation models, to show how the required starting conditions for potentially economic reservoirs may be engineered.
4.1. Modelling of Governing Factors
Rock stress and joint mechanical properties: Knowledge of in-situ stress and fluid pressure are critical for modelling reservoir behaviour. The interaction between stress, pressure, fracture orientation and fracture friction angles determines the fluid pressure that has to be applied to start the stimulation processes. The region activated by hydraulic stimulation often appears to be an ellipsoid with the major axis almost coinciding with the direction of maximum compressive stress. The vertical effective stress gradient determines the migration direction of the stimulation (upward or downward)  . The fracture mechanical properties, i.e. joint normal stiffness, shear stiffness and shear dilation rate (broadly determined by the roughness at wavelengths comparable to the displacement), are the key factors inducing the nonlinear coupling among the fluid pressure, fracture aperture and rock stress.
Flow within fractures: Corey-type relative permeability is usually assumed. When the reservoir is highly fractured, overall permeability depends on the characteristics and spatial distribution of the fractures. Niibori and Chida  investigated the effect of the fracture distribution on the relative permeability. Channelling of flow on fracture surfaces is to be expected  . Channelling reduces the overall flow impedance compared to uniform flow in equivalent mechanical apertures; however, it also reduces the heat exchange area. Channelling may be treated stochastically by using a model of fracture aperture composed of mated fractal fracture surfaces  . Simulation of fluid flow through such a fracture is possible, but at this time it is numerically impractical for field-size fractures, and approximate treatments are required. Turbulent, or at least non-linear, flow has been suggested to occur in the experimental reservoir at Soultz  -  . The contrast in permeability between the intact rock and the fractures is large in HDR reservoirs; numerical calculations are difficult and special modelling is required  if both fracture and intact rock permeability are to be represented explicitly.
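The Corey-type relative permeability assumption mentioned above can be sketched as follows; the residual saturations and exponents are illustrative choices, not values from the text:

```python
# Corey-type relative permeability curves, as commonly assumed for two-phase
# flow in HDR/geothermal models. Residual saturations and exponents are
# hypothetical.

def corey_relperm(s_w, s_wr=0.2, s_gr=0.05, n_w=4.0, n_g=2.0):
    """Return (kr_liquid, kr_gas) for liquid saturation s_w (Corey model)."""
    s_e = (s_w - s_wr) / (1.0 - s_wr - s_gr)   # effective saturation
    s_e = min(max(s_e, 0.0), 1.0)              # clamp to [0, 1]
    return s_e ** n_w, (1.0 - s_e) ** n_g

krl, krg = corey_relperm(0.6)
print(krl, krg)  # both phases mobile at intermediate saturation
```

Simulators such as TOUGH2 expose several such relative-permeability models as user-selectable options; Corey is simply the most common default choice.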
Heat transfer: It is common for models to assume that the mean temperature of the fluid within a fracture element is equal to the mean temperature of the fracture surface. But, this assumption may not be appropriate when the flow rate is large or fluid flow is channelled. An experimental study  on the heat transfer from the fracture surface to the fluid has provided surface heat transfer coefficients at fracture surfaces. Such considerations are only likely to be important in a small volumetric fraction of HDR reservoirs.
The heat transfer effect of flow channelling within fractures needs further consideration in HDR reservoir models. In most reservoir-scale models, the individual fractures are modelled as uniform aperture discs or polygons. Within models suitable for PCs further small-scale resolution of flow channelling is not possible on computational grounds. Therefore, there is a need for simplified representation of the effect. Channelling on a scale of less than a few meters probably does not have to be spatially modelled provided local time-smearing on the scale of a few months is accepted. This time resolution is quite adequate for modelling long-term heat production. A simple lumped-parameter model should be sought to cover the heat transfer effects of channelling for use at such a geometric scale. Channelling makes it harder for heat to leave the rock mass. The effect could be modelled by the introduction of a layer of “virtual rock” lining each fracture. Such a layer of virtual rock would mimic the effective reduced heat transfer area near the fracture, but would be given no thermal mass. Effectively it would become a large-scale heat transfer coefficient.
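The "virtual rock" idea above amounts to replacing the unresolved channelling by a large-scale heat transfer coefficient. A minimal sketch, with hypothetical values, is a massless conductive layer of thickness d lining each fracture:

```python
# Massless "virtual rock" layer of thickness d with rock conductivity lam_r
# acts as a large-scale heat transfer coefficient h_eff = lam_r / d.
# Stronger channelling -> thicker virtual layer -> lower effective coefficient
# -> heat leaves the rock mass more slowly. Values are hypothetical.

def effective_htc(lam_r_w_mk, layer_thickness_m):
    """Large-scale heat transfer coefficient [W/m^2/K] of the virtual layer."""
    return lam_r_w_mk / layer_thickness_m

h_weak = effective_htc(3.0, 0.25)    # mild channelling
h_strong = effective_htc(3.0, 0.75)  # strong channelling
print(h_weak, h_strong)  # 12.0 4.0
```

Because the layer is given no thermal mass, it delays heat transfer without storing energy, which matches the time-smearing on the scale of months that the text accepts.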
4.2. Structure of Reservoirs as Derived from Field Observations
Drilling noise measurement and acoustic emission (AE) reflection techniques can give information on reflecting layers that seem to be closely related to the larger subsurface structures. These can be used for modelling in the first stages of exploitation, i.e. before hydraulic stimulation.
Models to estimate the local transmissivity distribution within the reservoir by using AE magnitude through the usual approach of estimating the slip area and magnitude of slip movement have been proposed  -  . By the use of the “collapsing technique”  and “doublet analysis”  , both of which are methods for refining the internal structure of clouds of AE, structures that may be predominant fluid flow paths can be delineated from the AE clouds recorded during hydraulic stimulation. In subsequent circulation simulation, these main flow paths could be taken into account. This is important, because the evaluation of the behaviour of reservoirs during circulation and heat extraction is highly dependent on the condition of reservoirs at the start of circulation and heat extraction.
Almost always aseismic regions are found within the AE cloud detected during hydraulic stimulation. It is not known whether these regions are accepting fluid or are mostly mechanically and hydraulically inactive. In order to overcome this difficulty, new approaches to acquiring information on the flow paths are needed. The AE tomography method  and Electrical Resistance Tomography could be used to acquire images of fluid flow paths and of changes of the flow paths with time, but remain dependent on experimental data acquisition opportunities.
Tracers can be used to indirectly assess the state of a reservoir. Although tracers have been used for a long time, their full potential for understanding HDR/HWR reservoirs has not yet been realized. Reactive tracers (those that thermally degrade in a controlled way and those that attach themselves to the crack surface) could be used to monitor the cooling of the reservoir and the surface area of the reservoir. By using tracers repeatedly during operation, a measure of the dynamic changes occurring in a reservoir could be obtained. It may even be possible to use controlled diameter particles to measure joint opening during reservoir operations.
Models must be developed to ensure that they reflect the reservoir’s heterogeneity as revealed by increasingly sophisticated field data processing, whilst at the same time recognizing that interpretational ambiguities remain.
4.3. Overall Modelling of Reservoir Performance
The modelling of a reservoir, including all interactions between fracture/channel flow paths and the host rock, all geometric irregularities and all changes in the flow paths during stimulation, development and operation, is a challenging problem. This is, of course, the original problem, which was posed more than 20 years ago.
Willis-Richards and Wallroth (1995)  -  classified models using six categories for the representation of reservoir geometry: one abstract geometry class, four reduced geometry model classes and one realistic geometry class. All lumped-parameter models fall into the category of abstract geometry models. Single planar fracture models, such as PKN and CGDD models, fall into the category of reduced geometry. Network models with regular grids also fall into this category. Realistic geometry models use computer-generated fracture networks. In this review, we classify models into three categories: lumped-parameter models, continuum models and network models.
Lumped-parameter models: In  -   two simple models were proposed, based on standard analytic solutions for the mechanical behaviour of cracks combined with a simple representation of the frictional properties of fracture surfaces. The two models describe the stimulation and circulation of HDR reservoirs in a naturally-fractured basement. Hyodo  applied a simple model, based on an analogy with an electrical circuit, to the data from the Hijiori experiments. The model consists of the wellbore impedance, reservoir impedance, loss impedance and reservoir capacitance. All of these are flow-rate dependent, and the system parameters can be estimated by simple field experiments, such as step-rate tests.
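A toy version of such a series-impedance circulation model can be sketched as follows; the functional forms, exponent and coefficients are hypothetical, chosen only to illustrate how step-rate tests constrain the parameters:

```python
# Lumped "electrical circuit" sketch in the spirit of Hyodo's Hijiori
# analysis: injection overpressure built from wellbore, reservoir and loss
# impedances in series, with flow-rate dependence represented here by a
# single non-linear exponent n. All coefficients are hypothetical.

def circulation_pressure(q_l_s, z_well=0.02, z_res=0.08, z_loss=0.03, n=1.5):
    """Injection overpressure [MPa] for flow rate q [l/s] (series impedances)."""
    return (z_well + z_res + z_loss) * q_l_s ** n

# A step-rate test: pressures at increasing flow rates, as used in the field
# to fit the impedance parameters.
steps = [circulation_pressure(q) for q in (5.0, 10.0, 20.0)]
print(steps)
```

The appeal of such abstract-geometry models is exactly this: a handful of parameters, all identifiable from routine field tests, at the cost of any spatial resolution.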
Continuum modelling approach: In continuum modelling, using either finite difference or finite element approaches, fractures are represented as lying between volumes of matrix rock. These rock masses may be permeable, thermally conducting and/or elastic according to the situation under examination. A variant on this approach has recently been proposed by Vychytil and Horii  based on micromechanics. The main input data are joint spacing and the preferred orientation of joint sets. It can take into account the aperture change of joints due to slippage along them and is very effective from the viewpoint of the trade-off between computing time and the geometric realism of the model, especially in the very early stage of exploitation, during which information on the subsurface structure is very restricted. Three-dimensional simulation is performed fairly easily. The model was successfully applied to data from the Fenton Hill site.
Fracture network modelling: Computer-generated fracture networks conditioned by field data could be used to simulate the operation of the whole reservoir. However, some problems remain to be solved. For example, the active and stimulated networks in a HDR reservoir do not correspond exactly to the pre-stimulation (i.e. undisturbed) fracture network. The mechanical and hydraulic processes of stimulation determine which network members can become activated through shear movement and fracture dilation. The additional thermal, chemical and long-term mechanical effects during operation determine their further development. Thus, accurate representation of initial conditions and good representations of the relevant mechanisms are required. Furthermore, whether a statistically accurate realization is sufficient for most purposes has not yet been adequately explored. Also, it is difficult to introduce rock-fracture interactions into a fracture network model without modelling the host rock matrix in at least the same degree of mechanical detail, with considerable computational burden ensuing. It is possible to constrain the generated network to fit the observations in the near field of the access wells. The far field is necessarily treated statistically.
In recent years, it has become apparent that fracture-size distributions are fractal    , or at least close to fractally distributed, although it is commonly assumed that the spatial distribution of fractures is random. The undisturbed aperture-fracture size relationship is not known at the scale of geothermal reservoirs. This is potentially critical, because the fluid flow resistance depends primarily on a high power of the aperture. Transmissivity, tortuosity and storage are the common field data relating to aperture.
4.4. Simulation Codes
Many simulation codes have been developed and each has its own merit. It is helpful to summarize, classify and show which code is applicable to each of the four stages of exploration, stimulation, circulation and heat extraction. Table 1  summarizes codes for this purpose. In the table, the column "coupling of processes" represents the coupling between fundamental processes (Tsang, 1987, 1991), i.e. H-M, rock stresses responding to fluid pressure changes; T-M, rock stresses responding to temperature changes; M-H, fracture flow apertures responding to stress change; H-T, heat transfer by flowing water; and T-H, viscosity and flow patterns changing with temperature. DEM, FEM, FDM and BIM stand for Discrete Element Method, Finite Element Method, Finite Difference Method and Boundary Integral Equation Method, respectively. There are no simulation codes included in Table 1 for water/rock chemical interaction. Indeed, the review paper by Willis-Richards and Wallroth (1995) cited only two papers covering such interactions, by Robinson and Pendergrass (1989) and by Shoji et al. (1989), both of which are for a one-dimensional crack.
5. Foam Dynamics in Porous Media and Its Applications in Enhanced Oil Recovery
Foam is a mixture of gas, water and a foamer, and consists of liquid films (lamellae) and Plateau borders (Schramm and Wassmuth,  ; Vikingstad,  ). A Plateau border is the junction of three lamellae, at angles of 120˚ (Figure 4). Foam is a special case of two-phase (gas-liquid) flow. For gas-liquid flow in porous media without foam, the gas phase resides in the centre of the large pores, occupying the main flow paths, while the liquid phase fills the small pores and coats the walls of the large pores. The presence of foam alters this mechanism: the gas phase in foam is trapped by the liquid lamellae, so the gas velocity decreases, and if stable foam is achieved the gas and liquid phases move together at the same velocity. This section briefly reviews the mechanisms of generation, the stability and the flow regimes of foam in porous media.
Table 1. Program codes for numerical simulation of hydraulic stimulation and heat extraction.
5.1. Generation of Foam in Porous Media
Ransohoff and Radke  identified three mechanisms leading to foam generation in porous media: snap-off, leave-behind, and lamella division. Figure 5 shows these mechanisms.
5.2. Snap-Off Mechanism
Roof  showed that when oil emerges from a water-wet constriction into a water-filled pore, the interfacial forces are such that the leading portion of the oil may separate into a droplet (snap-off). The same mechanism occurs during invasion of gas into pores filled with liquid. It takes place regardless of the presence or absence of surfactant, but if a stabilizing surfactant is not present, the snapped-off bubbles quickly coalesce (Kovscek and Radke,  ). The snap-off process results from the difference in capillary pressure between the pore body and the pore throat; its occurrence is therefore a function of the ratio of the body-to-throat equivalent diameters. Kovscek and Radke  and Li  -  presented details of the snap-off process.
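The capillary-pressure contrast driving snap-off can be sketched with the Young-Laplace relation: the throat sustains a higher capillary pressure than the pore body, and snap-off becomes possible when the body-to-throat ratio is large enough (a ratio of roughly 2 is often quoted for simple geometries). The pore radii and surface tension below are illustrative:

```python
# Young-Laplace sketch of the body/throat capillary-pressure contrast
# behind snap-off. Geometry and surface tension are hypothetical.

def capillary_pressure(radius_m, sigma_n_m=0.03):
    """Young-Laplace capillary pressure [Pa] for an effective radius."""
    return 2.0 * sigma_n_m / radius_m

r_throat, r_body = 5e-6, 20e-6          # hypothetical pore geometry
pc_throat = capillary_pressure(r_throat)
pc_body = capillary_pressure(r_body)
ratio = r_body / r_throat
print(pc_throat, pc_body, ratio)        # throat Pc exceeds body Pc
```

The liquid collar in the throat grows to relieve this pressure difference, pinching off the gas thread, which is the droplet separation Roof described.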
Figure 4. Sketch of a foam system.
Figure 5. Mechanisms of foam generation in porous media.
5.3. Leave-Behind Mechanism
The leave-behind mechanism also occurs during invasion of a gas phase into a porous medium saturated with a liquid phase. Foams generated solely by leave-behind give approximately a five-fold reduction in steady-state gas permeability (Ransohoff and Radke,  ; Kovscek and Radke,  ), whereas discontinuous-gas foam created by snap-off gives a several-hundred-fold reduction in gas mobility (Persoff et al.,  ; Ettinger and Radke,  ; Kovscek and Radke,  ). This indicates that the strength of a foam (i.e. the number and stability of its lamellae) depends on the dominant mechanism of foam generation.
5.4. Lamella Division Mechanism
The number of lamellae or bubbles can be increased by the lamella-division mechanism when mobile foam bubbles pre-exist in the porous medium. When a moving lamella train encounters a branch in the flow path, it may split in two, one lamella in each branch of the path (Tanzil et al.,  ). Lamella division is thought to be the primary foam-generation mechanism in steady gas-liquid flow (Gauglitz et al.,  ; Li,  -  ).
5.5. Stability of Foam in Porous Media
Foam in porous media can be in a stable state at a certain value of the capillary pressure, called the limiting capillary pressure (Pc*) (Khatib et al.,  ). The limiting capillary pressure is a function of the surfactant formulation and concentration, the gas velocity, the permeability of the porous medium, and the presence of oil. The liquid saturation corresponding to Pc* is Sw*.
Flow regimes of foam in porous media: Osterloh and Jante  showed that two flow regimes of foam in porous media can be identified: a high-quality (dry) regime, in which the steady-state pressure gradient is independent of the gas flow rate, and a low-quality (wet) regime, in which the pressure gradient is independent of the liquid flow rate. The transition between the two regimes is characterized by a specific value of the gas fractional flow (gas velocity divided by total velocity), fg*. Alvarez et al.  confirmed the conclusions of Osterloh and Jante  and, using data on foam flow in a variety of porous media and surfactant formulations over a range of flow rates, showed that this behaviour of foam flow is general. There is a critical pressure gradient that must be exceeded to generate the high-quality regime of foam during the flow of surfactant solution and gas through homogeneous porous media (Li et al., 2008). Figure 6 shows a schematic plot of the parameters of foam flow in porous media.
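The regime classification by gas fractional flow can be sketched as follows; the value of fg* and the velocities are hypothetical:

```python
# Two foam-flow regimes in the sense of Osterloh and Jante: foam quality fg
# above the transition value fg_star gives the high-quality (dry) regime,
# below it the low-quality (wet) regime. fg_star is hypothetical.

def gas_fractional_flow(u_gas, u_liquid):
    """Foam quality fg = gas velocity / total velocity."""
    return u_gas / (u_gas + u_liquid)

def foam_regime(u_gas, u_liquid, fg_star=0.94):
    fg = gas_fractional_flow(u_gas, u_liquid)
    return "high-quality (dry)" if fg > fg_star else "low-quality (wet)"

print(gas_fractional_flow(9.0, 1.0))   # fg = 0.9
print(foam_regime(9.0, 1.0))           # low-quality (wet)
print(foam_regime(99.0, 1.0))          # high-quality (dry)
```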
5.6. Factors Affecting Foam Dynamics in Porous Media
Experimental studies investigating the factors that affect foam dynamics in porous media have been reviewed and their main conclusions highlighted. These studies investigated different types of surfactant (anionic, non-ionic, cationic and amphoteric) with porous media consisting of steel-wool packs, sand packs, crushed cores, and sandstone and carbonate cores. The methodologies used included static and flow tests (pre-prepared foam injection, co-injection of surfactant solution and gas, and SAG injection) at conditions ranging from room to reservoir conditions, in the absence and in the presence of oil. The progress of the foam front was monitored by measuring the differential pressure. To identify the liquid saturation and the propagation of foam in the porous media, some studies used X-ray computed tomography, e.g. Apaydin and Kovscek  and Nguyen  , or gamma rays, e.g. Persoff et al.  . The investigated factors include: surfactant, injection parameters, permeability and heterogeneity of the porous media, and presence of oil.
5.7. Surfactant
The surfactant has an important role in the generation and stability of foam in porous media. It affects the interfacial forces between the gas and the liquid, which in turn affect the value of Pc*. A proper surfactant should be capable of generating ample, lasting foam at reservoir conditions, should have low adsorption and decomposition losses, should increase the sweep efficiency and the oil recovery, and in addition should be commercially available and inexpensive (Casteel and Djabbarah,  ). Foam is readily formed during a drainage process (displacement of the liquid phase by the gas phase) whenever the porous medium is pre-saturated with a surfactant solution (Chou, 1991). Reducing the surfactant concentration below the Critical Micelle Concentration (CMC) shifts the transition between the flow regimes to lower values of fg* and Pc* (Alvarez et al., 1999). Foam coalescence forces are inversely proportional to surfactant concentration, so the foam weakens and the displacement efficiency decreases as the surfactant concentration decreases (Apaydin and Kovscek,  ). Friedmann and Jensen  showed that the size of foam bubbles inside porous media decreases slightly with increasing surfactant concentration. Li  -  found that a higher velocity is required to create foam when the surfactant concentration is reduced. Adsorption of the surfactant on the reservoir rock reduces the surfactant concentration in the injected fluid. The adsorption is a function of the surfactant formulation, the reservoir fluids, the reservoir lithology and the reservoir conditions (Casteel and Djabbarah,  ). For
Figure 6. Schematic plot of parameters of the foam flow in porous media.
unconsolidated sand cores at temperatures ranging from 50˚C (122˚F) to 150˚C (302˚F), it was found that surfactant adsorption decreases with increasing temperature and increases in the presence of clays in the core (Novosad et al.,  ). Grigg and Mikhalin  showed that the adsorption also depends on the state of fluid movement, and that the density of adsorption of the surfactant on the rock is best described as a function of the surfactant available in the system (concentration plus volume) rather than of the surfactant concentration alone. In experiments on carbonate cores, Liu et al.  -  observed that the presence of gas with the surfactant solution in the rock does not affect the surfactant adsorption.
5.8. Injection Parameters
5.8.1. Pre-Prepared Foam Injection and Co-Injection of Surfactant and Gas
The injection flow rate strongly affects foam dynamics. A faster flow rate produces foam with smaller and more uniform bubble sizes, and foam formed at higher pressure is more stable (Friedmann and Jensen,  ). The relationship between the gas fractional flow (also called foam quality, fg) and the gas mobility is characterized by two straight lines intersecting at a critical foam quality fg* (Khatib et al.,  ; Liu et al.,  -  ), as shown in Figure 7. At values of fg slightly below fg*, the gas mobility slightly decreases or remains constant with increasing fg, which indicates a stable foam texture (high-quality regime), while at higher fg (above fg*) the gas mobility increases with increasing fg, which indicates an unstable foam texture (low-quality regime). Persoff et al.   found that during transient foam flooding the flow characteristics vary from those of free gas to those of a strong, fine-textured foam. Nguyen  showed that before breakthrough the foam displaces the liquid in a piston-like manner, and that after breakthrough a stage of strong secondary liquid desaturation initiates in the central portion of the porous medium and propagates towards the inlet and the outlet.
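The two-straight-line mobility relation of Figure 7 can be sketched as a piecewise-linear model; the slope, base mobility and fg* below are hypothetical:

```python
# Piecewise-linear sketch of gas mobility vs foam quality fg (Figure 7):
# roughly flat below the critical foam quality fg_star (stable texture),
# rising linearly above it (unstable texture). All coefficients hypothetical.

def gas_mobility(fg, fg_star=0.9, base=1.0, slope_above=50.0):
    """Relative gas mobility as a function of foam quality fg."""
    if fg <= fg_star:
        return base                                 # stable texture
    return base + slope_above * (fg - fg_star)      # coarsening foam

print(gas_mobility(0.7), gas_mobility(0.95))
```

Fitting the two line segments to step-rate data is how fg* is located experimentally.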
5.8.2. SAG Injection
Since foam is readily formed during a drainage process whenever the porous medium is pre-saturated with a surfactant solution (Chou,  ), SAG injection is a favoured method to create in-situ foam in porous media. Xu  proved that SAG injection can produce stable and persistent foam. Li et al.   pointed out that foam generated in situ by SAG injection can substitute for the polymer drive in the alkaline-surfactant-polymer EOR process. Gas mobility in SAG injection is higher than in co-injection of surfactant and gas  -  . This makes SAG injection preferable to co-injection for overcoming the injectivity reduction that accompanies foam applications in EOR processes.
5.9. Permeability and Heterogeneity of the Porous Media
The foam propagation rate in porous media is significantly affected by rock permeability (Friedmann and Jensen,  ). The minimum pressure gradient required for triggering foam generation in steady flow through porous
Figure 7. Schematic plot for the relationship between fg and gas mobility.
media varies with the permeability (Rossen et al.,  ). As permeability increases, the pressure gradient and gas velocity required for foam generation decrease (Li,  -  ), and the value of fg* is much higher in a high-permeability medium (Alvarez et al., 1999). In SAG injection, gas breakthrough is much faster in a high-permeability bead-pack than in a low-permeability pack (Li, 2006). Khatib et al.   concluded from their experiments on porous media with permeabilities of 69.09 to 8882.31 µm² (70 to 9000 Darcies) that Pc* decreases with increasing permeability, with a straight-line relationship between Pc* and the logarithm of permeability. This conclusion needs to be verified for low-permeability porous media. Siddiqui et al. (1997) stated that the dependence of foam mobility on the injection parameters differs between low-permeability and higher-permeability porous media. Rossen et al.   showed that the relation between the minimum pressure gradient required for creating foam and permeability is more complex in consolidated media. Casteel and Djabbarah   conducted an interesting experiment to optimize oil recovery from parallel layers having different permeabilities. The result was that foam improves oil recovery more effectively if it is applied after recovery by gas injection. This agrees with the findings of Apaydin et al.  , who showed that in heterogeneous layers the foam fronts move at identical rates in all layers if the layers are in capillary communication and cross flow is allowed, but that if cross flow between the layers is prohibited, foam partially plugs the high-permeability zone and diverts flow into the low-permeability zone. Based on that, it can be concluded that foam should be used in EOR after a stage of gas flooding alone in order to obtain the maximum oil recovery.
Thus, foam injection should be the fourth recovery stage, after primary recovery, water flooding, and gas flooding.
5.10. Presence of Oil
For simplicity, most experimental studies have been conducted in the absence of oil (Sayegh and Girard,  ; Liu,  ; Persoff et al.,  ; Apaydin et al.,  ; Schramm and Kutay,  ; Apaydin and Kovscek,   ; Dong,  ; Xu,  ; Nguyen,  ; Xu and Rossen,  ; Kutay and Schramm,  ; Du et al.,  ; Kim et al.,   ; Li,  -  ; Liu et al.,  ; Li,   ). Success without oil is a precondition for success with oil (Ashoori and Rossen,  ), but the presence of oil has important effects on foam flow in porous media. Oil represents an additional phase, alongside the gas phase and the surfactant solution phase, in the porous medium; this affects the interfacial tension between the phases and the phase saturations. Nikolov et al.   showed that brine concentration and surfactant type and concentration directly affect the stability of emulsion and foam films. Oil saturation has a stronger effect on foam than variations in the properties of the oil (Jensen and Friedmann,  ), and the ability of foam to displace oil is not the same for all surfactants (Novosad et al.   ). The effect of the presence of oil on foam propagation in porous media is strongly surfactant-specific, as confirmed by Jensen and Friedmann  , Novosad et al.   , Mannhardt et al.  , and Vikingstad and Aarra  . For instance, Friedmann and Jensen   found that oil at greater than 20% saturation is detrimental both to foam generation and to the propagation of preformed foam, while with a different porous medium, oil, and surfactant, Vassenden et al.   showed that in the presence of oil even at very low saturation (5%), gas and surfactant solution flowed together without forming foam, and a low-mobility foam zone propagated from the inlet significantly more slowly than the gas and surfactant fronts.
This large spread in the critical oil saturation at which oil becomes detrimental to foam propagation and stability indicates that foam-flooding experimental results are tightly bound to the particular porous medium, oil type and saturation, and surfactant type and concentration used; i.e., it is difficult to develop generalized correlations for predicting foam performance in oil reservoirs.
6. Smooth Particle Hydrodynamics (SPH) Method
The Smooth Particle Hydrodynamics (SPH) method will be explained and developed in its original form, based on an updated Lagrangian formalism. SPH is a relatively new numerical technique for the approximate integration of partial differential equations. It is a meshless Lagrangian method that uses a pseudo-particle interpolation to compute smooth field variables. Each pseudo-particle has a mass, a Lagrangian position, a Lagrangian velocity, and an internal energy; other quantities are derived by interpolation or from constitutive relations.
The advantage of the meshless approach is its ability to solve problems that cannot be effectively solved with other numerical techniques. It does not suffer from the mesh distortion that limits Lagrangian approaches based on a structured mesh when simulating large deformations. As a Lagrangian method, it naturally tracks material history information, such as damage, without the diffusion that occurs in Eulerian approaches due to advection.
Gingold   and Lucy   initially developed SPH in 1977 for the simulation of astrophysics problems. Their breakthrough was a method for calculating derivatives that did not require a structured computational mesh. Review papers by Benz   and Monaghan   cover the early development of SPH. Libersky and Petchek   extended SPH to work with the full stress tensor in 2D, which allowed SPH to be used in problems where material strength is important. The development of SPH with material strength continued with the extension to 3D   and the linking of SPH with existing finite element codes   .
The introduction of material strength highlighted shortcomings in the basic method: accuracy, tensile instability, zero-energy modes and artificial viscosity. These shortcomings were identified in the first comprehensive analyses of the SPH method by Swegle   and Wen  . The problems of consistency and accuracy of the SPH method, identified by Belytschko  , were addressed by Randles and Libersky   and Vignjevic and Campbell  , resulting in a normalised, first-order consistent version of the SPH method with improved accuracy. The attempts to ensure first-order consistency in SPH led to the development of a number of variants of the method, such as the Element Free Galerkin Method (EFGM)   , the Reproducing Kernel Particle Method (RKPM)   , Moving Least Square Particle Hydrodynamics (MLSPH)  , and the Meshless Local Petrov-Galerkin Method (MLPG)  . These methods allow the restoration of consistency of any order by means of a correction function. Atluri   has shown that approximations based on corrected kernels are identical to moving least squares approximations. The issue of stability was addressed in the context of particle methods in general by Belytschko  , and independently by Randles  ; they reached the same conclusions as Swegle   in his initial study.
In spite of these improvements, the crucial issue of convergence in a rigorous mathematical sense, and its links with conservation, has not been well understood. Encouraging preliminary steps in this direction were recently made by Ben Moussa  , who proved convergence of their meshless scheme for non-linear scalar conservation laws; see also  . This theoretical result appears to be the first of its kind in the context of meshless methods. Furthermore, Ben Moussa proposed an interesting new way to stabilise normalised SPH and to treat boundary conditions by incorporating upwinding, an approach usually associated with finite volume shock-capturing schemes of the Godunov type; see  -  . The task of designing practical schemes along these lines is pending, and there is scope for cross-fertilisation between engineers and mathematicians, and between SPH specialists and specialists in Godunov-type schemes.
The improvements in accuracy and stability achieved by kernel re-normalisation or correction have not, however, come for free: it is now necessary to treat the essential boundary conditions in a rigorous way. The approximations in SPH do not have the property of strict interpolants, so that in general they are not equal to the particle value of the dependent variable, i.e. u^h(x_I) ≠ u_I. Consequently, it does not suffice to impose zero values of u_I at the boundary positions to enforce homogeneous boundary conditions.
The treatment of boundary conditions and contact was neglected in the conventional SPH method. If the imposition of the free-surface boundary condition (stress-free condition) is simply ignored, conventional SPH still behaves in an approximately correct manner, giving zero pressure for fluids and zero surface stresses for solids, because of the deficiency of particles at the boundary. This is why conventional SPH gives physically reasonable results at free surfaces. Contact between bodies is handled by smoothing over all particles, regardless of material. Although simple, this approach gives physically incorrect results.
Campbell   made an early attempt at a more systematic treatment of boundary conditions by re-considering the original kernel integral estimates and taking the boundary conditions into account through residual terms arising from integration by parts. Probably the most sophisticated work on boundary conditions in SPH is due to Takeda et al.  , who applied SPH to a variety of viscous flows. A similar approach has also been used, to a limited extent, by Libersky  , with ghost particles added to implement a reflected symmetric surface boundary condition. In Belytschko, Lu and Gu  , the essential boundary conditions were imposed by the use of Lagrange multipliers, leading to an awkward structure of the linear algebraic equations, which are not positive definite. Krongauz and Belytschko   proposed a simpler technique for the treatment of essential boundary conditions in meshless methods, employing a string of finite elements along the essential boundaries. This allowed the boundary conditions to be treated accurately, but reintroduced the shortcomings inherent to structured meshes.
Randles et al.   were the first to propose a more general treatment of boundary conditions, based on an extension of the ghost particle method. In this approach, the boundary is considered a surface one-half of the local smoothing length away from the so-called boundary particles. A boundary condition is applied to a field variable by assigning the same boundary value of the variable to all ghost particles. A constraint is imposed on the boundary by interpolating smoothly between the specified boundary particle value and the calculated values on the interior particles. This communicates the effect of the specific boundary condition to the interior particles. There are two main difficulties in this:
o Definition of the boundary (surface normal at the vertices).
o Communication of the boundary value of a dependent variable from the boundary to internal particles.
A penalty contact algorithm for SPH was developed at Cranfield by Campbell and Vignjevic  . This algorithm was tested on normalised SPH using the Randles approach for free surfaces. The contact algorithm considered only particle-particle interactions, and allowed contact and separation to be correctly simulated. However, tests showed that this approach often excited zero-energy modes.
Another unconventional solution to the SPH tensile instability problem was first proposed by Dyka  , in which the stresses are calculated at locations other than the SPH particles. The results achieved in 1D were encouraging, but a rigorous stability analysis was not performed. A 2D version of this approach, based on the normalised version of SPH, was investigated by Vignjevic  . This investigation showed that the extension to 2D was possible, although general boundary condition treatment and the simulation of large deformations would require further research.
To utilise the best aspects of the FE and SPH methods, it was necessary to develop interfaces for linking SPH nodes with standard finite element grids   and contact algorithms for the treatment of contact between particles and elements  .
From the review of the development of meshless methods given above, the following major problems can be identified: consistency, stability and the treatment of boundary conditions.
6.2. Basic Formulations
The spatial discretisation of the state variables is provided by a set of points. Instead of a grid, SPH uses a kernel interpolation to approximate the field variables at any point in the domain. For instance, an estimate of the value of a function f(x) at the location x is given in continuous form by an integral of the product of the function and a kernel (weighting) function:

⟨f(x)⟩ = ∫ f(x′) W(x − x′, h) dx′

where the angle brackets denote a kernel approximation, h is a parameter that defines the size of the kernel support, known as the smoothing length, and x′ is a new independent variable.
The kernel function usually has the following properties: compact support, which means that it is zero everywhere except on a finite domain inside the range of the smoothing length, 2h:

W(x − x′, h) = 0 for |x − x′| ≥ 2h
These requirements, formulated by Lucy  , ensure that the kernel function reduces to the Dirac delta function when h tends to zero:

W(x − x′, h) → δ(x − x′) as h → 0

Therefore, it follows that:

⟨f(x)⟩ → f(x) as h → 0
If the function f(x) is only known at N discrete points, the integral of equation 1 can be approximated by a summation:

⟨f(x)⟩ = Σ_j (m_j/ρ_j) f(x_j) W(x − x_j, h)

where m_j/ρ_j is the volume associated with the point or particle j. In the SPH literature the term "particles" is misleading, as these particles are in fact to be thought of as interpolation points rather than mass elements.
The last equation constitutes the basis of the SPH method. The value of a variable at a particle denoted by superscript i is calculated by summing the contributions from the set of neighbouring particles (Figure 8), denoted by superscript j, for which the kernel function is not zero:

f^i = Σ_j (m_j/ρ_j) f^j W(x^i − x^j, h)
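As a minimal numerical illustration of this summation (not the full method), the kernel estimate can be sketched in 1D. A Gaussian kernel is used here for brevity instead of the B-spline introduced in Section 6.4, and the particle data are invented.

```python
import numpy as np

# Minimal sketch of the SPH summation interpolant
#   <f(x_i)> = sum_j (m_j / rho_j) f(x_j) W(x_i - x_j, h)
# in 1D, with a Gaussian kernel for brevity. Particle data are invented.

def gaussian_kernel(r, h):
    """1D Gaussian kernel, normalised so its integral is one."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

def sph_interpolate(x_eval, x_p, f_p, m_p, rho_p, h):
    """Kernel estimate of f at x_eval from particle values f_p."""
    w = gaussian_kernel(x_eval - x_p, h)
    return np.sum((m_p / rho_p) * f_p * w)

# Uniform particles on [0, 1] carrying f(x) = x; each particle's
# volume m/rho equals the particle spacing dx.
n = 101
x_p = np.linspace(0.0, 1.0, n)
dx = x_p[1] - x_p[0]
rho_p = np.ones(n)
m_p = rho_p * dx
f_p = x_p.copy()

est = sph_interpolate(0.5, x_p, f_p, m_p, rho_p, h=2 * dx)
print(est)  # close to 0.5, away from the boundaries
```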
6.3. Conservation Equations
The conservation equations in Lagrangian framework are given by:
The subscripts α and β denote the spatial components. These equations are the forms proposed by Monaghan  . The kernel interpolation allows the derivation of the basic SPH form of these conservation equations as:
Figure 8. Set of neighbouring particles.
All the equations above contain integrals of the form:
Using a development in a Taylor series about x' = x, it follows:
As W is an even function, the terms containing odd powers of x' - x vanish. Neglecting second and higher order terms, which is consistent with the overall order of the method, gives:
Using the last relation:
All equations include kernel approximations of spatial derivatives:
Integrating by part gives:
The first term of the second member can be rewritten:
Using Greens theorem, it follows:
The surface integral is zero if the domain of integration is larger than the compact support of W, or if the field variable assumes a zero value on the boundary of the body (free surface). If neither of these conditions is satisfied, modifications should be made to account for the boundary conditions.
One should note that in equations:
The spatial derivatives of the field variables are substituted by derivatives of the kernel:
The final step is to convert the continuous volume integrals to sums over discrete interpolation points. Finally, after a few rearrangements to improve the consistency between the equations, the most common form of the SPH discretised conservation equations is obtained:
6.4. Kernel Function
To complete the discretisation, the kernel function has to be defined. Numerous possibilities exist; a large number of kernel function types, ranging from polynomial to Gaussian, are discussed in the literature. The most common is the B-spline kernel proposed by Monaghan  , written in terms of q = |x − x′|/h:

W(q) = C (1 − (3/2) q² + (3/4) q³)  for 0 ≤ q < 1
W(q) = (C/4) (2 − q)³  for 1 ≤ q < 2
W(q) = 0  for q ≥ 2
D is the number of dimensions of the problem (i.e. 1, 2 or 3), and C is a scaling factor which depends on the number of dimensions and ensures that the consistency conditions 2 and 3 are satisfied:

C = 2/(3h) in 1D, C = 10/(7πh²) in 2D, C = 1/(πh³) in 3D
6.5. Variable Smoothing Length
If large deformations occur, particles can separate widely from each other. If the smoothing length remains constant, the particle spacing can become so large that particles no longer interact. On the other hand, in compression, many particles can enter each other's neighbourhoods, which can significantly slow down the calculation. To avoid these problems, Benz   proposed the use of a variable smoothing length, the intent being to maintain a healthy neighbourhood as the continuum deforms. The equation for the evolution of h derived in   is:

h = h0 (ρ0/ρ)^(1/n)

where h0 and ρ0 are the initial smoothing length and density, and n is the number of dimensions of the problem.
Another frequently used evolution equation, based on conservation of mass (dρ/dt = −ρ div(v)), is:

dh/dt = (h/n) div(v)

where div(v) is the divergence of the velocity.
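A minimal sketch of the two updates, with the rate form integrated by one explicit Euler step; the numbers are invented for illustration:

```python
# Sketch of the two smoothing-length updates: the algebraic rule
# h = h0 * (rho0 / rho)**(1/n) and the rate form dh/dt = (h/n) div(v),
# which follows from it via mass conservation. Values are invented.

def h_from_density(h0, rho0, rho, n):
    """Algebraic update: h grows as the local density drops."""
    return h0 * (rho0 / rho) ** (1.0 / n)

def h_rate_update(h, div_v, n, dt):
    """One explicit Euler step of dh/dt = (h / n) * div(v)."""
    return h + dt * (h / n) * div_v

h0, rho0, n = 0.1, 1000.0, 3
print(h_from_density(h0, rho0, 500.0, n))          # halved density -> larger h
print(h_rate_update(h0, div_v=2.0, n=n, dt=0.01))  # expansion -> h increases
```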
6.6. Neighbour Search
An important step in the SPH computation is the neighbour search, which can be extremely time consuming. The neighbour search routine lists, at each time step, the particles that are inside the neighbourhood of each particle. A direct search over every pair of particles is particularly inefficient; a bucket sort algorithm is more efficient. In this method, an underlying grid of cell side 2h is generated and the particles are sorted according to the box in which they are located (Figure 9). Then, for each particle, the neighbours are searched for among the particles
Figure 9. Bucket sort and neighbour search.
contained in the same box and the surrounding boxes. This cuts the computational time down from a default proportional to N² for a direct search to N·log(N), where N is the total number of particles.
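The bucket-sort search described above can be sketched in 2D as follows; the particle data are invented, and only each particle's own cell and the eight surrounding cells are scanned:

```python
import itertools
from collections import defaultdict

# Sketch of a bucket-sort neighbour search in 2D: particles are binned
# into cells of side 2h, and neighbours are searched only in a
# particle's own cell and the 8 surrounding cells.

def find_neighbours(positions, h):
    """Return {i: [j, ...]} with |x_i - x_j| <= 2h, i != j."""
    cell = 2.0 * h
    buckets = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        buckets[(int(x // cell), int(y // cell))].append(i)
    neigh = {i: [] for i in range(len(positions))}
    for i, (x, y) in enumerate(positions):
        cx, cy = int(x // cell), int(y // cell)
        for dx, dy in itertools.product((-1, 0, 1), repeat=2):
            for j in buckets.get((cx + dx, cy + dy), []):
                if j != i:
                    rx, ry = x - positions[j][0], y - positions[j][1]
                    if rx * rx + ry * ry <= (2.0 * h) ** 2:
                        neigh[i].append(j)
    return neigh

pts = [(0.0, 0.0), (0.1, 0.0), (0.5, 0.5), (0.15, 0.05)]
neigh = find_neighbours(pts, h=0.1)
print(neigh)  # particle 2 is isolated; 0, 1 and 3 are mutual neighbours
```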
6.7. SPH Shortcomings
The basic SPH method has shown several problems when used to model a solid body:
o Tensile instability.
o Zero-energy modes
In its continuous form, the SPH method is inconsistent within 2h of the domain boundaries, due to the incompleteness of the kernel support. In its discrete form, the method loses its 0th-order consistency not only in the vicinity of boundaries but also over the rest of the domain if the particles are irregularly distributed. Meglicki   showed that node disorder results in a systematic error. Therefore, an SPH grid should be kept as regular as possible, without large discrepancies in particle spacing, in order to obtain the most accurate simulation.
First order consistency of the method can be achieved in two ways: first, by correcting the kernel function; second, by correcting the discrete form of the convolution integral of the SPH interpolation. Johnson   used the latter correction procedure and proposed the Normalised Smoothing Function. Vignjevic   also implemented kernel normalisation and correction, leading to a Corrected Normalised Smooth Particle Hydrodynamics (CNSPH) method which is first order consistent. The full derivation of this correction is given below. In SPH methods based on a corrected kernel it is no longer possible to ignore boundary conditions. In basic SPH, free-surface boundary conditions are not imposed but simply ignored, as the variables tend to zero at boundaries because of the deficiency of neighbouring particles.
Derivation of normalised corrected gradient SPH formula
The approximation of fields using a Normalised Corrected SPH (NCSPH) interpolation has been published    . Some authors have chosen to use properties of the integrals of motion (linear and angular momentum) to derive the normalisation and gradient correction for the kernel interpolation; see  . This approach lacks generality and does not provide insight into the origin and nature of the problem. A full derivation of the correction proposed by Vignjevic  , not published before, is given below. The derivation is based on the homogeneity and isotropy of space, the space properties which have as a consequence the conservation of linear and angular momentum; see  . The mixed correction ensures that the homogeneity and isotropy of space are preserved in the process of spatial discretisation.
An interpolation technique should not affect homogeneity of space. One way of demonstrating this is to prove that the interpolation of the solution space itself is independent of a translation of the coordinate axes. In order to express this statement mathematically one can start by writing the general expression for the SPH interpolation of a vector field:
If the field to be interpolated is the solution space then:
And the last equation becomes:
In a different, translated coordinate system, this equation is:
where X′ is the coordinate vector in the new coordinate system. If the translation vector by which the origin of the coordinate system was moved is defined as ∆X, then the relationship between X and X′ is:
If the interpolated coordinates of a point are independent of the translation of coordinate axes then the following should hold:
By substituting the last two equations, for both X_i and X′_i, one obtains:
By comparison between:
It is clear that the discretised space will only be homogeneous if the following condition is satisfied:

Σ_j (m_j/ρ_j) W(X_i − X_j, h) = 1
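This condition can be checked numerically. On an irregular particle set the raw kernel sum deviates from one, and a Shepard-type normalisation (dividing the interpolation by that sum) restores exact translation invariance; the particle data below are invented.

```python
import numpy as np

# Numerical sketch of the homogeneity condition: the discretised space
# is homogeneous only if sum_j (m_j/rho_j) W_ij = 1. Dividing by the raw
# kernel sum (Shepard normalisation) makes the interpolation reproduce a
# translation of the coordinate axes exactly, even on disordered particles.

def cubic_bspline(r, h):
    q = abs(r) / h
    c = 2.0 / (3.0 * h)
    if q < 1.0:
        return c * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return c * 0.25 * (2.0 - q) ** 3
    return 0.0

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))    # irregular particle positions
V = np.full(x.size, 1.0 / x.size)          # nominal volumes m_j / rho_j
h, i = 0.02, 100                           # smoothing length, test particle
w = np.array([cubic_bspline(x[i] - xj, h) for xj in x])

raw = np.sum(V * w)                        # generally not equal to 1
interp = np.sum(V * w * x) / raw           # normalised interpolation of X
shifted = np.sum(V * w * (x + 0.3)) / raw  # same field in translated axes
print(raw, shifted - interp)               # the shift 0.3 is reproduced exactly
```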
Similarly, an interpolation technique should not affect isotropy of space. One way of demonstrating this is to prove that the interpolation of the solution space itself is independent of a rotation of the coordinate axes. The same holds for the SPH approximation. The change in coordinates due to a rotation of the coordinate axes is:
where C is the rotation matrix. For small rotations, this can also be written as:
where ∆φ is the rotation vector.
To ensure that the SPH approximation maintains the isotropy of space, the approximation has to satisfy the following condition:
This means that the rotation matrix has to be approximated exactly.
In order to develop this equation one can start by rewriting:
where the matrix formed from the rotation vector ∆φ is skew-symmetric:
This means that, for small rotations, the rotation matrix is given by:
The approximation of the rotated coordinates is:
This means that the requirement on the interpolation is:
Expanding this expression leads to:
Therefore to preserve space isotropy:
The following condition has to be satisfied.
The form of the normalised kernel function and the approximation of the first order derivatives which provides first order consistency is given in Table 2 below.
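A 1D sketch of the effect of such a mixed (normalisation plus gradient) correction is given below: the corrected gradient reproduces the gradient of a linear field exactly on an irregular particle set, which is the first order consistency property the correction provides. The specific symmetric form used here is one common variant, not necessarily the exact expressions of Table 2, and the particle data are invented.

```python
import numpy as np

# 1D sketch of a corrected SPH gradient: dividing the raw kernel-gradient
# sum by the correction factor sum_j V_j (x_j - x_i) W'_ij makes the
# estimate exact for linear fields, despite particle disorder.

def d_bspline(r, h):
    """First derivative of the 1D cubic B-spline kernel w.r.t. r."""
    q = abs(r) / h
    s = 1.0 if r >= 0.0 else -1.0
    c = 2.0 / (3.0 * h * h)
    if q < 1.0:
        return s * c * (-3.0 * q + 2.25 * q * q)
    if q < 2.0:
        return s * c * (-0.75 * (2.0 - q) ** 2)
    return 0.0

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))  # irregular particles
V = np.full(x.size, 1.0 / x.size)        # nominal volumes m_j / rho_j
h, i = 0.02, 100
f = 3.0 * x + 1.0                        # linear test field, gradient 3

dw = np.array([d_bspline(x[i] - xj, h) for xj in x])
num = np.sum(V * (f - f[i]) * dw)        # raw kernel-gradient sum
den = np.sum(V * (x - x[i]) * dw)        # 1D gradient-correction factor
grad = num / den
print(grad)  # 3.0 to round-off, despite the particle disorder
```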
Using the NCSPH approximations, the conservation equations assume the following form:
6.7.2. Tensile Instability
A Von Neumann stability analysis of the SPH method was conducted independently by Swegle   and Balsara  . It revealed that the SPH method suffers from a tensile instability. This instability manifests itself as a clustering of the particles, which resembles fracture and fragmentation but is in fact a numerical artefact. Swegle   concluded that the instability does not result from the numerical time integration algorithm, but rather
Table 2. Corrected forms of the kernel function and its gradient.
from an effective stress with a non-physical negative modulus, produced by the interaction between the constitutive relation and the kernel interpolation. In other words, the kernel interpolation used in the spatial discretisation changes the nature of the original partial differential equations.
These changes in the effective stress amplify, rather than reduce, perturbations in the strain. From Swegle's stability analysis it emerged that the criterion for stability is:

W″ σ < 0

where W″ is the second derivative of W with respect to its argument and σ is the stress, negative in compression and positive in tension.
This criterion states that instability can occur in compression as well as in tension. Indeed, if the slope of the derivative of the kernel function is positive, the method is unstable in tension and stable in compression; if the slope is negative, it is unstable in compression and stable in tension.
The fact that this instability manifests itself most often in tension can be explained. Figure 10 shows the stability regime for the B-spline kernel function. The minimum of the derivative is situated at u = 2h/3. In standard configurations the particle spacing is equal to the smoothing length, u = h. Thus, standard configurations are unstable in tension. This explains why this unstable phenomenon is generally observed in tension, and hence its misleading name, "tensile instability".
Several solutions to this problem have been proposed. Guenther   and Wen   proposed a solution known as conservative smoothing. Randles and Libersky   proposed adding dissipative terms, which is related to conservative smoothing. Dyka   proposed an original solution using a non-collocated discretisation of the stress and velocity points: the stresses are evaluated at one set of points, while the momentum equation is calculated at another. The "stress" points are equivalent to the Gauss quadrature points in FE, and the other set of points is equivalent to the element nodes. This approach was extended to two dimensions, in combination with kernel normalisation, by Vignjevic and Campbell  . Other solutions have been proposed, for instance see  : the former enforces higher order consistency through a corrective SPH method, while the latter adds an artificial force to stabilise the computation. Recently, Randles and Libersky combined MLS interpolation with the stress and velocity point approach, calling it the Dual Particle Dynamics method  .
The conservative smoothing and artificial repulsive force methods have limited applicability and have to be used with caution, because they affect the strength of the material being modelled. At present, the most promising approach is non-collocated spatial discretisation, which is the focus of attention of a number of researchers working on meshless methods.
Figure 10. Stability regimes for the B-spline kernel function  .
6.7.3. Zero-Energy Modes
Zero-energy modes are not a problem unique to particle methods. These spurious modes, which correspond to modes of deformation characterised by a pattern of nodal displacements that produces zero strain energy, can also be found in the finite difference and finite element methods.
Swegle   was the first to show that SPH suffers from zero-energy modes. These modes arise from nodal under-integration; the fundamental cause is that all field variables and their derivatives are calculated at the same locations (the particle positions). For instance, for the oscillatory velocity field illustrated in Figure 11, the kernel approximation would give negligible gradients, and consequently negligible stresses, at the particles. These modes of deformation are not resisted and can easily be excited by rapid impulsive loading. Another explanation can be found in the origin of the kernel approximation: as the kernel approximation, which is the basis of SPH, is an interpolation of a set of discrete data, a constant field can be fitted with a sinusoidal curve or surface if the order of the interpolation is high enough.
Figure 11 illustrates this spurious mode for a 1D SPH field. If one approximates the derivative of the field shown in Figure 11 with a central difference formula:

(df/dx)_i ≈ (f_{i+1} − f_{i−1}) / (2∆x)

then, because f_{i+1} = f_{i−1} for the alternating field, one obtains:

(df/dx)_i = 0

at all points. Hence this mode cannot be detected and can grow unhindered; it could grow to a level where it dominates the solution.
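The invisibility of the alternating field to a collocated derivative can be demonstrated in a few lines; the field values are invented:

```python
import numpy as np

# Sketch of the spurious mode: a saw-tooth field u_i = (-1)^i is
# invisible to a central-difference (or collocated SPH) derivative,
# which returns zero at every interior point.

u = np.array([(-1.0) ** i for i in range(10)])
dx = 1.0
grad = (u[2:] - u[:-2]) / (2.0 * dx)  # central difference, interior points
print(grad)  # all zeros: the oscillation produces no measured gradient
```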
Zero-energy or spurious modes are characterised by a pattern of nodal displacements that is not a rigid-body motion but produces zero strain energy.
One of the key ideas for reducing spurious oscillations is to compute derivatives away from the particles, at which the kernel functions have zero derivatives. Randles   proposed a stress point method: two sets of points are created for the domain discretisation, one carrying velocity and the other carrying stress. The velocity gradient and stress are computed at the stress points, while the stress divergence is sampled at the velocity points using stress-point neighbours. According to  , these spurious modes can be eliminated by replacing the strain measure with a non-local approximation based on a gradient approach. Beissel   proposed another way to stabilise nodal integration, the least-squares stabilisation method.
Smoothed particle hydrodynamics (SPH) is a meshfree particle method, which not only uses particles as the computational frame for interpolation or differencing, but also uses the particles to carry the material properties.
Figure 11. Zero energy modes.
In this survey, the smoothed particle hydrodynamics method and its recent developments in numerical algorithms and applications have been reviewed. In methodology and numerical approximation, the basic concepts of kernel and particle approximations, as well as the main techniques for developing SPH formulations, have been addressed. Several smoothing kernel functions have been reviewed, with their construction conditions provided.
The emphasis is on the consistency problem, which traditionally limits the accuracy of the SPH method. The general definition of consistency has been surveyed. Several important numerical topics have been investigated, including boundary treatment, solid obstacles, material interface treatment and tensile instability.
The last decades have witnessed the great success of SPH developments in methodology and applications, yet many tasks and challenges remain. To achieve a reliable solution, computational accuracy, consistency, efficiency, stability and convergence need to be incorporated into good SPH algorithms. In general, SPH has great potential for many problems in engineering and science. It has salient advantages over traditional grid-based numerical models in treating large deformations, tracking free surfaces, moving interfaces and deformable boundaries, and resolving moving discontinuities such as cracks and shock waves.
The attraction of the SPH method lies not only in savings of time and computational resources, but also in its diversified applications. One obvious advantage is that the SPH method provides a feasible physical model for non-continuum problems, as it shares features with the classic molecular dynamics method and the dissipative particle dynamics method. Therefore, it is very attractive to apply the SPH method to problems where the object of main concern is a set of discrete physical particles rather than a continuum, e.g. environmental flows with solid and fluid particles such as landslides, mudslides and multiphase flow in an oil reservoir.
L. Castañeda gratefully acknowledges the financial support from the Escuela Superior de Ingeniería Mecánica y Eléctrica Unidad Ticomán, Instituto Politécnico Nacional, through Project no. 20150465. L. Castañeda also thanks Angel Maldonado Austria for useful discussions.