4.5 Static reservoir study

4.5.1 Introduction

The most important phase of a reservoir study is probably the definition of a static model of the reservoir rock, given both the large number of activities involved, and its impact on the end results. As we know, the production capacity of a reservoir depends on its geometrical/structural and petrophysical characteristics. The availability of a representative static model is therefore an essential condition for the subsequent dynamic modelling phase.

A static reservoir study typically involves four main stages, carried out by experts in the various disciplines (Cosentino, 2001).

Structural modelling. Reconstructing the geometrical and structural properties of the reservoir, by defining a map of its structural top and the set of faults running through it. This stage of the work is carried out by integrating interpretations of geophysical surveys with available well data.

Stratigraphic modelling. Defining a stratigraphic scheme using well data, which form the basis for well-to-well correlations. The data used in this case typically consist of electrical, acoustic and radioactive logs recorded in the wells, and available cores, integrated where possible with information from specialist studies and production data.

Lithological modelling. Definition of a certain number of lithological types (basic facies) for the reservoir in question, which are characterized on the basis of lithology proper, sedimentology and petrophysics. This classification into facies is a convenient way of representing the geological characteristics of a reservoir, especially for the purposes of subsequent three-dimensional modelling.

Petrophysical modelling. Quantitative interpretation of well logs to determine some of the main petrophysical characteristics of the reservoir rock, such as porosity, water saturation, and permeability. Core data represent the essential basis for the calibration of interpretative processes.

The results of these different stages are integrated in a two-dimensional (2D) or three-dimensional (3D) context, to build what we might call an integrated geological model of the reservoir. On the one hand, this represents the reference frame for calculating the quantity of hydrocarbons in place; on the other, it forms the basis for the initialization of the dynamic model. In the following paragraphs we will describe these stages in greater detail.

4.5.2 Structural model

The construction of a structural reservoir model basically involves defining the map of the structural top, and interpreting the set of faults running through the reservoir. Traditionally, this phase of the study falls under the heading of geophysics, since seismic surveys are without doubt the best means of visualizing subsurface structures, and thus of deducing a geometric model of the reservoir. Further contributions may be made by specialist research such as regional tectonic studies and, for fault distribution, by available dynamic data (pressures, tests and production data).

The definition of the reservoir’s structural top involves identifying the basic geometrical structure of the hydrocarbon trap. In this case we are dealing with the external boundaries of the reservoir, since its internal structure is considered in relation to the stratigraphic reservoir model (see below).

553 VOLUME I / EXPLORATION, PRODUCTION AND TRANSPORT


In most cases, the map of the reservoir’s structural top is defined on the basis of a geophysical interpretation of 2D or 3D data. In this case, which is the most frequent, the geophysicist interprets significant horizons in a seismic block as a function of time. This generates a data set (x, y, t), forming the basis for the subsequent gridding phase; in other words, the generation of a surface representing the ‘time map’ of the horizon under consideration. This time map is then converted into a depth map, using the relevant laws governing the velocities of seismic waves, which are calculated according to the characteristics of the formations overlying the reservoir. There are various techniques for performing this conversion, some of which are highly sophisticated. The choice of the most suitable depends on geological complexity, and on the human, technological and financial resources available. In any case, the resulting map is calibrated against well data.
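The time-to-depth conversion described above can be illustrated, in its simplest average-velocity form, with the following sketch. The velocity and the (x, y, t) picks are hypothetical; real conversions may use layered or spatially varying velocity models, and the resulting map is always tied to the well depths.

```python
def time_to_depth(twt_ms, v_avg):
    """Depth (m) of a horizon from its two-way travel time (ms),
    using a single average velocity (m/s) for the overburden:
    depth = v_avg * (one-way time)."""
    t_one_way_s = (twt_ms / 1000.0) / 2.0  # ms -> s, two-way -> one-way
    return v_avg * t_one_way_s

# (x, y, t) picks from the seismic interpretation (hypothetical values)
picks = [(100.0, 200.0, 1800.0), (150.0, 200.0, 1820.0)]
v_avg = 2500.0  # m/s, assumed from the formations overlying the reservoir
depth_map = [(x, y, time_to_depth(t, v_avg)) for x, y, t in picks]
```

With a constant 2,500 m/s, 1,800 ms of two-way time gives a depth of 2,250 m; in practice the resulting surface would then be shifted or stretched to match the well markers.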

In some cases, the map of the structural top may be generated solely on the basis of available well data, with the help of data from the geological surface survey if the reservoir lies in an area with outcrops of geological formations. This may happen where no seismic survey has been carried out, or when there are enough wells available to provide adequate coverage of the structure. In these instances, the improvement in quality of the structural top map resulting from a seismic interpretation is not sufficient to justify the extra work involved, which is due above all to the problem of calibrating a large number of wells.

The interpretation of the set of faults running through a reservoir has considerable impact on its production characteristics, and in particular on the most appropriate plan for its development. Given an equal volume of hydrocarbons in place, the number of wells required is higher for reservoirs characterized by faults which isolate independent or partially independent blocks from the point of view of fluid content. In the case of deep-sea reservoirs (for example in the Gulf of Mexico, West Africa, etc.), the number of wells is often crucial in the evaluation of development plans. Consequently, an accurate assessment of faults and their characteristics may be a decisive factor.

The interpretation of the set of faults within a reservoir is generally based on four types of data that are subsequently integrated.

Inconsistencies in correlation. The presence of faults may sometimes be apparent from well data, indicated by inconsistencies in the correlation scheme. Typically, for example, the depth of a horizon in a well may turn out to be greater or smaller than expected, indicating the possible presence of a fault. In the past, when 3D seismic surveys were much less readily available than today, this technique allowed only the largest faults to be identified and located with a good degree of accuracy.

Well data. The presence of faults in a well can generally be ascertained through the analysis of the stratigraphic sequence. Missing geological sequences indicate the presence of normal faults, whereas repeated sequences indicate the presence of reverse faults.

Geophysical tests. Geophysical data represent the main source of information on the presence of faults since, unlike the two previous techniques, they also investigate parts of the reservoir which are distant from the wells. The presence of faults may be indicated by discontinuities in the seismic signal. This applies both to data from surface seismic surveys and to data recorded in well seismics (VSP, crosswell seismics). Furthermore, these data can be interpreted both traditionally, by mapping a reflecting geological horizon, and by using seismic attributes (dip, azimuth, amplitude, etc.).

Dynamic well test data. The interpretation of dynamic well tests (see Chapter 4.4) may show the presence of faults in cases where the faults have an impact on fluid flow, and thus on pressure patterns over time.
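The well-data criterion described above — missing sequences suggesting normal faults, repeated sequences suggesting reverse faults — lends itself to a simple check. The unit names below are purely illustrative:

```python
def fault_indicator(observed, reference):
    """Compare the sequence of formation tops crossed by a well
    (top to bottom) with the expected regional sequence: missing
    units suggest a normal fault, repeated units a reverse fault."""
    repeated = sorted({u for u in observed if observed.count(u) > 1})
    missing = [u for u in reference if u not in observed]
    if repeated:
        return "possible reverse fault (repeated: %s)" % repeated
    if missing:
        return "possible normal fault (missing: %s)" % missing
    return "no fault indicated"

reference = ["A", "B", "C", "D"]                               # expected tops
normal = fault_indicator(["A", "C", "D"], reference)           # unit B cut out
reverse = fault_indicator(["A", "B", "B", "C", "D"], reference)  # B repeated
```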

An adequate integration of these types of information generally allows the set of faults running through the reservoir to be reconstructed with sufficient accuracy. However, in carrying out the integration, we should take into account a series of factors which may be crucial for the quality of the end result.

The first factor concerns the degree of detail aimed for in the interpretation. In most cases this depends more on the tools available than on the actual aims of the study. Geophysicists tend to include in their interpretation all those discontinuities which can be identified from the seismic survey, regardless of whether these have an impact on fluid flow. As a result, the reservoir engineer often has to simplify the map during the dynamic simulation phase, keeping only those faults which turn out to have a significant impact on the results of the simulation model. For example, faults shorter than the average size of the model cells can obviously be disregarded. For this reason, the degree of detail in a geophysical interpretation should match the overall requirements of the study, and be discussed and agreed with the other members of the study team.
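The simplification step just mentioned — discarding faults that the simulation grid cannot resolve — amounts to a straightforward filter. The field names and lengths below are hypothetical:

```python
def filter_faults(faults, cell_size_m):
    """Keep only faults at least as long as the average simulation
    cell; shorter faults cannot be resolved by the dynamic model."""
    return [f for f in faults if f["length_m"] >= cell_size_m]

faults = [{"name": "F1", "length_m": 1200.0},
          {"name": "F2", "length_m": 80.0}]      # below cell size
kept = filter_faults(faults, cell_size_m=100.0)  # 100 m cells assumed
```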

554 ENCYCLOPAEDIA OF HYDROCARBONS

OIL FIELD CHARACTERISTICS AND RELEVANT STUDIES

Another factor is linked to the hydraulic transmissibility of the faults. In a reservoir study, we are interested only in those faults which act as sealing faults, or which are more transmissive than the reservoir rock. Faults which have no impact on fluid flow, on the other hand, may be disregarded. From this point of view it is important to stress that geophysical tests provide a means of locating the faults in space relatively accurately, but give no information on their sealing effect. By contrast, dynamic well tests allow us to quantify the impact of the faults on fluid flow, but not to locate them precisely in space. It is therefore obvious that these two techniques are complementary, and that adequate integration improves the end result.

Reconstructing the network of faults running through a reservoir is a complex activity, requiring a combination of data of differing type, quality and reference scale. The quality of the final reconstruction is generally tested during the validation phase of the simulation model (see Chapter 4.6), in which we attempt to reconstruct the reservoir’s past production history (history match). It may be found necessary to adjust the preliminary interpretation during this phase of the work, which consequently takes on an iterative nature, aimed at progressively refining the initial assumptions. Obviously, all modifications must be made after consultation with the geologist/geophysicist who carried out the work, so as to maintain the model’s necessary geological and structural coherence.

The structural model of a reservoir is based on the combination of results obtained during the stages of defining the structural top, and interpreting the fault network. In a 2D context, the result is simply a depth map calibrated against the wells, with the superimposition of fault lines where these intercept the structural top. The model, understood as the external framework of the reservoir, is completed with a map of the bottom, which is derived using the same method.

Recent years, however, have seen the increased use of software allowing us to model subsurface structures in 3D, and this now represents the most widely adopted approach in the field. The main advantages of 3D techniques are their speed and ease of use, as well as the ability to model complex structures (e.g. reverse faults), which are impossible to represent with traditional 2D techniques, based on the mapping of surfaces representing geometrical and petrophysical parameters.

The procedures for constructing a three-dimensional structural reservoir model vary according to the applications used, but generally include the following steps.

Modelling of major faults. These are faults which bound the main blocks forming the reservoir. The fault planes are in this case explicitly modelled as surfaces, which in turn define the boundaries of the main blocks of the three-dimensional model.

Construction of geological surfaces. Within each main block, parametric surfaces are generated, which represent the main geological horizons, typically the top and bottom of the main sequences. These surfaces must be consistent with the depths measured in all available wells.

Modelling of minor faults. Whilst affecting fluid dynamics, these faults have only a slight impact on the overall geometry of the reservoir, locally displacing the geological surfaces.
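The requirement that geological surfaces honour the well depths can be met with many interpolation schemes; inverse-distance weighting is one of the simplest. The sketch below uses hypothetical well picks and is only one common choice among many:

```python
def idw_depth(x, y, wells, power=2.0):
    """Inverse-distance-weighted depth of a horizon at (x, y) from
    well picks [(xw, yw, depth), ...]. At a well location the well's
    own depth is returned, so the surface honours the wells exactly."""
    num = den = 0.0
    for xw, yw, d in wells:
        dist2 = (x - xw) ** 2 + (y - yw) ** 2
        if dist2 == 0.0:
            return d  # exact tie at a control point
        w = 1.0 / dist2 ** (power / 2.0)
        num += w * d
        den += w
    return num / den

wells = [(0.0, 0.0, 2000.0), (1000.0, 0.0, 2100.0)]
mid_depth = idw_depth(500.0, 0.0, wells)  # equidistant between the two wells
```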

Fig. 1 shows an example of a three-dimensional structural reservoir model: the major faults, surfaces and minor faults are clearly visible. It is obvious that structures of this complexity cannot be modelled using traditional two-dimensional mapping methods.

4.5.3 Stratigraphic model

The development of the stratigraphic model is without doubt one of the most traditional tasks of the reservoir geologist, who must perform a well-to-well correlation with the aim of defining the stratigraphic horizons bounding the main geological sequences within the hydrocarbon formation. This task is of vital importance for the study’s overall accuracy, since fluid flow is heavily influenced by the reservoir’s internal geometry. It is therefore essential to devote the necessary time and resources, both human and technological, to this stage of the project, in order to build an accurate model.


Fig. 1. Example of a 3D structural reservoir model (courtesy of L. Cosentino).


The difficulties encountered at this stage of the reservoir study are mainly linked to the definition of the depositional environment of the reservoir. In some cases, when the sedimentary sequences present a significant lateral extension, the correlations between wells may be relatively simple. This is true, for example, for shelf areas, with both terrigenous and carbonate sedimentation, dominated by tidal phenomena. An extreme example of correlatability is represented by the distal facies of some deep-sea turbidite sediments, as in various Adriatic fields, where we can correlate with certainty individual events just a few centimetres thick, even between wells several kilometres apart. However, such examples are exceptions to the rule. In most cases, the lateral extension of the sedimentary bodies is much smaller and, in many cases, unfortunately, less than the average distance between wells. This is true of most continental and transitional geological formations, such as alluvial, fluvial and deltaic sediments, where reconstructing the internal geometry of the reservoir may turn out to be extremely complicated, representing an often insurmountable challenge for the reservoir geologist. In these cases, as we will see below, integration of the various disciplines participating in the reservoir study may be crucial for improving the accuracy of the end result.

Correlation techniques

The basic data used for well-to-well correlations are the logs recorded in open hole or cased hole, and cores. These data are used to create stratigraphic sections and correlations, in terms of real depth or with respect to a reference level, through which we can generally identify the lines corresponding to significant geological variations. Fig. 2 depicts a classic example of a geological cross-section between two wells, showing the logs used for the correlation itself.

As already mentioned, there is often a high risk of generating spurious correlations, and the reservoir geologist must carefully choose suitable methodologies to minimize possible errors. To this end, one of the best techniques is sequence stratigraphy. Sequence stratigraphy is a relatively new approach, whose official appearance can be dated to 1977 (Vail et al., 1977). This is a chronostratigraphic system, based on the hypothesis that the deposition of sedimentary bodies is governed by the combined effects of changes in sea level (eustatic phenomena), sedimentation, subsidence and tectonics.

On this basis, we can identify sequences of different hierarchical order within a geological unit, separated by sequence boundaries which represent unconformities or maximum flooding surfaces. These surfaces are the most important reference levels (or markers) that a reservoir geologist may find in well profiles.

The correct identification of these units allows us to generate an extremely detailed chronostratigraphic framework. This is especially well-suited to reservoir studies, since chronostratigraphic units and fluid flow are usually closely linked. This link does not necessarily exist if we consider traditional lithostratigraphic units (for example by correlating the tops of arenaceous units).


Fig. 2. Example of a correlation between wells (courtesy of L. Cosentino).


Where it is not possible to apply sequence stratigraphy, or where this does not provide the desired results, we may resort to correlations based on the hydraulic properties of the sedimentary bodies. This approach aims to define flow units (or hydraulic units), which do not necessarily coincide with the geological units, but which can be considered homogeneous from a dynamic point of view. One of the classic methodologies for the definition of flow units is described in Amaefule et al. (1993).
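The methodology of Amaefule et al. (1993) is built around the Flow Zone Indicator (FZI), computed for each core plug from its permeability k (in mD) and porosity φ (as a fraction):

```python
import math

def flow_zone_indicator(k_md, phi):
    """FZI after Amaefule et al. (1993):
    RQI   = 0.0314 * sqrt(k / phi)   reservoir quality index (micrometres)
    phi_z = phi / (1 - phi)          normalized porosity
    FZI   = RQI / phi_z
    Samples with similar FZI are grouped into one flow unit."""
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

fzi = flow_zone_indicator(100.0, 0.20)  # core plug: 100 mD, 20% porosity
```

Plugs are then clustered on similar FZI values, and each cluster becomes a candidate hydraulic unit.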

Validation of the stratigraphic scheme

Once we have defined the reference correlation scheme, it is good practice to check its accuracy using other types of techniques and data which may provide useful information for this purpose.

Biostratigraphy and palynology. Available rock samples (cores or cuttings) are frequently analysed with the aim of studying micropalaeontological and/or palynological associations (spores and pollens). These data may in some cases help to confirm the stratigraphic scheme. However, it is important to check that chronostratigraphy and biostratigraphy are consistent and, in the case of drilling debris (cuttings), to take into account the limited vertical resolution of the data.

Pressure data. Available static pressure data, and particularly those collected in wells with WFT (Wireline Formation Tester) instruments, provide extremely significant information on the continuity and connectivity of the various sedimentary bodies. In the absence of structural discontinuities (e.g. faults), the pressures measured in different wells in identical geological sequences should be similar. If this is not the case, there may be correlation problems.

Production data. Within a geological unit we should be able to observe a thermodynamic equilibrium, corresponding to specific characteristics of the fluids produced at the surface (gas-oil ratio and oil density). The presence of anomalies in these characteristics may be due to correlation problems. Obviously, in these cases we should first rule out problems with the well (e.g. defective cementing).

Drilling data. The Rate Of Penetration (ROP) may provide useful information on the stratigraphic sequence crossed. Different geological units often present varying resistance to the advancement of the bit. In these cases, data supplied by the drilling activity may be used to check the consistency of available correlations.
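The pressure-data check above can be made concrete by shifting each WFT measurement to a common datum and comparing wells. The fluid gradient, tolerance and pressures below are hypothetical illustrative values:

```python
def pressure_at_datum(p_kpa, depth_m, datum_m, grad_kpa_m):
    """Shift a static pressure measured at depth_m to a common datum,
    assuming a constant fluid gradient between the two depths."""
    return p_kpa + grad_kpa_m * (datum_m - depth_m)

def same_body(p1, d1, p2, d2, datum_m, grad_kpa_m, tol_kpa=50.0):
    """Wells sampling one connected body should give near-identical
    datum pressures; a larger mismatch hints at a correlation problem
    or a sealing discontinuity."""
    dp = abs(pressure_at_datum(p1, d1, datum_m, grad_kpa_m)
             - pressure_at_datum(p2, d2, datum_m, grad_kpa_m))
    return dp <= tol_kpa

# assumed oil gradient of 8 kPa/m; one WFT point in each of two wells
consistent = same_body(25000.0, 2000.0, 25400.0, 2050.0, 2000.0, 8.0)
```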

It is obvious that this list of techniques is not, and cannot be, exhaustive; every reservoir study possesses distinctive data and information which can be used during the various stages of the study. It is therefore the responsibility of the reservoir geologist to examine all the existing opportunities, and exploit these to the utmost.

Construction of a stratigraphic model

The stratigraphic horizons defined at the wells during the correlation phase are subsequently linked to one another by constructing surfaces which together form what we might call the stratigraphic model of the reservoir. This model consists of a series of thickness maps of the individual geological horizons located between the upper and lower boundary surfaces of the reservoir. These maps are usually created using appropriate computer mapping programmes. For stratigraphic modelling, too, the three-dimensional approach is the most commonly adopted by reservoir geologists today. In this case, after constructing the external framework of the reservoir according to the procedure described in the previous paragraph, we proceed to define the internal geometry; in other words, to create that set of surfaces between the top and the bottom of the reservoir which represent the boundaries between the geological sequences selected for correlation. Generally, as already stressed, these surfaces form boundaries between flow units which are independent of one another.
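The thickness (isochore) maps mentioned above derive directly from the bounding surfaces. A minimal sketch on two small depth grids (hypothetical values), clipping negative thicknesses where the surfaces cross at a pinch-out:

```python
def thickness_map(top, bottom):
    """Thickness grid from top and bottom depth grids of equal shape;
    where the surfaces cross (pinch-out), thickness clips to zero."""
    return [[max(b - t, 0.0) for t, b in zip(row_t, row_b)]
            for row_t, row_b in zip(top, bottom)]

top = [[2000.0, 2010.0], [2005.0, 2020.0]]
bottom = [[2030.0, 2035.0], [2004.0, 2050.0]]
thk = thickness_map(top, bottom)  # note the pinch-out in the second row
```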

The specific procedure allowing us to construct this stratigraphic scheme obviously depends on the applications used. Generally speaking, it is possible to model all existing sedimentary geometries (conformity and erosion surfaces, pinch-out, onlap, toplap, downlap, etc.) and to achieve an accurate reproduction of the depositional scheme under consideration.

Fig. 3 shows an example of a three-dimensional stratigraphic model, where we can see the different depositional geometries of the various sedimentary units. Note especially the onlap-type geometry of the lower unit in the area of the structural high.


Fig. 3. Example of a 3D stratigraphic model (courtesy of L. Cosentino).


4.5.4 Lithological model

The structural and stratigraphic models described in the previous paragraphs together form the reservoir’s ‘reference framework’. The next stage in a reservoir study involves defining the spatial distribution of the reservoir rock’s petrophysical characteristics. In three-dimensional geological modelling jargon, this operation is often described as the ‘populating of the reservoir model’. As a rule, it can be performed using appropriate deterministic or stochastic functions which allow us to generate two- or three-dimensional spatial distributions of significant characteristics, such as porosity and permeability, directly from well data. However, this operation is often difficult to perform, since the lateral and vertical continuity of these reservoir parameters is often uncertain, and the modelling process is based on the a priori assumption of continuity and spatial regularity which does not necessarily reflect the real situation. This is especially true of parameters such as permeability, whose spatial continuity is usually considerably lower than the average distance between control points (wells). For this reason, when working in three dimensions, it is often preferable to construct a preliminary lithological model of the reservoir; that is, a model based on the identification and characterization of a certain number of basic facies which are typical of the reservoir under examination. These facies are identified on the basis of data gathered in the wells using specific classification criteria, and subsequently distributed within the three-dimensional structural-stratigraphic model using special algorithms. The main advantage of this approach is that it is usually much easier to carry out the spatial distribution of basic facies than that of reservoir-rock parameters. This is because the distribution of facies is based on precise geological criteria which depend on the sedimentary environment under consideration.

The distribution of petrophysical parameters is therefore implemented subsequently, and is based on the previously created lithological model. The idea behind this procedure is that the petrophysical characteristics of the reservoir can be considered intimately linked to the lithological facies.

The concept of facies is particularly well-suited to reservoir studies. Once the facies have been determined and characterized by integrating log and core data, and, where possible, seismic data, this classification system can be used in various stages of the study, including the following.

Three-dimensional modelling. The facies can be employed as building blocks for the development of three-dimensional geological models, generally through the application of stochastic algorithms. As we have said, this is the most typical use of facies.

Quantitative interpretation of logs. We can associate each facies, or group of facies, with a typical interpretative model, for example in terms of mineralogy (matrix density), saturation exponent or cementation factor.

Definition of rock types. Although it is not possible to perform a direct upscaling on the facies for the simulation stage (since this is a discrete parameter), their distribution may be used as a qualitative reference in the dynamic model for the assignment of saturation functions (capillary pressure and relative permeability). This stage is usually known as the definition of rock types.

It is evident that the small-scale, three-dimensional geological model which describes and characterizes the facies can be used in different stages of the study, and in various contexts. The facies may thus be considered the most suitable tool for conveying wide-ranging geological information through the various stages of the study up to the simulation model, guaranteeing the consistency of the work flow. It is worth noting that the concept of facies also represents a convenient common language for all the experts involved in the study.

In practical terms, the lithological model of a reservoir is constructed by integrating an ideal representation of the reservoir (sedimentological model), a classification stage (definition of facies) and a spatial distribution stage (three-dimensional model).

Sedimentological model

The sedimentological/depositional model of the reservoir forms the basis for the lithological model, and is defined in two main stages: the description and classification of the individual lithological units (lithotypes) making up the reservoir rock, carried out on available cores; and the definition of a depositional model describing the sedimentary environment (fluvial, deltaic, marine, etc.). This model also allows us to formulate hypotheses regarding the geometries and dimensions of the geological bodies; this information is used in the three-dimensional modelling stage.

Classification of facies

The facies can be considered the building ‘blocks’ of the lithological reservoir model. They can be defined in various ways, the simplest of which involves the application of cut-off values to log curves recorded in the wells. For example, a simple sands-clays classification may be realized by identifying a cut-off value in the ‘gamma ray log’ curve (log of the gamma rays emitted by the rock as a function of depth).
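The gamma-ray cut-off classification just described is a one-line rule. The 75 API cut-off below is purely illustrative; in practice it is calibrated on cores, field by field:

```python
def classify_gr(gr_api, cutoff_api=75.0):
    """Two-facies sands/clays split on the gamma-ray curve:
    clean sands read low GR, clay-rich intervals high GR."""
    return ["sand" if g < cutoff_api else "clay" for g in gr_api]

gr_curve = [42.0, 55.0, 110.0, 95.0, 48.0]  # one sample per depth step
facies_column = classify_gr(gr_curve)
```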

Usually, the classification into facies is obtained through a more complex process. This involves selecting the most suitable log curves, identifying a number of reference wells (i.e. those wells which have been cored, and have high-quality logs), and applying statistical algorithms such as cluster analysis, or more complex processes based on neural networks. In this way, a lithological column is generated for each reference well, where each depth interval is associated with a specific facies (log facies). This process is iterative, and aims to identify the optimal number of facies needed to describe the reservoir rock in the right degree of detail.
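As a stand-in for the cluster analysis mentioned above, here is a minimal one-dimensional k-means on a single log curve. Real studies cluster several curves at once and often use more robust algorithms; the GR samples are hypothetical:

```python
def kmeans_1d(values, k=2, iters=20):
    """Assign each log sample to one of k clusters (log facies) by
    iteratively updating cluster centroids (plain 1-D k-means)."""
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            groups[i].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return [min(range(k), key=lambda j: abs(v - centroids[j]))
            for v in values]

gr = [40.0, 45.0, 42.0, 100.0, 110.0, 105.0]  # hypothetical GR samples
log_facies = kmeans_1d(gr, k=2)  # low-GR and high-GR clusters
```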

Next, these log facies are compared with available core data and characterized from a lithological and petrophysical point of view. Basically, each log facies is associated with typical lithological descriptions and values (mean and/or statistical distributions) for petrophysical parameters. The degree of detail and the accuracy of this characterization stage obviously depend on the number and quality of logs used. In the case of old wells, with a limited availability of logs (e.g. electrical logs of spontaneous potential and/or resistivity), the classification process is perfunctory and the characterization stage is limited to a basic lithological description, for example sands/silt/clays, with limited vertical resolution. By contrast, where logs of more recent generations are available (e.g. density/neutron, PEF, sonic and NMR), the facies emerging from the classification process can be characterized more completely. For example, each facies can be linked not only to the most obvious lithological characteristics, but also to precise petrophysical values such as porosity, permeability, capillary behaviour, compressibility, cementation factor, saturation exponent, etc.
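The characterization step — attaching typical petrophysical values to each facies — reduces, in its simplest form, to grouping core measurements by facies. The samples below are hypothetical:

```python
def facies_stats(samples):
    """Mean porosity and permeability per facies from core data
    given as (facies, porosity_fraction, permeability_md) tuples."""
    by_facies = {}
    for name, phi, k in samples:
        by_facies.setdefault(name, []).append((phi, k))
    return {name: {"phi_mean": sum(p for p, _ in v) / len(v),
                   "k_mean": sum(k for _, k in v) / len(v)}
            for name, v in by_facies.items()}

core = [("sand", 0.22, 350.0), ("sand", 0.18, 150.0),
        ("clay", 0.08, 0.1)]
table = facies_stats(core)
```

In a full study, the means would be replaced by complete statistical distributions per facies.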

During the final stage, the classification defined on the reference wells is extended to all the other wells in the reservoir through a process of statistical aggregation. This stage allows us to obtain lithostratigraphic columns in terms of facies for all the wells in the reservoir.

Three-dimensional distribution of facies

The three-dimensional distribution of facies is usually obtained by applying stochastic algorithms, using the three-dimensional stratigraphic model as a base (see above).

These algorithms, which will be discussed in greater detail below, allow us to generate extremely realistic geological models, conditioned to all available data: geophysics, log and core data, and sometimes even dynamic data. Fig. 4 shows an example of this type of model, demonstrating the degree of detail which can be obtained in what are now routine geological studies.

These models use a vast number of basic cells, often in the order of tens of millions, thus allowing an extremely detailed representation of the real geological structure of the reservoir. In a later stage, after a process of simplification and reduction of the number of cells (upscaling), these geological models (in terms of the petrophysical characteristics of the reservoir rock) are input into the dynamic model to simulate the production behaviour of the reservoir.
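As a toy illustration of the upscaling step mentioned above, the fine-grid values falling inside one coarse simulation cell can be collapsed to single values. The averaging rules below are standard first-pass choices, not the flow-based upscaling used in production tools.

```python
import numpy as np

def upscale_block(phi_fine, k_fine):
    """Collapse the fine cells inside one coarse cell to single values.

    Arithmetic averaging is appropriate for a volumetric quantity such as
    porosity; for permeability, the arithmetic and harmonic means bound the
    true effective value (flow parallel vs. perpendicular to layering), and
    the geometric mean is a common heuristic in between.
    """
    phi_fine = np.asarray(phi_fine, dtype=float)
    k_fine = np.asarray(k_fine, dtype=float)
    phi_up = np.mean(phi_fine)
    k_arith = np.mean(k_fine)                      # upper bound (parallel flow)
    k_harm = len(k_fine) / np.sum(1.0 / k_fine)    # lower bound (series flow)
    k_geom = np.exp(np.mean(np.log(k_fine)))       # common compromise
    return phi_up, (k_harm, k_geom, k_arith)
```

For a block containing fine-cell permeabilities of 1 and 100 mD, the three estimates span roughly 2, 10 and 50 mD, which shows how strongly the averaging choice matters.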

559VOLUME I / EXPLORATION, PRODUCTION AND TRANSPORT

STATIC RESERVOIR STUDY

Fig. 4. Example of a stochastic model of facies (courtesy of L. Cosentino).


4.5.5 Petrophysical model

Fluid flow in a reservoir takes place in an interconnected grid of porous spaces within the reservoir rock. The characteristics of this grid determine the quantity of fluids present, their relative distribution, and the ease with which they can flow towards production wells.

The properties of this porous system are linked to the characteristics (mineralogical, granulometric, and textural) of the solid particles which bound it. These in turn are a function of the original depositional environment and the post-depositional processes (diagenesis, cementation, dissolution, fracturing) which may have affected the rock after its formation.

The quantitative study of the porous space in reservoir rock forms a part of petrophysics, a discipline which plays a fundamental role in reservoir studies. This represents the basis for the dynamic description of fluid flow, and thus of the behaviour (observed or predicted) of production wells. For this reason it is essential to devote sufficient time and resources to this stage, both in terms of data collection and analysis (including laboratory tests on cores), and in terms of interpretation, in order to generate a representative petrophysical model of the reservoir.

This section is divided into two parts: the first is devoted to the petrophysical interpretation in the strict sense of the word – the quantitative evaluation of reservoir properties in the wells. Special emphasis will be given to the most important parameters (porosity, water saturation and permeability) which make up a typical petrophysical well interpretation. This will be followed by a discussion of the problem of determining the cut-off value to be applied to petrophysical parameters to obtain the net pay of the reservoir in question – in other words the portion of rock which actually contributes to production. The second part is devoted to the distribution within the reservoir of the petrophysical parameters calculated at the wells, dealing separately with 2D and 3D representations, and a description of the main deterministic and stochastic techniques adopted for this purpose.

Petrophysical well interpretation

In a classic petrophysical interpretation we generate, for each well in the reservoir, a series of vertical profiles describing the main properties of the reservoir rock’s porous system, such as porosity, water saturation and permeability. This analysis also provides a more or less sophisticated mineralogical interpretation of the solid part of the system, in other words the reservoir rock itself. Fig. 5 shows a typical example of a petrophysical interpretation, including the results in terms of petrophysical and mineralogical parameters.

Both the properties of the porous system and the composition of the solid part can be analysed and measured directly on cores. In this case, the results can generally be considered fairly accurate, at least where the cored portions are actually representative of the reservoir rock. However, cores often cover only a limited portion of the total number of intervals crossed by the wells. Consequently, the petrophysical interpretation is generally carried out using available logs, whereas cores are used to calibrate the interpretation algorithms and to check the results. Below is a brief description of the main petrophysical parameters, and the techniques used to determine them, as already described in Chapter 4.1.

Porosity

The determination of porosity (see Chapters 1.3 and 4.1) can generally be considered the least complex stage in a petrophysical interpretation. However, this stage is extremely important, since it defines the quantity of hydrocarbons present in the reservoir. In the laboratory, porosity is measured on rock samples whose linear dimensions are generally limited to 1-1.5 inches, using techniques which involve the extraction of fluids from, or, vice versa, the introduction of fluids into, the sample’s porous system. These techniques, which have been in use for over 40 years, generally provide fairly accurate values, and can also be applied under conditions of temperature and pressure corresponding to original reservoir conditions.

Fig. 5. Example of a petrophysical well interpretation (courtesy of L. Cosentino).

560 ENCYCLOPAEDIA OF HYDROCARBONS

OIL FIELD CHARACTERISTICS AND RELEVANT STUDIES

The problems associated with this type of measurement, where they exist, are linked to the representativeness of the rock sample. A typical example is the measurement of secondary porosity, which, being linked to genetic factors whose spatial intensity is extremely irregular, may not be at all representative of average reservoir conditions. As such, it may be difficult to determine the porosity of fractured rocks, or rocks affected by intense dissolution and/or cementation phenomena. Another example of poor representativeness is provided by rocks of conglomerate type, in which the distribution of the porous system is highly irregular, at least at the core scale.

The methods most frequently used to determine porosity are those based on the interpretation of well logs. The quantitative interpretation of porosity is of particular significance in reservoir studies when the determination of the porous volume of the reservoir turns out to be highly complex. This is true, for example, of old fields, with little data of low quality and resolution; of carbonate reservoirs, characterized by prevalently secondary porosity; and of fractured reservoirs, where well instruments may at times turn out to be completely inadequate for a quantitative calculation of porosity.
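For a concrete example of a log-based porosity estimate, the single-mineral density-log transform is the simplest case. The matrix and fluid densities below are illustrative defaults (quartz matrix, fresh mud filtrate) and must be adapted to the actual mineralogy and fluids.

```python
def density_porosity(rho_log, rho_matrix=2.65, rho_fluid=1.0):
    """Total porosity from the bulk-density log (standard transform):

        phi = (rho_ma - rho_b) / (rho_ma - rho_fl)

    rho_log is the measured bulk density (g/cm3); the defaults
    2.65 g/cm3 (quartz) and 1.0 g/cm3 (filtrate) are assumptions.
    """
    return (rho_matrix - rho_log) / (rho_matrix - rho_fluid)
```

A bulk density reading of 2.32 g/cm3 in a clean quartz sand thus corresponds to about 20% porosity; in shaly or multi-mineral intervals this single-mineral model is no longer adequate.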

In all of these cases it is essential to integrate the normal petrophysical interpretation, based on log and core data, with all those techniques, static and dynamic, which may provide information, even of an indirect nature, on the porous volume of the reservoir. This integration process may make a fundamental contribution to the evaluation of the porous volume of the reservoir, and the understanding of its spatial distribution.

Water saturation

The porous system of a reservoir rock is filled with fluids, typically water and hydrocarbons. The relative distribution of these fluid phases within the porous space depends on a series of factors linked to the chemical and physical properties of the rock and the fluids themselves, as well as the interaction between rock and fluid (the rock wettability). Determining the saturation conditions of the reservoir rock represents one of the most important stages in a reservoir study, since it influences not only the calculation of the amount of hydrocarbons in place, but also the determination of fluid mechanics, and thus the productivity of the wells. This is generally a complex stage, which frequently involves a high degree of uncertainty in the final construction of the integrated reservoir model.

The water saturation of a rock, like its porosity, may be measured on cores, or on the basis of logs. In the laboratory, meaningful measurements of water saturation may be obtained using Dean-Stark type extraction data on preserved samples, at least in cases where mud filtrate invasion is limited, and where the expansion of the gaseous phase does not lead to a significant change in the sample’s initial saturation conditions. We can often obtain data of considerable accuracy by using suitable coring techniques and non-invasive oil-based drilling muds, at least in areas of the reservoir which are distant from the transition zone, also known as the capillary fringe. An example of a systematic study of this type, carried out on the Prudhoe Bay field in Alaska, is described in McCoy et al. (1997).

The water saturation of a rock may also be determined using capillary pressure measurements, based on the fact that capillary forces are responsible for the relative distribution of water and hydrocarbons within the porous space.

For the purposes of reservoir studies, water saturation is mainly measured on the basis of well logs recorded in uncased boreholes, and particularly electrical resistivity/induction logs, generally using the famous Archie equation, first published back in 1942 (Archie, 1942). In cased boreholes, on the other hand, water saturation can be measured using data obtained with pulsed neutron-type instruments. These also have the advantage of being recordable through the production tubing, while the well is producing. These instruments are often used in systematic monitoring of the evolution of saturation conditions in reservoirs, and therefore represent an extremely interesting source of information for reservoir studies. For example, the ability to monitor the advancement of oil-water or gas-water contacts in the various zones of the reservoir as a function of time not only allows us to optimize the management of the field, but also provides information which is essential in calibrating the results of the reservoir model.
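The Archie relation cited above can be written out directly. The a, m and n values used as defaults here are the textbook clean-sand values, not parameters from this text; as noted for the other petrophysical transforms, they should be calibrated on cores.

```python
def archie_sw(rt, phi, rw, a=1.0, m=2.0, n=2.0):
    """Water saturation from the Archie (1942) equation:

        Sw = ( a * Rw / (phi**m * Rt) ) ** (1/n)

    rt: true formation resistivity (ohm.m), phi: porosity (fraction),
    rw: formation water resistivity (ohm.m). a (tortuosity factor),
    m (cementation factor) and n (saturation exponent) are rock
    dependent; a=1, m=n=2 are the classic clean-sand defaults.
    """
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)
```

For instance, with Rt = 10 ohm.m, phi = 0.25 and Rw = 0.05 ohm.m, the defaults give a water saturation of about 28%, i.e. a hydrocarbon-bearing interval.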

Permeability

Permeability (see Chapter 4.1) is without doubt the most important petrophysical reservoir parameter. Permeability determines both the productivity of the wells and the reservoir’s ability to feed drainage areas, and thus the reservoir’s capacity to sustain economic rates in the long term. On the other hand, it is also the most difficult parameter to determine. Permeability is a property which can be measured directly only on cores, logs generally allowing us to obtain only rough estimates. Furthermore, permeability is usually characterized by extremely high spatial variability, which makes it difficult to estimate even in areas adjoining available measurement points. The determination of permeability is thus an important and complex stage in the reservoir study, which requires an integration of all available data, and consequently a high degree of cooperation between the engineers and geologists in the work team.

Estimates of a reservoir’s permeability are generally carried out on the basis of available core data, preferably calibrated on the results of production test interpretations, when these exist. Although this approach generally provides acceptable results, reservoir engineers are frequently forced to modify radically the distribution of permeability values in the simulation model during the subsequent dynamic modelling phase, so as to replicate the production behaviour observed in the wells. This clearly indicates an incorrect initial determination of permeability.

The best method for satisfactorily defining the initial distribution of permeability values in a reservoir is undoubtedly the integration of the different sources which provide direct or indirect information on this property. These sources are more numerous than is generally thought, and the integration process often allows us to generate fairly accurate permeability models, which turn out to be adequate during the simulation phase.

Below is a brief list of some of the available techniques providing information on the permeability of a reservoir. Each of these techniques supplies data referring to a given ‘support volume’ (i.e. reference scale), given saturation conditions (i.e. relative or absolute permeability) and given measuring conditions (in situ or laboratory-based). When integrating these data, it is therefore necessary to carry out a normalization process which takes these differences into account.

Core analysis

Absolute permeability can be measured in the laboratory on core samples of various sizes. These measurements, which represent the only source of direct data, may refer both to laboratory and reservoir conditions. The measured data are then adjusted to take into account the so-called Klinkenberg effect (gas slippage along the pore walls during gas-based laboratory measurements), and the effects of overburden pressure. The most critical aspect of this type of data is the extremely small support volume, which often renders the measurements unrepresentative of the reservoir as a whole.
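The Klinkenberg adjustment mentioned above has a simple numerical form: gas permeabilities measured at several mean pressures are extrapolated to infinite pressure, where slippage vanishes. A minimal sketch, assuming measurements at three or more mean pressures:

```python
import numpy as np

def klinkenberg_k_inf(k_gas, p_mean):
    """Slip-free (liquid-equivalent) permeability from gas measurements.

    The Klinkenberg relation k_gas = k_inf * (1 + b / p_mean) is linear
    in 1/p_mean, so a straight-line fit of k_gas against 1/p_mean gives
    k_inf as the intercept and the slip factor b from the slope.
    """
    inv_p = 1.0 / np.asarray(p_mean, dtype=float)
    slope, intercept = np.polyfit(inv_p, np.asarray(k_gas, dtype=float), 1)
    b = slope / intercept  # slip factor, in the same pressure units
    return intercept, b
```

Gas permeabilities of 30, 20 and 15 mD at mean pressures of 1, 2 and 4 atm, for example, extrapolate to a slip-free permeability of 10 mD.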

Minipermeameter analysis

Permeability can be rapidly and accurately measured in the laboratory using an instrument known as a minipermeameter. Comparison with the normal measurements obtained from cores under ambient conditions often shows a good level of coherence between these two types of data. The significance of this type of measurement lies in the possibility of identifying small-scale heterogeneities. In this case, too, the critical aspect is represented by the sample volume, even smaller than that of normal core samples. Furthermore, the measurements only refer to laboratory conditions.

Interpretation of well tests

The permeability of a formation can be estimated (see Chapter 4.4) through the interpretation of well tests (flowing and pressure build-up, injectivity and interference tests, etc.). These interpretations provide values for effective permeability to hydrocarbons under reservoir conditions, and refer to a much larger support volume than any other technique. Where good quality pressure data are available, well tests allow us to estimate the average permeability of a reservoir with considerable accuracy.

Production logs (PLT)

These tools are generally used to monitor wells (see Chapter 6.1); however, where a production test is available it is possible to use PLT (Production Logging Tool) data to calculate a permeability profile at the well (Mezghani et al., 2000). These data refer to the effective permeability to hydrocarbons under reservoir conditions, and generally represent an interesting link between the dynamic estimates deriving from the interpretation of well tests, and the static estimates which can be obtained, for example, by logging. However, it is necessary to take into account the possible damage (skin) suffered by the geological formation around the well.

Wireline Formation Testing (WFT)

This is a test which measures formation pressures at predetermined depth intervals, by carrying out short flowing and pressure build-up phases. These are interpreted in the same way as a flowing test to obtain estimates of permeability. In this case the values obtained can be considered to refer to the permeability of the fluids present in the invaded zone, under reservoir pressure and temperature conditions.

Nuclear Magnetic Resonance (NMR) logs

Nuclear magnetic resonance tools represent the only means of obtaining a continuous vertical profile of permeability in the well. Permeability is calculated using equations based on the proton relaxation time, and the results obtained may be fairly accurate, especially where some of the input parameters can be calibrated on measurements carried out on core samples in the laboratory.

Petrophysical correlations

Permeability is often obtained using a correlation with porosity based on core measurements (Nelson, 1994). However, this method tends to generate permeability profiles which are unnaturally regular; there are various types of statistical data processing which allow us to preserve, at least in part, the heterogeneity of the original permeability distribution. These include, for example, regressions for individual lithological facies, and multiple linear regressions (Wendt et al., 1986).
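The classic poro-perm correlation is a straight-line fit of porosity against the logarithm of core permeability. A minimal sketch, to be applied separately per lithological facies (as the text suggests) in order to preserve part of the heterogeneity:

```python
import numpy as np

def poro_perm_regression(phi, k):
    """Fit the semilog core transform log10(k) = a*phi + b.

    phi: core porosities (fraction); k: core permeabilities (mD).
    Returns a predictor mapping porosity values to permeability (mD).
    """
    a, b = np.polyfit(np.asarray(phi, dtype=float),
                      np.log10(np.asarray(k, dtype=float)), 1)
    return lambda p: 10.0 ** (a * np.asarray(p, dtype=float) + b)
```

Because the prediction collapses the scatter around the fitted line, profiles built this way are smoother than the underlying data, which is exactly the regularity problem the text describes.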

Empirical equations

Various empirical equations exist in the relevant literature for the estimation of permeability on the basis of known petrophysical parameters. In some specific cases, these equations may provide fairly acceptable results, but it is always important to check them against available core data.
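One widely quoted family of such equations links permeability to porosity and irreducible water saturation. The coefficients below are one published variant of the Timur (1968) correlation rewritten for fractional units; they are shown purely for illustration and, as the text stresses, must always be checked against core data.

```python
def timur_like_perm(phi, swi, a=8581.0, b=4.4, c=2.0):
    """Timur-type empirical permeability estimate:

        k (mD) = a * phi**b / swi**c

    phi: porosity and swi: irreducible water saturation, both as
    fractions. The default coefficients are one literature variant,
    not calibrated values for any particular reservoir.
    """
    return a * phi ** b / swi ** c
```

The functional form captures the expected behaviour: permeability rises steeply with porosity and falls as the irreducible water saturation (a proxy for fine pores) increases.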

Neural networks

This is a more recent methodology, which allows us to generate permeability profiles using logs or other petrophysical profiles. The most interesting aspect of this methodology (Mohaghegh and Ameri, 1996) is that the estimates obtained correctly represent the original degree of heterogeneity of the measured data, and the results do not suffer, as statistical methods do, from the smoothing effect. Particular attention should be paid during the preliminary ‘training’ process of the neural networks; this requires adequate calibration data, without which the results obtained may be misleading. Table 1 summarizes the characteristics of these various methods.

By integrating the data derived from these different techniques we can often generate reliable permeability models, which reflect both the static and dynamic aspects of this property. This allows us to improve and shorten the validation phase (history matching) of the dynamic simulation model, thereby optimizing both the quality of the reservoir study and the time required to perform it.

Determination of net pay

The net pay of a reservoir represents that portion of rock which effectively contributes to production. This value is calculated using appropriate cut-off values applied to petrophysical parameters. Although the simplicity of the term might lead one to think otherwise, cut-off is one of the most controversial concepts within the community of geologists and reservoir engineers, since there is no clear shared methodology for its definition. This is also evident from the lack of literature on the subject, despite the fact that the determination of net pay is practically unavoidable in any reservoir study (Worthington and Cosentino, 2003).

One of the key points in determining the cut-off to be applied to petrophysical curves is an understanding of its dynamic nature. This is because the cut-off is linked to conditions that imply the productive capacity of hydrocarbons under given pressures, and with a given development plan. Typically, a porosity cut-off is selected on the basis of permeability versus porosity graphs drawn up using data obtained from core analysis, thus fixing a limit value for permeability, often equivalent to a conventional value of 1 mD (millidarcy).

In selecting the cut-off, we must consider at least the following two factors. Firstly, the cut-off must be chosen on the basis of fluid mobility rather than permeability alone. Consequently, in the same geological formation, the value of the cut-off varies as a function of the fluid present. This is why many of the world’s gas fields produce from reservoirs with extremely low permeability, just a few mD, whereas the cut-offs normally applied to heavy oil reservoirs are in the order of tens of mD. Typical cut-off values for mobility lie in the range of 0.5-1 mD/cP.

Table 1. Characteristics of the various methods employed

Method               Scale  Pressure/temperature  Saturation  Measurement
Core analysis        Macro  Ambient/in situ       Absolute    Direct
Minipermeameter      Micro  Ambient               Absolute    Direct
Production tests     Mega   In situ               Relative    Indirect
PLT                  Mega   In situ               Relative    Indirect
WFT                  Macro  In situ               Relative    Indirect
NMR                  Macro  In situ               Absolute    Indirect
Regressions          Macro  In situ               Absolute    Indirect
Empirical equations  Macro  In situ               Absolute    Indirect
Neural networks      Macro  In situ               Absolute    Indirect

Secondly, the choice of cut-off must be a function of the production mechanisms. In reservoirs which produce by simple fluid expansion (depletion drive), the value of the cut-off depends on the prevalent pressure level. It is obvious that rocks with low permeability, subjected to a high pressure differential (the difference between reservoir pressure and the pressure imposed in the production tubing), can contribute to production. As a result, in a reservoir of this type, the real cut-off changes over time, as pressure differences increase. The cut-off’s dependency on time emphasizes another aspect of this complex problem. By contrast, in reservoirs dominated by convective phenomena (e.g. reservoirs subjected to secondary recovery processes using water injection), where pressure does not change significantly during production, the cut-off depends more on the efficiency of the displacement process, and is thus more generally linked to concepts of Residual Oil Saturation (ROS).

It should be stressed, however, that even taking into consideration the aspects described above, the selection of an appropriate cut-off value is difficult, and often elusive. This explains why such a choice is highly subjective and difficult to justify. It is no coincidence that one of the most controversial aspects of ‘reservoir unitization’ processes (the pooled production of a reservoir which extends over two or more production leases, by agreement or imposed by law) is precisely the choice of cut-off and the determination of net pay.

The main problem in determining the cut-off lies in the choice of the reference value for permeability, which represents the boundary between productive and non-productive rocks. Various factors should contribute to this choice: a good knowledge of the reservoir’s lithologies and fluids, the prevalent production mechanism, and the analysis of data which may provide direct or indirect information for the purpose (production tests and DSTs, measurements performed with WFT and NMR-type tools, etc.). The integration of all these types of information allows an appropriate choice of the values to be adopted.

Once the permeability cut-off value for commercial production has been defined, the other associated petrophysical cut-off values may be obtained fairly simply on the basis of diagrams (crossplots) of reservoir properties. This methodology is illustrated schematically in Fig. 6.
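Once a consistent set of cut-offs has been derived, applying them to the interpreted well profiles is straightforward: a depth sample counts as pay only if it passes all thresholds, and net pay is the cumulative thickness of such samples. The threshold values below are purely illustrative placeholders, not values from the text.

```python
import numpy as np

def net_pay_thickness(depth_step, phi, vsh, sw, k,
                      phi_c=0.08, vsh_c=0.40, sw_c=0.60, k_c=1.0):
    """Net pay from a consistent set of petrophysical cut-offs.

    depth_step: sampling interval of the profiles (m). A sample is pay
    if porosity and permeability are above, and shale volume and water
    saturation below, their respective cut-offs. Returns the net pay
    thickness and the boolean pay flag per sample.
    """
    pay = (np.asarray(phi) >= phi_c) & (np.asarray(vsh) <= vsh_c) \
        & (np.asarray(sw) <= sw_c) & (np.asarray(k) >= k_c)
    return pay.sum() * depth_step, pay
```

Rerunning this with alternative cut-off sets is a cheap way to carry out the sensitivity analysis on hydrocarbons in place recommended later in this section.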

Where a lithological classification is available (see above), this procedure should be carried out independently for each facies. This usually results in greater accuracy, and consequently a more effective distinction between producing and non-producing rocks. In some cases, the lithological classification may also lead to the definition of reservoir and non-reservoir facies, thus making the determination of net pay even easier, especially when working on complex three-dimensional geological models.

Finally, it is always advisable to perform sensitivity analyses on the method employed, by using different working hypotheses, and thus different cut-off values, and noting the variations in the final values for the volume of hydrocarbons in place. This phase often allows us to refine our initial hypotheses, and to optimize our final choice.

Distribution of petrophysical parameters

The petrophysical well interpretation forms the basis for the subsequent stage of the study, consisting in the lateral (2D) or spatial (3D) distribution of reservoir properties. In both cases, the most complex problem is the lack of information on those parts of the reservoir between wells, especially when dealing with highly heterogeneous geological formations, or those characterized by poor lateral continuity.

Fig. 6. Procedure for defining a consistent set of petrophysical cut-offs. K, permeability; Φ, porosity; SW, water saturation; Vsh, shale volume; subscript C, cut-off value.

Traditionally, the interpolation of known values measured at the wells has represented the classic methodology for the construction of reservoir maps, with the geological/sedimentological model forming the only reference point for this operation. In the past, the reservoir geologist drew these maps manually; only from the 1980s onwards did computer mapping techniques begin to be used. Since the 1990s the situation has changed radically. On the one hand, the availability of computers with increasingly high processing and graphic capabilities has definitively changed the way reservoir geologists work. On the other, the development of new methodologies such as geostatistics, and the extraordinary evolution of techniques for acquiring and processing geophysical data, have provided geologists with new tools, allowing them to build more accurate and less subjective models. In the following sections we will describe separately the two possible approaches: two-dimensional and three-dimensional modelling.

Two-dimensional modelling of reservoir parameters

Two-dimensional geological modelling consists in the generation of a set of maps representing the lateral distribution of reservoir parameters. We can distinguish between two basic types of map: those which describe the geometry of geological units (top, bottom and thickness of the various layers: see above), and those which describe their petrophysical properties: porosity, water saturation, net/gross ratio, and permeability. It should be stressed that the latter type of map, whilst not strictly speaking necessary for the static model, is essential for dynamic simulations.

The procedures used to generate maps of porosity and net/gross (the ratio of net pay to gross thickness) are basically similar. Mean values are calculated in the wells for each geological unit, and these values are then adopted for the interpolation process by using computer mapping techniques. In the simplest cases, as we have said, this operation is performed solely on the basis of the sedimentological reservoir model, with fairly reliable results, at least where there is a high density of existing wells. However, considerable attention must be paid to the peripheral areas of the reservoir, where the mapping algorithm may extrapolate meaningless values. In these cases, it is common practice to use reference control points, which prevent this type of uncontrolled extrapolation.

This procedure may be improved by using geostatistical techniques. In this case, the correlation function adopted is not predefined, as in the case of commercial algorithms. Instead, it is calculated directly on the basis of the available data, with obvious benefits in terms of the accuracy of the end result. These correlation functions (the variogram, or its opposite, the covariance) express the real lateral continuity of the variable being modelled, and also allow us to take into account possible directional anisotropies. The geostatistical algorithm used in the next stage of the evaluation process is known as kriging. This algorithm allows us to represent accurately the lateral distribution of the parameters, and has the additional advantage of providing an evaluation of the local uncertainty (kriging variance).
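The kriging step can be illustrated with a minimal ordinary-kriging solver. This sketch assumes an exponential variogram with user-supplied parameters; production tools add variogram fitting from the data, search neighbourhoods and anisotropy, all of which are omitted here.

```python
import numpy as np

def exp_variogram(h, sill=1.0, vrange=1000.0, nugget=0.0):
    """Exponential variogram model gamma(h); vrange is the practical range."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_kriging(xy_wells, values, xy_target, **vparams):
    """Ordinary kriging estimate of well values at one target location.

    Solves the standard OK system written in terms of variograms, with a
    Lagrange multiplier enforcing unbiasedness (weights summing to one).
    Returns the estimate and the kriging variance.
    """
    xy_wells = np.asarray(xy_wells, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    # Left-hand side: well-to-well variograms plus the unbiasedness row.
    d = np.linalg.norm(xy_wells[:, None, :] - xy_wells[None, :, :], axis=2)
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = exp_variogram(d, **vparams)
    a[:n, n] = 1.0
    a[n, :n] = 1.0
    # Right-hand side: well-to-target variograms plus the constraint.
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy_wells - np.asarray(xy_target),
                                         axis=1), **vparams)
    sol = np.linalg.solve(a, b)
    w, mu = sol[:n], sol[n]
    return w @ values, w @ b[:n] + mu
```

With a zero nugget, kriging is an exact interpolator: at a well location it returns the well value with zero kriging variance, while the variance grows away from the wells, which is the local uncertainty measure mentioned above.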

A further improvement of the expected results may be obtained by using seismic data. Geophysics is the only direct source of information on areas of the reservoir which are distant from wells, and in recent years the geophysical techniques available for this purpose have improved considerably. This approach is based on a possible correlation between particular characteristics (or ‘attributes’) of the recorded seismic signal, and the petrophysical characteristics of the reservoir (typically porosity and/or net pay). This correlation is defined in the calibration phase, by comparing surface seismic data with data measured at the wells (sonic and velocity logs, VSP, etc.). Once the correlation has been defined, we proceed to integrate the seismic data, generally using the following methods (in order of complexity):
• The normal well data interpolation, improved by using maps of seismic attributes; these are used to calculate the large-scale trend of the parameter under consideration.
• The conversion of the map of the seismic attribute (e.g. amplitude or acoustic impedance) into a porosity map, using the correlation defined at the wells. The resulting map is later modified to be consistent with the available well values.
• The geostatistical approach, using spatial distribution functions calculated on the basis of the correlation between well data and seismic data. The use of collocated cokriging techniques (Xu Wenlong et al., 1992) has become widespread in recent years.

Fig. 7 shows an example of a porosity map generated by integrating information obtained from wells with geophysical data.

This type of approach to the construction of reservoir maps is becoming increasingly common in current practice, due mainly to the availability of highly sophisticated software applications. These allow us to visualize seismic and traditional geological data simultaneously, with obvious benefits for the modelling process. However, considerable care is required in these operations, since seismic signals are influenced by a broad range of factors (lithology, petrophysical characteristics, fluid content, overlying formations), and it is thus important to check the correlation between seismic data and well data carefully. Spurious correlations are more common than one might think, especially where only a few wells are available for control (Kalkomey, 1997).

There are also various methodologies for the production of water saturation maps. As for porosity and net/gross, the most traditional technique is based on the direct mapping of the values measured at the wells for each geological layer. This procedure works fairly well where a large number of wells are available, and has the added advantage of reflecting the values effectively measured in the wells themselves. However, this methodology fails to take into account the correlation with other petrophysical parameters (porosity and permeability), and does not allow an accurate reproduction of the capillary fringe (see Chapter 4.1). Moreover, it is prone to consistency problems in the petrophysical interpretation of the various wells.

Another technique frequently used to generate saturation maps consists in the direct application of a porosity-water saturation correlation. In cases where the pore geometry is relatively simple, we can frequently observe a linear correlation between these parameters on a semilogarithmic scale. The main advantage of this technique lies in its speed of execution and the consistency of its results. However, it does not allow us to model the capillary fringe; its principal application is thus for gas fields, and in general for those reservoirs where the height of the capillary fringe can be disregarded.
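A minimal sketch of this porosity-saturation transform, assuming (as is common) that saturation is the logarithmic axis of the semilog crossplot:

```python
import numpy as np

def poro_sw_transform(phi_data, sw_data):
    """Fit the semilog porosity-water saturation correlation.

    Fits log10(Sw) as a linear function of porosity on well data and
    returns a function mapping mapped porosity values to saturation,
    clipped to the physical range [0, 1].
    """
    a, b = np.polyfit(np.asarray(phi_data, dtype=float),
                      np.log10(np.asarray(sw_data, dtype=float)), 1)
    return lambda phi: np.clip(10.0 ** (a * np.asarray(phi, dtype=float) + b),
                               0.0, 1.0)
```

Applying the fitted function directly to a porosity map yields the saturation map, which is why the method is fast and internally consistent, but carries no information about height above the contact.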

Other techniques for generating saturation maps rely on the application of capillary pressure curves, which reproduce the distribution of the fluid phases relative to the height above the water-hydrocarbon contact. These functions may be derived from capillary pressure data measured in the laboratory (see above), or they may be calculated on the basis of multiple linear regressions. In the latter case, both petrophysical (porosity) curves and the height above the contact are used, and this allows us to take into consideration simultaneously the dependence on the porous system, and on the distance from the interface between the fluids. These methods, whilst more time-consuming, generally represent the most satisfactory compromise for the generation of saturation maps, partly because the methodology employed is similar to that used for dynamic simulation in the initialization phase of the model. In this light, this method promotes greater consistency between the hydrocarbons-in-place values calculated during the geological modelling phase, and those calculated during the dynamic simulation phase.
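A saturation-height function of the regression family just described can be sketched as a power law in height and porosity. All coefficients below are placeholders: in practice they are fitted on laboratory capillary pressure data, and the functional form itself is only one of several in use.

```python
import numpy as np

def sw_height_model(h, phi, a=2.0, b=-0.5, c=-1.0, sw_irr=0.15):
    """Illustrative saturation-height function Sw = a * h**b * phi**c.

    h: height above the free water level (m); phi: porosity (fraction).
    The result is floored at an assumed irreducible saturation and
    capped at 1 (fully water-bearing at and below the contact).
    """
    h = np.asarray(h, dtype=float)
    sw = np.where(h <= 0.0, 1.0,
                  a * np.maximum(h, 1e-9) ** b * phi ** c)
    return np.clip(sw, sw_irr, 1.0)
```

Because the same function can initialize the dynamic model, using it for the saturation map is what produces the consistency between geological and simulation volumes noted above.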

The construction of an accurate permeability map is one of the most important aspects of a reservoir study, since the results of the dynamic simulation model are largely dependent on it. Various methodologies are available, and the choice of which to use depends on the characteristics of the reservoir under examination, on the available data, and on available human and technological resources. The traditional method, as in the case of porosity and net/gross, involves direct mapping of available well values. However, as compared to other petrophysical parameters, this methodology has greater limitations, linked to the following aspects.

Availability of data. Generally, the availability of data for the mapping process is more limited than for other petrophysical parameters, given that, with the partial exception of nuclear magnetic resonance logs, permeability data are available only from cored wells.

Type of data. As already discussed (see above), there are generally various possible sources of permeability data, each of which provides characteristic values relating to scale, saturation conditions and type of information (direct/indirect). The homogenization of these data required prior to a mapping process often turns out to be an arduous task, and subject to compromises.

Spatial variability. The spatial continuity (lateral and vertical) of permeability is usually much lower than that of other reservoir parameters. In the case of highly heterogeneous formations, this continuity may be as little as one metre, or even entirely non-existent. It is worth remembering that most algorithms used in software packages assume a predefined and implicitly very high spatial continuity, which generates fairly regular maps. In the case of permeability this is often unrealistic.

566 ENCYCLOPAEDIA OF HYDROCARBONS

OIL FIELD CHARACTERISTICS AND RELEVANT STUDIES

Fig. 7. Example of a porosity map generated by integration with seismic data (courtesy of L. Cosentino).

Despite this, the mapping of permeability using production tests carried out in wells may generate accurate maps, especially when a sufficiently large number of tests is available. These permeability values are often highly representative, and allow us to produce consistent maps which are particularly well-suited to dynamic simulation. In the case of fractured reservoirs, where core data prove inadequate for representing the actual reservoir permeability, this type of approach is often the only viable choice. Finally, it should be stressed that, as for other reservoir parameters, these interpolations may be further improved by using geostatistical techniques and kriging algorithms.

An alternative methodology which is frequently employed is based on the generation of a permeability map from a map of porosity, using a correlation between the two parameters, generally calculated on the basis of available core data. In this case, the resulting permeability map will intrinsically resemble that of porosity, the implicit assumption being that the spatial correlation function for these two parameters is of the same type. However, this is normally inaccurate, and the resulting maps often appear unnaturally regular. Furthermore, it should be stressed that the relationship between porosity and permeability on which this method rests is often far from clear, especially in the case of carbonate sediments. As such, the results may be improved through a careful analysis of the basic correlation, and the identification of lower-order correlations, preferably for each individual facies.
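A sketch of the facies-by-facies approach suggested here, with invented core data for two hypothetical facies:

```python
import numpy as np

# Hypothetical core data grouped by facies: porosity (fraction), permeability (mD)
core = {
    "channel_sand": (np.array([0.18, 0.22, 0.25, 0.28]),
                     np.array([120.0, 450.0, 900.0, 2100.0])),
    "shaly_sand": (np.array([0.10, 0.13, 0.16, 0.19]),
                   np.array([0.8, 3.0, 9.0, 25.0])),
}

# One poro-perm regression per facies: log10(k) = a*phi + b
models = {f: np.polyfit(p, np.log10(k), 1) for f, (p, k) in core.items()}

def perm(facies, porosity):
    """Predict permeability (mD) from porosity using the facies-specific law."""
    a, b = models[facies]
    return 10.0 ** (a * porosity + b)

# The same porosity yields very different permeabilities in different facies
print(perm("channel_sand", 0.20), perm("shaly_sand", 0.20))
```

Using a separate correlation per facies avoids forcing a single poro-perm cloud onto lithologies with different porous systems, which is the main weakness of the global-correlation approach described above.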

Three-dimensional modelling of reservoir parameters

The 2D methodology described in the previous paragraph is gradually being replaced by more complex techniques, based on a three-dimensional approach to geological modelling. It is now possible to rapidly generate and visualize three-dimensional models of any reservoir parameter, with a resolution that frequently exceeds tens of millions of cells. This means that the reservoir geologist can quickly check different working hypotheses and analyse results directly on the screen of his own computer, with obvious benefits in terms of time and the accuracy of the end results. Three-dimensional modelling may be applied to all reservoir parameters, basically using the same procedures already described for two-dimensional models.

Generally speaking, two types of approach can be identified. In the first, the distribution of petrophysical parameters is carried out directly in the three-dimensional space of the reservoir, starting from well profiles (single-stage model). This method does not require a three-dimensional lithological model of the facies (see above). In the second, the distribution is implemented on the basis of the lithological model. In this case, the petrophysical parameters are distributed following the three-dimensional modelling of the facies, in accordance with statistical laws specific to each facies (two-stage model).

The second method has the advantage of being based on a geological reference model which forms the basis for lithological modelling. This generally allows a better assignment of petrophysical properties, especially in the presence of complex lithologies characterized by different porous systems.

A particularly interesting aspect of 3D modelling is the possibility of integrating seismic data, traditionally used in a two-dimensional context, directly in three dimensions. Thanks to the availability of sophisticated processing algorithms which allow us to improve the vertical resolution of seismic data, and to the use of new techniques to characterize the seismic signal, we can identify seismic facies within the set of seismic data. These in turn can be correlated with the more traditional facies deriving from the lithological characterization of the reservoir. Fig. 8 shows an example of profiles derived from seismic data characterized in terms of seismic facies. Examples of this type represent a notable point of convergence between lithological, petrophysical and seismic modelling, the integration of which may produce extremely accurate three-dimensional models.

567 VOLUME I / EXPLORATION, PRODUCTION AND TRANSPORT

STATIC RESERVOIR STUDY

Fig. 8. Example of seismic profiles characterized in terms of seismic facies (courtesy of L. Cosentino).


4.5.6 Integrated geological model

Until a few years ago, the geological model referred to a workflow rather than an object. During the past ten years the extraordinary development of information technologies and of the spatial modelling of oil fields has occasioned such radical changes in the way reservoir geologists work, and even think, that the meaning of the geological model has changed considerably. On the one hand, it has become clear that the integration of different disciplines, static and above all dynamic, is fundamental for the correct static characterization of the reservoir. On the other, the information platforms on which we work today allow the gradual construction of a model (first structural, then stratigraphic, lithological, and finally petrophysical) which comprises and summarizes the results of the interpretations carried out by the various experts participating in the interdisciplinary study.

The integrated geological model has thus taken on a revolutionary meaning compared to the past, becoming a virtual object which represents the actual underground reservoir in a discrete (but extremely detailed) way. It is characterized quantitatively by petrophysical parameters distributed within the three-dimensional space of the reservoir, and may be modified and updated rapidly if new data become available, for example from new wells.

The theoretical and practical basis for this new approach to reservoir geology is represented by stochastic modelling. The use of stochastic (or geostatistical) models is relatively recent, but is becoming the most common practice among reservoir geologists. From the 1990s onwards numerous algorithms have been developed, the most versatile being available in commercial applications which have made them fairly simple to use. In brief (Haldorsen and Damsleth, 1990), stochastic modelling refers to the generation of synthetic geological models (in terms of facies and petrophysical parameters), conditioned to all available information, both qualitative (soft) and quantitative (hard).

These models generate equiprobable realizations, which share the same statistical properties, and which represent possible images of the geological complexity of the reservoir. There is no a priori method for choosing which realization to use in a reservoir study, and this hinders the full acceptance of these methodologies by the geological community. On the other hand, the availability of a theoretically unlimited series of realizations allows us to explore thoroughly (for a given algorithm and its associated parameters) the uncertainties linked to the available data. The stochastic approach therefore represents a considerable improvement on traditional geological modelling techniques.

Currently, the most frequently used algorithms for stochastic modelling belong to either the pixel-based or the object-based category. In pixel-based models, also known as continuous models, the variable simulated is considered a continuous random function, whose distribution (often of Gaussian type) is characterized by cut-off values which identify different facies or different intervals for petrophysical values. The most commonly used algorithms in this category are truncated Gaussian random functions (Matheron et al., 1987) and functions of indicator kriging type (Journel et al., 1990).

These models are applied especially in the presence of facies associations which vary continuously within the reservoir, as is frequently the case in geological formations of deltaic type or in shallow water marine reservoirs. No a priori assumptions are made on the form and extension of the sedimentary bodies, which are simulated solely on the basis of the spatial distribution functions used (variograms and proportionality curves). This approach is often adopted in cases characterized by relatively high net/gross ratios, in other words in prevalently sandy geological formations with intercalations of clay or other non-productive layers.
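The truncated Gaussian idea can be sketched in a few lines: a spatially correlated Gaussian field is truncated at cut-off values derived from target facies proportions. Grid size, correlation length and proportions below are arbitrary, and the sketch is unconditional (it honours no well data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid and correlation length (standing in for the variogram range) are arbitrary
nx, ny, corr = 100, 100, 7
noise = rng.standard_normal((nx + corr, ny + corr))

# Impart spatial correlation with a simple moving average of white noise
kernel = np.ones((corr, corr)) / corr**2
field = np.zeros((nx, ny))
for i in range(nx):
    for j in range(ny):
        field[i, j] = np.sum(noise[i:i + corr, j:j + corr] * kernel)
field = (field - field.mean()) / field.std()

# Cut-offs from target facies proportions (e.g. 30% shale, 50% sand, 20% cemented)
cuts = np.quantile(field, [0.30, 0.80])
facies = np.digitize(field, cuts)  # 0 = shale, 1 = sand, 2 = cemented

props = [float(np.mean(facies == f)) for f in range(3)]
print(props)
```

A production algorithm would use a variogram-based simulator and condition the field to facies observed at the wells, but the truncation step is exactly this: cut-offs turn one continuous random function into a facies model with the desired proportions.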

By contrast, object-based models, also known as Boolean models, generate three-dimensional distributions of sedimentary bodies, obtained by juxtaposing objects of simplified geometry, such as disks or tabular bodies, within a clayey matrix. The parameters of these bodies (orientation, sinuosity, length, width, etc.) can be estimated on the basis of the sedimentological model adopted, geophysical data, outcrops of comparable rocks, or production test interpretations. This type of model is used more frequently for fluvial-type reservoirs, characterized by channels or meanders located within prevalently clayey geological units, where the overall net/gross ratio is relatively low. In these contexts, we can obtain extremely interesting results, with highly realistic images of the simulated geology. By contrast, in cases where the net/gross ratio is higher (typically >40%), and when the number of conditioning wells is high, these algorithms may require an extremely long processing time.
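A minimal unconditional Boolean sketch, dropping tabular sand bodies into a clayey matrix until a target net/gross is reached; all body dimensions and the target ratio are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

nx, ny = 200, 100
grid = np.zeros((nx, ny), dtype=int)  # 0 = clay matrix, 1 = sand body
target_ntg = 0.25                     # low net/gross, typical Boolean setting

while grid.mean() < target_ntg:
    # Body geometry drawn from assumed (hypothetical) length/width ranges
    length = rng.integers(20, 60)
    width = rng.integers(3, 8)
    x0 = rng.integers(0, nx - length)
    y0 = rng.integers(0, ny - width)
    grid[x0:x0 + length, y0:y0 + width] = 1  # drop a tabular sand body

print(round(float(grid.mean()), 3))
```

Real Boolean simulators add orientation and sinuosity distributions, erosion rules between bodies, and rejection steps to honour the facies observed at wells; this is also why their run time grows sharply with net/gross and with the number of conditioning wells.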

Fig. 9 shows an example of a geological model generated with a pixel-based algorithm. Note the discontinuous nature of the facies generated, whose extension and orientation depend on the parameters of the variogram used. Fig. 10, on the other hand, shows an image generated with an object-based algorithm, in which the sedimentary bodies are more clearly defined and separated from one another.

It is worth underlining that there is no a priori criterion for choosing one of these two approaches, nor any specific algorithm within these families. With the exception of the general indications based on the sedimentary environment, which are outlined above, the algorithm is chosen by the reservoir geologist carrying out the study on a largely subjective basis. Since different algorithms generate geological images which often vary considerably, especially when conditioning data are limited, the final result should clearly be understood in a statistical sense, even in those cases (the most frequent) where it is used in a deterministic way. The correct use of this type of result should therefore be seen within the much more complex context of the evaluation of uncertainties linked to geological modelling, a topic which will be discussed later.

The enormous potential of stochastic modelling lies basically in the possibility of quantitatively integrating a whole range of different types of information and data generated during the study by the various specialists. This technique is particularly useful for incorporating general geological understanding. General geological knowledge of the reservoir, based for example on known depositional models or on the existence of comparable outcrops, may be input into the stochastic model along with the often insufficient information from wells, thus allowing us to generate more realistic geological models. Recent theoretical developments (i.e. multi-point geostatistics) allow us to use data derived from general geological models in a quantitative way within the stochastic model.

Even the petrophysical interpretation of the reservoir rock, defined in the quantitative interpretation phase of the well logs, can be extended to the entire reservoir through stochastic modelling. As already mentioned, this can be done either directly, by simulating petrophysical properties, or indirectly, by simulating facies and then associating mean petrophysical values or frequency distributions with these.

The stochastic approach can also be used to simulate structural characteristics on a small or medium scale (faults and fractures), which cannot be distinguished in a deterministic way on the basis of available data. Later, these faults and fractures may also be characterized with hydraulic parameters.

Finally, the integration of dynamic data (production tests and production data) represents one of the current frontiers of stochastic modelling. It is significant because a geological model conditioned to available dynamic data, once input into the dynamic simulation model, should allow a much more rapid matching of the production history of the reservoir (history match). Currently there are no standard methodologies for performing this integration, but various approaches of notable interest have already been presented in the relevant literature.

The integrated geological model generated using a stochastic approach represents the end result of the static modelling process. Once available, it is used to estimate the amount of hydrocarbons in place and, where required, to assess the associated uncertainty.

Fig. 9. Example of a geological model created with a pixel-based algorithm (courtesy of L. Cosentino).

Fig. 10. Example of a geological model created with an object-based algorithm (courtesy of L. Cosentino).

4.5.7 Calculation of hydrocarbons in place

The determination of Original Hydrocarbons In Place (OHIP, or OOIP for oil and GOIP for gas) is generally considered the final stage of the static reservoir study. It is during this stage that the description of the reservoir, in terms of external and internal geometry and the properties of the reservoir rock, is quantified through a number expressing the amount of hydrocarbons present in the reservoir at the time of discovery.

In fact, the most important number for the economic evaluation of a field is that relating to the reserves: in other words, that portion of hydrocarbons which can actually be recovered with a given development plan. The relation between hydrocarbons in place and Recoverable Reserves (RR) is expressed by the well-known equation:

[1] RR = OHIP · Rf

where Rf is the recovery factor. The value of this factor, and consequently of the reserves, depends both on the geological characteristics of the reservoir and on a series of other elements such as the type of hydrocarbon, the characteristics of the drive mechanisms, the development plan adopted, the surface equipment, gas and oil prices, etc. (see Chapter 4.6). The value of hydrocarbons in place, on the other hand, is independent of these factors, and therefore extremely important, especially because it gives a clear and immediate picture of the importance and potential of the existing accumulation.

Basically, there are two techniques for estimating hydrocarbons in place: the traditional method, based on geological volumetric calculation techniques; and methods based on material balance (see Chapter 4.3). In this context, it is worth remembering that dynamic simulation does not provide an independent estimate of hydrocarbons in place, since the values calculated by the simulator simply derive from the geological model used as input.

Below, only geological evaluation methods are described in detail. It should be stressed, however, that material balance techniques may often provide extremely accurate estimates of hydrocarbons in place, and that it is the reservoir geologist's task to check the agreement between the various methods, and to justify any disagreements.

Volumetric evaluations

These refer to estimates of the quantity of original hydrocarbons in place calculated using the results of the integrated geological model. These estimates are based on the following formula:

[2] OHIP = GBV · N/G · φ · (1 − Sw)

where GBV is the Gross Bulk Volume of rock in the reservoir; N/G the net to gross (ratio of net pay to gross thickness); φ the porosity (fraction); Sw the water saturation (fraction); and (1 − Sw), equal to Sh, the hydrocarbon saturation (fraction).

If we know the mean values of these parameters for the reservoir in question, we can quickly calculate the amount of hydrocarbons in place. In fact, in common practice, this calculation is not performed using mean values (except as a preliminary evaluation), but rather using surfaces (in two dimensions) or volumes (in three dimensions) representing the spatial distributions of the parameters in the equation. All the computer applications commonly used in two- or three-dimensional static modelling supply the relevant calculation algorithms, allowing us to obtain the volume of hydrocarbons in place simply and rapidly.

In traditional two-dimensional modelling, based on the combination of surfaces (grids), we obtain a map known as the equivalent hydrocarbon column (Gross pay × N/G × φ × Sh), which provides a clear and immediate picture of the hydrocarbon distribution within the reservoir. The value of OHIP is then obtained simply by integrating this map. In the case of three-dimensional models, the OHIP value is calculated directly on the basis of the integrated geological model, using suitable calculation algorithms which sum the volumes of hydrocarbons present in each of the basic cells of the model. It is important to note that Eq. [2] supplies a value for OHIP under reservoir conditions. To convert this into surface conditions, we need to take into consideration the variation in volume that oil and/or gas undergo when they reach the surface. This variation in volume, which is mainly a function of pressure, is measured experimentally in the laboratory, and is known as the Formation Volume Factor (FVF). In the case of oil, the equation linking downhole volume and surface volume is as follows:

[3] OHIP_ST = OHIP_R / Bo

where OHIP_ST is the volume under stock tank conditions, OHIP_R is the volume under reservoir conditions, and Bo is the FVF of the oil, expressed in reservoir barrels over stock tank barrels. In the case of gas, the FVF is indicated with an equivalent volume factor, Bg.

It should be emphasized that the application of this formula often leads to misunderstandings, because reports on the PVT analysis of reservoir oils (see Chapter 4.2) usually give different values for the volume factor, according to the experiments carried out in the laboratory. We can thus define a differential Bo, a flash Bo, and other Bo types deriving from separation tests at different pressures and temperatures. These Bo values usually differ from one another, especially in the case of volatile oils. Furthermore, by combining differential Bo values with those from separation tests we can calculate a composite Bo, which takes into account both the behaviour of the oil under reservoir conditions (differential test) and the actual separation conditions at the surface. This composite value generally represents the best approximation of the fluid's volumetric behaviour, and is that which should be used in Eq. [3].

The direct use of the value under reservoir conditions given by Eq. [2] eliminates possible ambiguities relating to the choice and use of the volume factor, especially when data calculated volumetrically must be compared with data calculated using the simulation model, where the volume factors are determined using more complex calculations.
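The volumetric chain of Eqs. [2] and [3] can be illustrated with a small numerical sketch; all grid values, the cell size and the composite Bo below are invented for illustration:

```python
import numpy as np

# Hypothetical 2D parameter grids (uniform here; in practice they are mapped)
cell_area = 100.0 * 100.0            # map cell area (m^2)
gross = np.full((50, 50), 30.0)      # gross pay thickness (m)
ntg = np.full((50, 50), 0.70)        # net/gross ratio
phi = np.full((50, 50), 0.22)        # porosity (fraction)
sw = np.full((50, 50), 0.30)         # water saturation (fraction)

# Equivalent hydrocarbon column map: Gross pay * N/G * phi * Sh
ehc = gross * ntg * phi * (1.0 - sw)

# Eq. [2]: integrating the column map over the area gives OHIP at reservoir conditions
ohip_res = float(np.sum(ehc) * cell_area)   # m^3 at reservoir conditions

# Eq. [3]: conversion to stock tank conditions with the (hypothetical) composite Bo
Bo = 1.25                                    # reservoir barrels / stock tank barrels
ohip_st = ohip_res / Bo

print(ohip_res, ohip_st)
```

The same arithmetic applies cell by cell in a 3D model, with cell bulk volumes replacing the thickness-times-area product.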

Deterministic and probabilistic evaluations

Generally speaking, the volume of hydrocarbons in place may be calculated deterministically and/or probabilistically.

Deterministic values of OHIP are obtained simply by combining the mean values (in one dimension), surfaces (two dimensions) or grids (three dimensions) of the reservoir parameters indicated in Eq. [2]. These estimates are deterministic in that all the parameters are calculated in a univocal way, without taking into account the possible uncertainties associated with each of them. In other words, the representations calculated for these parameters are implicitly considered to be correct.

This is the type of estimate traditionally supplied by the reservoir geologist, and the most frequently performed. However, the process of constructing a geological model on the basis of insufficient, scattered information (wells) involves uncertainties due to errors of measurement, the lack of representative data, interpretative problems, etc. As a result, the value for OHIP obtained using this type of procedure is just one of many possible values, and depends on the specific interpretative process adopted. If we were to use, for example, a different interpolation algorithm, we would usually obtain a different value for OHIP which is, a priori, equally valid.

In contrast to deterministic evaluations, probabilistic evaluations generally provide a much more realistic estimate of the amount of hydrocarbons in place, since they also evaluate the accuracy of the estimate itself. The probabilistic approach involves taking into account the probability distributions of every single parameter involved in the calculation. Each of these probability distributions quantitatively reflects the degree of knowledge, and thus of uncertainty, of the parameter in question. In the simplest case (one dimension), these distributions are sampled repeatedly and at random (Monte Carlo method), ultimately generating a distribution of OHIP values. This distribution is characterized by statistical parameters (mean, median, standard deviation, etc.) which give a concise representation of the results obtained. In two or three dimensions, the Monte Carlo method can nevertheless be applied, replacing simple one-dimensional distributions with surface and grid distributions. In any case, the final result is still represented by a frequency distribution, and therefore a probability distribution, of OHIP values.

In general, however, when making a probabilistic evaluation of hydrocarbons in place, the preferred methodology is that of stochastic modelling.

Uncertainties relating to geological modelling

The reservoir geologist has the difficult task of reconstructing with maximum accuracy the geometry and petrophysics of a reservoir about which he usually has little, and mostly indirect, information. It is therefore obvious that the final model will always present some degree of uncertainty. The quantitative evaluation of uncertainties relating to geological modelling is one of the most complex and interesting aspects of a reservoir study.

In a typical static reservoir modelling study, we can identify four main sources of uncertainty.

Uncertainties linked to the quality of data and their interpretation. All of the basic data in a study, from geophysical data to logs and core data, are associated with errors of measurement which influence the accuracy of the final result. Even though it is in theory possible to quantify these errors, this task is rarely carried out, and the basic data are generally assumed to be correct. This is even more true of the interpretative stages.

Uncertainties linked to the structural and stratigraphic models. The structural interpretation carried out by the geophysicist is in most cases of a deterministic nature, and does not include a quantification of the associated uncertainties, although it is clear that this phase of the work is to some degree subjective. The same can be said of the correlation phase (stratigraphic model), especially when dealing with depositional environments characterized by poor lateral continuity (e.g. continental type deposits).


Uncertainties linked to the stochastic model and its parameters. Static modelling may be carried out using different stochastic algorithms, each of which will usually provide different results. Furthermore, as already mentioned, there are no fixed rules for preferring one algorithm a priori to another. A further source of uncertainty is linked to the parameters selected for the algorithm itself, for example the correlation length chosen for the variogram, or the geometries of depositional units in a Boolean algorithm. Uncertainties on these points are rarely explored, although they have a significant impact on the end results.

Uncertainties linked to the different realizations of the stochastic algorithm. These uncertainties can be quantified simply by comparing different realizations of the stochastic model used; this is the most frequently carried out evaluation, probably due to its simplicity. However, uncertainties linked to this aspect of the study are almost negligible compared to those mentioned above.

These brief considerations highlight the importance of adequately exploring the space of uncertainties. The total space of uncertainties is obviously unknown, but when we attempt a quantitative evaluation of the uncertainties of a geological model it is important to ensure that this space is adequately sampled. If we consider only the last of these four sources (the different realizations), for example, we risk quantifying in detail just a limited part of the overall uncertainty, thus creating a false sense of accuracy.

The problem becomes even more complex when we go on to evaluate uncertainties linked to the dynamic simulation phase. These are even more significant, as they have a direct impact on the economic evaluation of a reservoir, and implicitly include the uncertainties linked to the geological model. When carrying out a complete analysis of these uncertainties, for example using a massive approach (i.e. by running hundreds of dynamic simulations with different input parameters), we may find that some aspects become so significant as to almost cancel out the impact of factors which, in the static model on its own, appeared important.

Fig. 11 shows an example of a risk analysis carried out on a deep sea reservoir. In this case, the analysis took into consideration a broad series of parameters, both static and dynamic. The results show that the greatest uncertainties in the final results concern the dimensions of the aquifer (over 75%), followed by the position of the water-oil contact. Uncertainties regarding static modelling, on the other hand, are negligible, despite the use of geological models which differ considerably from one another. In this case, a detailed study of the uncertainties linked to the static model alone is obviously pointless, at least with hindsight.

References

Amaefule J.O. et al. (1993) Enhanced reservoir description: using core and log data to identify hydraulic (flow) units and predict permeability in uncored intervals/wells, in: Oil and gas strategies in the 21st century. Proceedings of the 68th conference of the Society of Petroleum Engineers, Houston (TX), 3-6 October, SPE 26436.

Archie G.E. (1942) The electrical resistivity log as an aid in determining some reservoir characteristics, «American Institute of Mining, Metallurgical, and Petroleum Engineers. Transactions», 146, 54-62.

Cosentino L. (2001) Integrated reservoir studies, Paris, Technip.

Haldorsen H.H., Damsleth E. (1990) Stochastic modelling, «Journal of Petroleum Technology», April, 404-412.

Journel A.G. et al. (1990) New method for reservoir mapping, «Journal of Petroleum Technology», 42, 212-219.

Kalkomey C.T. (1997) Potential risks when using seismic attributes as predictors of reservoir properties, «The Leading Edge», March, 247-251.

McCoy D.D. et al. (1997) Water salinity variations in the Ivishak and Sag River reservoirs at Prudhoe Bay, «Society of Petroleum Engineers. Reservoir Engineering», 12, 37-44.


Fig. 11. Results of risk analysis on a deep sea reservoir. Relative contribution of each parameter to the overall uncertainty:

aquifer: 81.79789
water-oil contact: 14.02499
unexplained: 1.64918
drawdown: 1.16589
model: 0.71339
Cr: 0.39629
skin: 0.25219
Kr: 0.00019


Matheron G. et al. (1987) Conditional simulation of the geometry of fluvio-deltaic reservoirs, in: Proceedings of the Society of Petroleum Engineers annual technical conference and exhibition, Dallas (TX), 20-30 September, SPE 16753.

Mezghani M. et al. (2000) Conditioning geostatistical models to flowmeter logs, in: Proceedings of the Society of Petroleum Engineers European petroleum conference, Paris, 24-25 October, SPE 65122.

Mohaghegh S., Ameri S. (1996) Virtual measurement of heterogeneous formation permeability using geophysical well log responses, «The Log Analyst», 37, 32-39.

Nelson P.H. (1994) Permeability-porosity relationships in sedimentary rocks, «The Log Analyst», 35, 38-62.

Vail P.R. et al. (1977) Seismic stratigraphy and global changes of sea level, in: Payton C.E. (edited by), Seismic stratigraphy. Applications to hydrocarbon exploration, «American Association of Petroleum Geologists. Memoir», 26, 63-98.

Wendt W.A. et al. (1986) Permeability prediction from well logs using multiple regression, in: Reservoir characterization. Proceedings of the Reservoir characterization technical conference, Dallas (TX), 29 April-1 May 1985, 181-221.

Worthington P., Cosentino L. (2003) The role of cut-off in integrated reservoir studies, in: Proceedings of the Society of Petroleum Engineers annual technical conference and exhibition, Denver (CO), 5-8 October, SPE 84387.

Xu Wenlong et al. (1992) Integrating seismic data in reservoir modeling. The collocated cokriging alternative, in: Proceedings of the Society of Petroleum Engineers annual technical conference and exhibition, Washington (D.C.), 4-7 October, SPE 24742.

Luca Cosentino
Eni - Agip
San Donato Milanese, Milano, Italy

