Date posted: 21-Oct-2015

Description: Internal Algorithms for PETREL

Appendix 2 - Algorithms

This appendix contains descriptions and examples of the algorithms used in Petrel. The algorithms described are those used in the Make Surface, Make Horizon and Petrophysical Modeling processes.

Averaging Methods

Interpolation Algorithms

Make Horizon Algorithms

Velocity Modeling Algorithms

Wavelets

Vertical averaging (Algorithms) (Petrophysical modeling) New!

Under vertical averaging the user can specify whether the petrophysical modeling should follow the structural layers or the horizontal plane. The vertical influence used when interpolating vertically can be set by distance or by number of cells, and can also be weighted.

Parent topic: Interpolation Algorithms

Vertical Average - moving average, exp 2, averaging follows layer

An example of a property model created with the Moving average algorithm with the exponent set to 2 together with the vertical

averaging set to follow layer.

Vertical Average - moving average, exp 2, horizontal averaging

An example of a property model created with the Moving average algorithm with the exponent set to 2 together with the vertical

averaging set to horizontal.

Vertical Average - moving average, exp 2, directional trend 45°, weight 10

An example of a property created with the Moving average algorithm with the exponent set to 2, together with a directional trend of 45° and the weight set to 10.

Vertical Average - moving average, exp 2, directional trend 45°, weight 3

An example of a property created with the Moving average algorithm with the exponent set to 2, together with a directional trend of 45° and the weight set to 3.
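The Moving average examples above amount to inverse-distance weighting with a configurable exponent. Petrel's internal implementation is not public, so the following is only a minimal sketch of the idea: each sample is weighted by 1/d^exponent, and a larger exponent localizes the estimate around nearby samples.

```python
import numpy as np

def moving_average(points, values, target, exponent=2.0):
    """Inverse-distance-weighted average of sample values at `target`.

    Illustrative sketch only: weight each sample by 1/d**exponent so that
    nearer samples dominate the estimate.
    """
    d = np.linalg.norm(points - target, axis=1)
    if np.any(d == 0):                       # target coincides with a sample
        return float(values[d == 0][0])
    w = 1.0 / d**exponent
    return float(np.sum(w * values) / np.sum(w))

# Two samples at distances 1 and 3 from the target: the nearer one dominates.
pts = np.array([[0.0, 0.0], [4.0, 0.0]])
vals = np.array([10.0, 20.0])
est = moving_average(pts, vals, np.array([1.0, 0.0]), exponent=2)
```

With exponent 2 the weights are 1 and 1/9, so the estimate lands much closer to the nearer sample's value.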

Kriging interpolation (Algorithms) (Make surface and Petrophysical modeling)

Kriging is an estimation technique / mapping method based on fundamental statistical properties of the data: the mean and the variance. It makes assumptions about the data set, such as stationarity of the properties that are analyzed, studied or modeled with geostatistical tools. The technique is defined by the interaction between the variogram parameters and the local neighborhood data.

The algorithm uses a variogram to express the spatial variability of the input data. The user can define the type of function for the variogram (Exponential, Spherical or Gaussian), and the range, sill and nugget. See Background Information on Variograms for more information on variograms. With an Exponential or Spherical variogram, the algorithm will not generate values larger or smaller than the min/max values of the input data. In some extreme cases, a Gaussian variogram with clustered input data can produce values higher than the max or lower than the min.

Nugget(unit) = Nugget/Sill, where Nugget(unit) is the nugget relative to a unit sill, and the nugget and the sill are the figures from the estimated variogram. In other words, the Sill is internally always set to 1 and the Nugget is a fraction of the Sill.
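The three variogram model types and the unit-sill nugget normalization above can be illustrated as follows. This is a sketch using standard geostatistical definitions of the three models, not Petrel's internal code.

```python
import numpy as np

def variogram(h, model="spherical", rng=1000.0, sill=1.0, nugget=0.0):
    """Evaluate gamma(h) for the three model types Petrel offers.

    The sill is normalized to 1 internally, so `nugget` is passed as a
    fraction of the sill (Nugget(unit) = Nugget / Sill).
    """
    h = np.asarray(h, dtype=float)
    if model == "spherical":
        g = np.where(h < rng, 1.5 * h / rng - 0.5 * (h / rng) ** 3, 1.0)
    elif model == "exponential":
        g = 1.0 - np.exp(-3.0 * h / rng)
    elif model == "gaussian":
        g = 1.0 - np.exp(-3.0 * (h / rng) ** 2)
    else:
        raise ValueError(model)
    return nugget + (sill - nugget) * g

# Normalizing an estimated nugget of 0.2 against an estimated sill of 0.8:
nugget_unit = 0.2 / 0.8          # the nugget relative to a unit sill
g_at_range = variogram(1000.0, "spherical", rng=1000.0, nugget=nugget_unit)
```

At the range, the spherical model reaches the (unit) sill exactly; the exponential and Gaussian forms above approach it asymptotically, reaching ~95% at the range.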

The Kriging interpolation algorithm uses the same method as the Kriging by Gslib algorithm but works internally. Differences include:

Kriging interpolation works in XYZ rather than IJK (Simbox, see Visualize a property as a regular box (simbox view)), unlike Kriging and Kriging by Gslib.

Kriging interpolation only considers data within the variogram range (this can lead to strange effects in areas with no data when trends have not been removed correctly).

Kriging interpolation and Kriging are much faster because transfer to external algorithms is not required.

Kriging and Kriging by Gslib give the user control of advanced settings (Expert tab).

Kriging and Kriging by Gslib offer Collocated Co-kriging (Co-kriging tab).

The figures below show the Kriging interpolation panel for Petrophysical modeling.

Further information related to these algorithms can be found in the GSLIB manual: GSLIB Geostatistical Software Library and User's Guide, 2nd Edition, 1998, by Clayton V. Deutsch and Andre G. Journel, or on the GSLIB website http://www.gslib.com (Support/Training section).

Additionally, collocated co-kriging is now available as an option in Make/edit surface. This option appears in the Distribution tab when the method is set to Kriging or Kriging with Gslib.

Parent topic: Interpolation Algorithms

Kriging (Algorithms) (Make surface and Petrophysical modeling)

The new Kriging algorithm introduced in Petrel 2008.1 was further improved in Petrel 2009.1. The new algorithm differs from standard GSLIB kriging in the way it searches for neighbours and in certain aspects of housekeeping regarding which matrices need to be inverted. The improvement gains come from parallelization and some additional numerical efficiency. While it is difficult to give general figures, as there are some machine dependencies, on a 4-processor machine the typical improvement is about a factor of 2.5. A fast Collocated co-kriging algorithm and some additional options extending user control over the style of Kriging have also been added.

The Kriging algorithm differences include:

Kriging works in XYZ and IJK coordinates; Kriging interpolation works in XYZ, and Kriging by Gslib works in IJK (see Visualize a property as a regular box (simbox view)).

Kriging and Kriging interpolation are much faster because transfer to external algorithms is not required.

Kriging and Kriging by Gslib offer Collocated Co-kriging.

Kriging and Kriging by Gslib give the user control of advanced settings.

This algorithm uses a variogram to express the spatial variability of the input data. The user can define the type of function for the variogram (Exponential, Spherical or Gaussian), and the range, sill and nugget. See Background Information on Variograms for more information on variograms. With an Exponential or Spherical variogram, the algorithm will not generate values larger or smaller than the min/max values of the input data. In some extreme cases, a Gaussian variogram with clustered input data can produce values higher than the max or lower than the min.

Nugget(unit) = Nugget/Sill, where Nugget(unit) is the nugget relative to a unit sill, and the nugget and the sill are the figures from the estimated variogram. In other words, the Sill is internally always set to 1 and the Nugget is a fraction of the Sill.

Parent topic: Interpolation Algorithms

The Kriging algorithm works principally like the existing Kriging by Gslib algorithm but has two main differences. First, the search algorithm uses a k-d tree (k-dimensional tree) to look for the n nearest points: the space is subdivided into subspaces to organize the data structure and speed up the data search (Figure 1). This is faster than the Super Block or Spiral search used in Kriging by Gslib and Stochastic Simulation.

Figure 1 - Example of a k-d tree; after 3 splits (red, green, blue) there are 8 subspaces for optimized data search (image from Wikipedia).
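The k-d tree search can be sketched as below: build the tree by median splits along alternating axes, then answer k-nearest queries while pruning subtrees that cannot contain a closer point. This is a minimal illustration of the data structure, not Petrel's actual implementation, and it is verified here against a brute-force search.

```python
import heapq
import numpy as np

def build_kdtree(points, idx=None, depth=0):
    """Recursively split point indices along alternating axes (median split)."""
    if idx is None:
        idx = list(range(len(points)))
    if not idx:
        return None
    axis = depth % points.shape[1]
    idx = sorted(idx, key=lambda i: points[i][axis])
    mid = len(idx) // 2
    return {"i": idx[mid], "axis": axis,
            "left": build_kdtree(points, idx[:mid], depth + 1),
            "right": build_kdtree(points, idx[mid + 1:], depth + 1)}

def knn(tree, points, q, k, heap=None):
    """Collect the k nearest points to q, pruning subtrees that cannot help."""
    if heap is None:
        heap = []                            # max-heap via (-distance, index)
    if tree is None:
        return heap
    d = float(np.linalg.norm(points[tree["i"]] - q))
    heapq.heappush(heap, (-d, tree["i"]))
    if len(heap) > k:
        heapq.heappop(heap)                  # drop the current farthest
    axis, split = tree["axis"], points[tree["i"]][tree["axis"]]
    near, far = ((tree["left"], tree["right"]) if q[axis] < split
                 else (tree["right"], tree["left"]))
    knn(near, points, q, k, heap)
    # Visit the far side only if the splitting plane is closer than the
    # current worst of the k candidates (or we have fewer than k so far).
    if len(heap) < k or abs(q[axis] - split) < -heap[0][0]:
        knn(far, points, q, k, heap)
    return heap

gen = np.random.default_rng(0)
pts = gen.random((200, 2))
q = np.array([0.5, 0.5])
found = sorted(i for _, i in knn(build_kdtree(pts), pts, q, 5))
brute = sorted(np.argsort(np.linalg.norm(pts - q, axis=1))[:5].tolist())
```

The pruning step is what makes the tree faster than a linear scan: whole subspaces are skipped whenever their bounding plane is farther away than the current worst candidate.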

The second significant difference is the way the Kriging matrix is set up and solved. The system divides the set of points to be kriged into equivalence classes of points that have the same sets of neighbours. The matrix inversion is then done once per equivalence class instead of once per point. Depending on how many neighbours are used, this can give minor or major (factor of 10 or more) speed-ups.
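The equivalence-class idea can be sketched as follows: group target points by their (identical) sets of nearest data points, so the kriging matrix for a class only needs to be inverted once and reused for every member. The data and grouping here are hypothetical illustrations, not Petrel's actual bookkeeping.

```python
from collections import defaultdict
import numpy as np

def neighbor_classes(targets, data, k=3):
    """Group target points into equivalence classes that share the same
    set of k nearest data points (sketch of the idea only)."""
    classes = defaultdict(list)
    for t_idx, t in enumerate(targets):
        d = np.linalg.norm(data - t, axis=1)
        nbrs = tuple(sorted(np.argsort(d)[:k].tolist()))
        classes[nbrs].append(t_idx)          # one matrix inversion per key
    return classes

# Four data points at the corners, three targets; the two targets near the
# origin share the same neighbour set and therefore the same kriging matrix.
data = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
targets = np.array([[1.0, 0.5], [1.2, 0.8], [9.0, 8.5]])
classes = neighbor_classes(targets, data, k=2)
```

With many targets and few distinct neighbour sets, the number of matrix inversions drops from one per target to one per class, which is where the factor-of-10 speed-ups come from.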

The Kriging algorithm in Petrel 2009 also has a fast Collocated co-kriging algorithm (see Co-kriging tab (Petrophysical modeling - Kriging and Kriging by Gslib)) and some additional options to extend user control over the style of Kriging in the Expert tab (advanced settings) (see Expert tab (Petrophysical modeling - Kriging and Kriging by Gslib)). Users who are not familiar with the Gslib algorithms should not edit these special settings.

Further information related to these algorithms can be found in the GSLIB manual: GSLIB Geostatistical Software Library and User's Guide, 2nd Edition, 1998, by Clayton V. Deutsch and Andre G. Journel, or on the GSLIB website http://www.gslib.com (Support/Training section).

Kriging by Gslib (Algorithms) (Make surface and Petrophysical modeling)

The Kriging by Gslib algorithm uses the same method as the Kriging algorithms but uses external files and the Gslib executable. Differences include:

Kriging interpolation works in XYZ rather than IJK (Simbox, see Visualize a property as a regular box (simbox view)), unlike Kriging by Gslib and Kriging.

Kriging interpolation only considers data within the variogram range (this can lead to strange effects in areas with no data when trends have not been removed correctly).

Kriging interpolation and Kriging are much faster because transfer to external algorithms is not required.

Kriging by Gslib and Kriging give the user control of advanced settings (see Expert tab (Petrophysical modeling - Kriging and Kriging by

Gslib)).

Kriging by Gslib and Kriging offer Collocated co-kriging. Collocated co-kriging is faster for Kriging than for Kriging by Gslib; see Co-kriging tab (Petrophysical modeling - Kriging and Kriging by Gslib) for detailed information.

The Expert tab (advanced settings), in which some special settings can be defined by the user, contains the same options as for the Sequential Gaussian Simulation method; the settings are internal parameters used by the Gslib algorithm. Users who are not familiar with the Gslib algorithms should not edit these special settings.

Further information related to these algorithms can be found in the GSLIB manual: GSLIB Geostatistical Software Library and User's Guide, 2nd Edition, 1998, by Clayton V. Deutsch and Andre G. Journel, or on the GSLIB website http://www.gslib.com (Support/Training section).

Additionally, collocated co-kriging is now available as an option in Make/edit surface. This option appears in the Distribution tab when the method is set to Kriging or Kriging with Gslib.

Parent topic: Interpolation Algorithms

Cos expansion (Algorithms) (Make Surface)

This is an interpolation technique which minimizes the curvature of the result and produces a smooth surface. The algorithm works very well for a few points, but can be slow with many points (>100). It can fail if some points are very close to each other.

An example of a surface created with the Cos expansion algorithm.

Parent topic: Interpolation Algorithms

Functional (Algorithms) (Make surface and Petrophysical modeling)

This method creates a three-dimensional function and then uses this function in the interpolation. The function weights the input points by distance and is recalculated for each interpolation. It will keep a trend going, and is therefore best suited to several input points (>20). The function is medium fast and sometimes fails for few points.

Functional algorithms can be used with Equal point weighting to extract a trend from input data.

The four options are described below and can be observed by experimenting with Equal point weighting.

Plane - creates a simple plane:
Z = ax + by + c

Bilinear - creates a bilinear plane (a hyperbolic paraboloid):
Z = axy + bx + cy + d

Simple Parabol - creates a symmetrical paraboloid:
Z = a(x^2 + y^2) + bx + cy + d

Parabol - creates a standard paraboloid which is not constrained to symmetry:
Z = ax^2 + by^2 + cx + dy + e

Parent topic: Interpolation Algorithms
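Fitting any of the four functionals reduces to a linear least-squares problem in the coefficients. A sketch for the Plane functional follows (the other three only change the columns of the design matrix); the sample points are made up for illustration.

```python
import numpy as np

def fit_plane(x, y, z):
    """Least-squares fit of Z = a*x + b*y + c (the 'Plane' functional).

    Build the design matrix for the chosen basis and solve by least
    squares; for Bilinear, add an x*y column, for the paraboloids add
    the squared terms.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef                              # (a, b, c)

x = np.array([0.0, 1.0, 0.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 3.0])
z = 2.0 * x - 1.0 * y + 5.0                  # points on an exact plane
a, b, c = fit_plane(x, y, z)
```

Because the sample points lie on an exact plane, the recovered coefficients match the generating ones.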

Sequential Gaussian Simulation (Algorithms) (Make surface and Petrophysical modeling)

Sequential Gaussian Simulation is a stochastic method of interpolation based on Kriging. It can honor input data, input distributions, variograms and trends.

During the simulation, local highs and lows are generated between input data locations in a way that honors the variogram. The positions of these highs and lows are determined by a random seed supplied by the user or the software. Because of this, multiple realizations are recommended to gain an understanding of the uncertainty.

In the absence of other information the input distribution will be given by the input data. In this case the result will not give values above

the maximum or below the minimum of the input data.

Parent topic: Interpolation Algorithms

Cell visitation order

Each of the nodes is visited in random order. At each node the data are kriged to determine the variance at that node, and a value is then picked from the input distribution to match that variance. As subsequent cells are visited, the previously defined cell values are also used in the kriging (not just the input data).

As a result of this method, the last cells to be defined are heavily constrained by the distribution of the already defined cells and the input distribution to be achieved. It is therefore important that cells are visited in a truly random order and that localized areas of cells are not visited together. For this reason there are two expert settings which give the user control over the visitation order.
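A minimal sketch of a random visitation order follows: shuffle all cell indices with a seeded generator so the path is reproducible but not spatially clustered. The seeding and the flat index space are assumptions for illustration; Petrel's expert settings control the actual visitation order.

```python
import numpy as np

# Shuffle all cell indices once; each visited cell would then be simulated
# and appended to the conditioning data for the cells that follow.
gen = np.random.default_rng(seed=42)
n_cells = 100
order = gen.permutation(n_cells)
```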

Standard distributions and Normal Score transformations

By default, the simulation follows a univariate distribution, which can either be derived from the input data or be user-defined; the resulting model will closely match this distribution.

This is achieved by transforming the input data with a normal score transformation prior to the simulation and back-transforming the result using the same transformation. The normal score transformation always results in a standard normal distribution (mean of 0 and standard deviation of 1).

The diagram below explains the process: each upscaled cell value in the property domain is assigned a value in the normal score space

by projecting its cumulative frequency/probability onto their respective cumulative distribution functions. The modeling is done on the

transformed data (in normal score space) and the normal score space values in all cells are then back-transformed to the property

domain.

If there is relatively little input data, the resulting input histogram may be blocky, with poor resolution and isolated values carrying too much weight. In that case it is advisable to edit the histogram before modeling (see Distribution Functions). The option to edit the target distribution is available through the Data Analysis module (see Normal Score Transformation).
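The forward normal score transformation described above can be sketched as follows: each value's empirical cumulative probability is projected onto a standard normal quantile. The plotting-position formula used here is an assumption for illustration; Petrel's exact CDF construction and tie handling are not public.

```python
from statistics import NormalDist
import numpy as np

def normal_score(values):
    """Forward normal score transform: map each value's empirical
    cumulative probability onto a standard normal quantile (sketch)."""
    n = len(values)
    ranks = np.argsort(np.argsort(values))        # rank of each value, 0..n-1
    p = (ranks + 0.5) / n                         # assumed plotting positions
    return np.array([NormalDist().inv_cdf(pi) for pi in p])

data = np.array([3.1, 0.2, 7.4, 1.8, 5.0])
scores = normal_score(data)
```

The transform is rank-preserving, and the symmetric plotting positions give scores centered on zero; the back-transform maps normal-score values through the same CDFs in reverse.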

Bivariate distributions (Theory)

When using a bivariate distribution, the user also supplies a secondary property covering the area to be simulated, and a cross plot of the two variables. The resulting model will honour the input distribution and will also follow the same general spatial pattern as the secondary variable.

Prior to modeling, the cross plot will be split into a number of bin intervals based on values of the secondary property. For each bin, a

separate distribution and CDF will be calculated. Both the forward and back transformation of the normal score value in each cell will

use a different CDF depending on which bin interval the corresponding secondary property value falls into.

The figure below shows the back transformation where porosity (the secondary variable) is in the range 0.1 - 0.15. As the sum of the

distributions for each bin is identical to the input distribution, the input distribution is still honored.


Closest (Algorithms) (Make surface and Petrophysical modeling)

This function uses the closest input point for the created surface.

An example of a surface created with the Closest algorithm.

An example of a property model created with the Closest algorithm.

Parent topic: Interpolation Algorithms

Artificial (Algorithms) (Make Surface)

Five different methods, with different interpolation settings, can be used to create an artificial surface:

Constant value: This method makes a surface with a constant Z-value for the whole surface.

Fractal: This method makes a fractal surface when the user enters the range for the Z-values (Z-max and Z-min) and the variables for the fractal method (Exponent and Hurst value). The exponent defines the number of rows in the new surface (a value between 2 and 10); the greater the exponent, the more detail the new surface will show. The Hurst factor must be set between 0.3 and 3.0; the new surface becomes smoother as the Hurst factor increases.

An example of an artificial surface created with the Fractal method.

Plane - Creates a plane surface with dip and azimuth as defined by the user. The user defines a point X, Y, Z from which the dip and azimuth initiate.

Areas - Option to give different Z-values inside and outside a polygon. The Z-value of the resulting surface inside the polygon can also be given by the polygon itself, which makes it possible to give different values inside each polygon. Use the Z-value selector to attach Z-values to a polygon. This method is used to generate trend surfaces.

Channels - Creates an isochore or a surface with channels from a set of polygons. The isochore can be utilized in the Make Zones process. To create an isochore, enter a positive value as the channel depth; to create a surface with channels, enter a negative value.

Parent topic: Interpolation Algorithms

Input data for examples

Parent topic: Interpolation Algorithms

Make Surface

Some examples of surfaces created in Petrel were generated using an artificial set of Well Tops as input data and making surfaces with different algorithms and settings.

Statistics for Well Tops.

Figure 1. Statistics for the created grid.

Petrophysical models

Some examples of 3D property models created in Petrel were made using an artificial set of wells with logs (porosity) as input and creating 3D property models with different algorithms and settings.

Figure 2. Statistics for the created 3D property model.

The wells with upscaled logs that are used.

Make Horizon Algorithms

The algorithm will first interpolate the input data locally, then use a global interpolation method to assign values to the nodes which received no values during the local interpolation. These two phases of interpolation can be defined separately.

Parent topic: Appendix 2 - Algorithms

Algorithm types

Parent topic: Make Horizon Algorithms

Local interpolation (Algorithms)

Under local interpolation the user can set the local influence radius of the point data and the local interpolation algorithms to be used.

The available options for local influence radius are:

half cell: This option is best for a low density of points.

1 cell: This option is best for a high density of points.

The available local interpolation algorithms are:

Moving average: This algorithm calculates the average of the points near the grid node, and works best for a low density of points or point data of poor quality.

Plane: This algorithm makes a linear plane which represents the data points near the grid node.

Parabolic: This algorithm makes a 3D parabolic surface to represent the points near the grid node, and works best for a high density of points and points of good quality.

Global extrapolation (Algorithms)

Under global extrapolation the global extrapolation algorithm can be set. There are three available options:

Minimum Curvature: Extrapolates the values which could not be evaluated in the local interpolation. It uses a smoothing operator

which will keep the surface smooth.

Full Tension: Extrapolates the values which could not be evaluated in the local interpolation. It uses a linear operator that will keep the

surface as flat as possible.

None: No global extrapolation will be performed. Only values from the local interpolation will be defined for the grid.

See Minimum Curvature (Algorithms) (Make Surface) for examples.

Velocity Modeling Algorithms

This topic deals with the algorithms used by the Make velocity model process to do time-depth relationship (TDR) estimation in Petrel. The TDR estimation assumes a linear velocity function, V = V0 + K*Z, V = V0 + K*(Z - Z0) or V = V0 + K*T, in each zone, and the method derives the parameters from well data. This form of estimation is required by the following velocity model settings:

Well TDR - Constant (V0, K)

Well TDR - Surface (V0, K)

Correction - Constant (V0)

Correction - Surface (V0)

The goal of this approach is to estimate V0 and K so that the calculated TDR in the velocity model fits the TDR in all wells. The larger goal is to make a velocity model that makes maximum use of well data while remaining mathematically robust and tolerant of realistic levels of noise in the data.
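For the simplest case, estimating V0 and K in V = V0 + K*Z amounts to a linear regression on well samples. The sketch below uses plain least squares on made-up data; the robust down-weighting and minimum-method options Petrel offers are not included.

```python
import numpy as np

def estimate_v0_k(z, v):
    """Least-squares estimate of V0 and K in V = V0 + K*Z from well
    samples (a sketch of the basic idea only)."""
    A = np.column_stack([np.ones_like(z), z])
    (v0, k), *_ = np.linalg.lstsq(A, v, rcond=None)
    return v0, k

# Synthetic well samples lying on an exact linear velocity function.
z = np.array([100.0, 500.0, 1000.0, 1500.0])
v = 1800.0 + 0.5 * z
v0, k = estimate_v0_k(z, v)
```

Because the synthetic samples are exact, the regression recovers V0 = 1800 and K = 0.5; with noisy or outlier-contaminated data, the robust weighting described below becomes important.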

Using the algorithms

Parent topic: Appendix 2 - Algorithms
Parent topic: Velocity Modeling Algorithms

Input data

In order to use this method you will need:

Constant values, surfaces or model-tied horizons in time (two-way time, TWT)

Well path(s) in depth

A time-depth relationship (TDR) for each well of interest. The input data for the time-depth relationship are specified on the Time tab of the well's Settings folder, and a 'General time' log (one-way time) is automatically generated with a log point for each piece of data.

The method works layer by layer, i.e. it depth-converts the first zone down to the first horizon, then the second zone, and so on. Well corrections are done after each layer is converted and are added to the velocity model itself.

Options in estimating V0 and K from well TDR

Use minimum depth or velocity method, and optimize for estimation of K. Choose whether the method should minimize the sum of a function of the depth residuals or of the velocity residuals. An option to optimize the estimation of K is available for linear velocity functions, which will honor the rate of increase of velocity.

Use well or data weights in a robust estimation. Outliers in a distribution, i.e. data points with large residuals, will influence the least-squares fitting if the errors are not normally distributed with constant variance. The robust estimation works by down-weighting wells and/or data that lie far from the fitted function.

Choose the best object to use for the base well intersection. Choose which object to use to define the base of the interval where it intersects the wells. Correction takes the input from the correction column in the Make velocity model settings (typically well tops). TDR intersection uses the calculated intersection based on the TDR. Correction or TDR intersection uses the former if it exists and the latter if not. In addition, Estimate and adjust to base can be selected, where the minimum method and the TDR are used to estimate where the base of the zone should be in depth; the method then ensures that the function goes through that point and uses it when estimating the unknown coefficients (V0 and K). The result should give a more reliable base in depth. See Figure 1, which partly describes the problem at hand.

Figure 1. Calculation of the zone thickness in a non-vertical well


Overall, this approach gives a better result when there are bad input data with outliers and a top horizon that does not match the input data (as seen above).

Constant k/V0 or surface?

V0 and k can each be calculated as constant values for the layer or as surfaces. If the model is specified as Well TDR - Surface, then individual values are calculated for each well and the resulting points are gridded to produce the surface. Different combinations of surface or constant for k and V0 affect what kind of input data is used in the process above.

Both V0 and K as surfaces: The K and V0 values are calculated in each well individually by the minimum method chosen (one cross plot for each well). The final surfaces should be analyzed carefully: if the surfaces are not smooth, then while the values at each well may be reasonable, between wells there may be combinations of V0 and k which are unrealistic.

V0 as surface and K as constant: Data from all the wells are analyzed together. For each k, V0 is found in each well and the sum of the total error over all wells is plotted. V0 is then recalculated in each well individually using the optimal k. Correction input may be used instead of TDR in this scenario.

V0 as constant and K as surface: The optimal least-depth-error K and V0 pairs are found in each well as in the first option. The individual Ks are gridded to generate the final K surface. The same well-point K values are then used as input in the further analysis to find the global constant V0 returning the overall minimum depth error for all wells. This workflow would not normally be used.

Both V0 and K as constants: k is first estimated as described for the second option. V0 is then recalculated in all wells using this fixed global k. The constant V0 value is found by iterating and plotting the total error in much the same way as is done for k.

Combining estimated values with well correction

If well correction data are supplied, they should be used instead of the intersection calculated from the TDR. However, since a perfect well tie precludes constant V0 and K values, you have a choice between:

Estimating V0 and K surfaces and tying the wells perfectly.

Estimating a constant K and a V0 surface to tie the wells. The constant will have to be recalculated, but it will remain constant.

Estimating V0 and K constants at the expense of a perfect well tie. V0 and K will be constants optimized based on the minimum method chosen when calculating V0 and K from the well TDR.

Wavelets

There are currently two processes in Petrel that handle generation of synthetic seismograms. In the older process, Synthetics, found under Stratigraphy in the Processes pane, four analytical wavelets are available. In the newer process, Seismic well tie, found under Geophysics in the Processes pane, three wavelets can be constructed using the Wavelet builder. The analytical wavelets that can be generated in the Synthetics process are described in more detail below.

Wavelet types

Parent topic: Appendix 2 - Algorithms

Wavelets used in the Synthetics process in Petrel include four analytical types:

Ricker

Butterworth

Ormsby

Klauder

Filtering is most commonly used to remove unwanted frequencies from data by a process called bandpass filtering. The lowest and highest frequencies are considered too noisy, so bandpass filtering is used to improve the displayed image.

The most important decision for a Petrel user is whether to use a minimum-phase or zero-phase wavelet. But be careful: the conventions for a zero-phase wavelet in the USA and in Europe are completely opposite in phase (see figure below).

In addition, some of the wavelets used by other software products include Hanning, Hamming and reverberation wavelets.

Included in the description for each type of wavelet is a diagram showing a typical waveform (time vs. amplitude), frequency spectrum (frequency vs. amplitude) and phase spectrum (phase vs. frequency).

Note: Petrel shows the phase spectrum in radians, whereas other software products may show this spectrum in degrees.

Parent topic: Wavelets

Ricker

A Ricker filter requires only one input, the peak frequency, as seen in the Petrel screenshot below. It is commonly used for synthetic modeling. No bandpass filter is involved, and the frequency and phase spectra are purely a function of the peak frequency input.

In the example below, a 40 Hz peak frequency was chosen.

Note: With a Ricker wavelet no side lobes are seen in the amplitude vs. time spectrum, whereas the Butterworth, Ormsby and Klauder filters all have associated side lobes.

Figure showing the spectrum for a Ricker wavelet.

The Ricker wavelet is defined as:

A(t) = (1 - 2(π·Vm·t)^2) · e^(-(π·Vm·t)^2)

where Vm is the peak frequency and TD = √6 / (π·Vm) is the period.
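The Ricker formula can be evaluated directly. A short sketch reproducing the 40 Hz example follows; the time axis and sample count are arbitrary choices for illustration.

```python
import numpy as np

def ricker(t, vm):
    """Ricker wavelet with peak frequency vm (Hz); zero phase, with
    amplitude 1 at t = 0."""
    a = (np.pi * vm * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# The 40 Hz example from the text, sampled over +/- 100 ms.
t = np.linspace(-0.1, 0.1, 2001)
w = ricker(t, vm=40.0)
```

The wavelet is symmetric about t = 0 with its single peak there, which is why no side lobes appear the way they do for the bandpass-constructed wavelets.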

Ormsby

The bandpass (see figure below) of an Ormsby filter is described by up to four corner frequencies, as in the figure below. In Petrel:

1. is the low cut frequency: all lower frequencies are filtered out and not used.

2. is the low pass frequency: from here up to the high pass frequency, 100% of the signal is passed.

3. is the high pass frequency: frequencies higher than this are linearly tapered up to point 4.

4. is the high cut frequency: any frequencies higher than this are filtered out and not used.

Figure showing the bandpass for an Ormsby Filter.

Figure showing the spectrum for an Ormsby wavelet.

The wavelet is designed from four points forming a trapezoid bandpass filter.

Low cut frequency

Low pass frequency

High pass frequency

High Cut frequency

Frequencies below the low cut or above the high cut are rejected. Between the low pass and high pass frequencies the filter is flat at an amplitude of 1.
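The trapezoid bandpass described by the four corner frequencies can be sketched as a simple amplitude function. This illustrates the shape only, not Petrel's implementation; the corner frequencies below are made-up values.

```python
import numpy as np

def ormsby_bandpass(f, f1, f2, f3, f4):
    """Trapezoid amplitude response for corners f1 < f2 < f3 < f4:
    0 below f1 and above f4, linear tapers over f1-f2 and f3-f4,
    flat at 1 between f2 and f3."""
    f = np.asarray(f, dtype=float)
    up = np.clip((f - f1) / (f2 - f1), 0.0, 1.0)     # low-side taper
    down = np.clip((f4 - f) / (f4 - f3), 0.0, 1.0)   # high-side taper
    return np.minimum(up, down)

# Sample the response at a few frequencies for corners 5, 15, 45, 65 Hz.
amp = ormsby_bandpass([2.0, 10.0, 30.0, 55.0, 70.0], 5, 15, 45, 65)
```

Sampling below the low cut and above the high cut gives 0, the mid-band gives 1, and the taper regions give intermediate values.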

Klauder

A Klauder wavelet is defined by two frequency cutoff values, a low and a high cutoff, which in the example below are set at 10 Hz and 70 Hz. The contributing frequencies are represented by a boxcar that assigns the same constant amplitude to all frequency components. Because of the sudden discontinuities in amplitude at the beginning and end of the band, the wavelet has some undesirable side-lobe oscillations [1].

Figure showing the spectrum for a Klauder wavelet.

This wavelet simulates the autocorrelation of a linear vibroseis sweep.

Where:

A = sweep signal amplitude

T = sweep signal duration

σf = sweep signal bandwidth

τ = processed record time

f = centre frequency

[1] Imhof, M.G., Seismic Modelling lecture notes, Lab 1, Virginia Tech.
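The statement that the Klauder wavelet is the autocorrelation of a linear sweep can be illustrated numerically. The sweep length and sample interval below are arbitrary assumptions for the sketch; the result is zero phase with its peak at zero lag.

```python
import numpy as np

def klauder(low, high, sweep_len, dt=0.001):
    """Approximate a Klauder wavelet as the normalized autocorrelation of
    a linear sweep between `low` and `high` Hz (illustrative sketch)."""
    t = np.arange(0.0, sweep_len, dt)
    # Linear sweep: instantaneous frequency rises from `low` to `high`.
    phase = 2.0 * np.pi * (low * t + 0.5 * (high - low) / sweep_len * t**2)
    sweep = np.sin(phase)
    ac = np.correlate(sweep, sweep, mode="full")
    return ac / ac.max()                     # peak of 1 at zero lag

# The 10-70 Hz example from the text, with an assumed 2 s sweep.
w = klauder(10.0, 70.0, sweep_len=2.0)
```

The side-lobe oscillations mentioned above show up as the ripples on either side of the central peak, a consequence of the boxcar-shaped frequency band.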

Butterworth

The Butterworth bandpass (see figure below) consists of two cutoff frequencies taken at 3 dB down from maximum power, i.e. approximately half power (about 70% on the amplitude scale in the figure below). In the example below they are at 10 Hz and 50 Hz. The Butterworth filter also requires two slopes. The slopes are defined in decibels/octave, where an octave is a doubling of the frequency (e.g. 10 to 20 Hz). A decibel is a unit of measure for acoustics defined by the formula:

dB = 20·log10(X/Y)

where X/Y is the ratio of two amplitudes.

A ratio X/Y of 2 gives approximately 6 dB, and a ratio of 10 translates to 20 dB.
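The decibel formula and the two worked ratios can be checked directly:

```python
import math

def db(x, y):
    """Decibel value of an amplitude ratio: dB = 20 * log10(X / Y)."""
    return 20.0 * math.log10(x / y)

double_amp = db(2.0, 1.0)     # a doubling of amplitude, ~6 dB
tenfold = db(10.0, 1.0)       # a tenfold amplitude ratio, 20 dB
```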

Figure showing the bandpass of a Butterworth filter.

Figure showing the spectrum for a Butterworth wavelet.

The wavelet's amplitude response is defined as (a standard Butterworth bandpass form consistent with the parameters below):

A(f) = 1 / √(1 + (lc/f)^(2·lo)) · 1 / √(1 + (f/hc)^(2·ho))

Where:

A = amplitude

f = frequency

lc = low cut frequency

hc = high cut frequency

lo = low order

ho = high order
