Principles of Seismic Data Processing

Mahmoud Mostafa Badawy
Assistant Lecturer of Geophysics, Geology Department, Faculty of Science, Alexandria University, Egypt


Contents:

Chapter 1: Seismic Generation
- Introduction
- Elasticity Term
- Wave Definition
- Wave Types
- Geometry of Wave Ray Paths (Theories)
- Acoustic Impedance and Reflection Coefficient
- Velocity
- Resolution
- Problems

Chapter 2: Seismic Data Processing
- Processing Concept
- Processing Main Steps: Reformatting and Demultiplexing, Geometry Definition, Field Static Correction, Amplitude Recovery, Noise Attenuation (De-Noise), Deconvolution, CMP Gather, NMO Correction, Demultiple, Migration, CMP Stack
- Problems

Chapter 3: Software
- Seismic Processing Using Vista Software


Chapter 1: Seismic Generation

What Makes A Wiggle?

Seismic reflection profiling is an echo-sounding technique. A controlled sound pulse is issued into the Earth and the recording system listens for a fixed time for energy reflected back from interfaces within the Earth. The interface is often a geological boundary, for example the change from sandstone to limestone.

Once the travel time to the reflectors and the velocity of propagation are known, the geometry of the reflecting interfaces can be reconstructed and interpreted in terms of geological structure in depth. The principal purpose of seismic surveying is to help understand geological structure and stratigraphy at depth, and in the oil industry it is ultimately used to reduce the risk of drilling dry wells.

What Is A Reflection?

The following figure shows a simple earth model and the resulting seismic section, used to illustrate the basic concepts of the method. The terms source, receiver and reflecting interface are introduced. Sound energy travels through different media (rocks) at different velocities and is reflected at interfaces where the media velocity and/or density changes. The amplitude and polarity of the reflection are proportional to the change in acoustic impedance (the product of velocity and density) across an interface. The arrival of energy at the receiver is termed a seismic event. A seismic trace records the events and is conventionally plotted below the receiver, with the time (or depth) axis pointing downwards.

A wave is a disturbance which travels in a medium, or without one.


Wave Propagation

For small deformations rocks are elastic, that is, they return to their original shape once the small stress applied to deform them is removed. Seismic waves are elastic waves and are the "disturbances" which propagate through the rocks.

The most commonly used form of seismic wave is the P (primary) wave, which travels as a series of compressions and rarefactions through the earth, the particle motion being in the direction of wave travel. The propagation of P-waves can be represented as a series of wave fronts (lines of equal phase) which describe circles for a point source in a homogeneous medium (similar to when a stone is dropped vertically onto a calm water surface). As the wave front expands, the energy is spread over a wider area and the amplitude decays with distance from the source.

This decay is called spherical or geometric divergence and is usually compensated for in seismic processing. Rays are normal to the wave fronts and diagrammatically indicate the direction of wave propagation. Usually the shortest ray path is the direction of interest and is chosen for clarity. Secondary or S-waves travel at up to about 70% of the velocity of P-waves and do not travel through fluids.

The particle motion for an S-wave is perpendicular to its direction of propagation (shear stresses are introduced) and the motion is usually resolved into a horizontal component (SH waves) and a vertical component (SV waves).


Snell's Law

Snell's law is the mathematical description of refraction: the change in the direction of a wave front as it travels from one medium to another with a change in velocity, accompanied by partial conversion and reflection of a P-wave to an S-wave at the interface between the two media.

Snell's law, one of two laws describing refraction, was formulated in the context of light waves but is applicable to seismic waves. It is named for Willebrord Snell (1580 to 1626), a Dutch mathematician. Snell's law can be written as:
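Its standard form is:

sin(θ1) / V1 = sin(θ2) / V2

where θ1 and θ2 are the angles of incidence and refraction measured from the normal to the interface, and V1 and V2 are the velocities of the two media.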

Reflection: The energy or wave from a seismic source which has been reflected from an acoustic impedance contrast (reflector) or a series of contrasts within the earth.

Refraction: The change in direction of a seismic ray upon passing into a medium with a different velocity. The mathematics of this is defined by Snell's law.

Reflection Coefficient:

The ratio of the amplitude of the reflected wave to that of the incident wave, or how much energy is reflected. If the wave has normal incidence, its reflection coefficient can be expressed as:
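The standard normal-incidence expression, in terms of the acoustic impedances Z1 = ρ1V1 of the upper medium and Z2 = ρ2V2 of the lower medium, is:

R = (Z2 - Z1) / (Z2 + Z1)

As an illustrative (hypothetical) example, for a shale with ρ1 = 2.4 g/cc and V1 = 2400 m/s over a limestone with ρ2 = 2.7 g/cc and V2 = 5000 m/s, Z1 = 5760 and Z2 = 13500 (g/cc x m/s), giving R ≈ +0.40: a strong positive reflection.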


If the acoustic impedance (A.I.) of the lower formation is higher than that of the upper one, the reflection polarity will be positive, and vice versa.

If the difference in A.I. between the two formations is large, the reflection magnitude (amplitude) will be high.

Velocity Analysis:

- The determination of seismic velocity is the key to the seismic method.

- Seismic velocities are calculated in order to process the data properly: successful stacking, time migration and depth migration all require proper velocity inputs.

- Velocity estimation is also needed to convert a time section into a depth section.


Kinds of Velocity:

• Average velocity: the velocity that represents the depth to a bed (from the surface to the layer). Average velocity is commonly calculated by assuming a vertical path, parallel layers and straight ray paths, conditions that are quite idealized compared to those actually found in the Earth.

• Pseudo average velocity: obtained when we have time from seismic and depth from a well.

• True average velocity: obtained when we measure velocity by VSP, sonic log or coring.

• Interval velocity: the velocity, typically P-wave velocity, of a specific layer or layers of rock.

• Pseudo interval velocity: obtained when we have time from seismic and depth from a well.

• True interval velocity: obtained when we measure velocity by VSP or check shot.

• Stacking velocity: the distance-time relationship determined from analysis of normal moveout (NMO) measurements from common depth point gathers of seismic data. The stacking velocity is used to correct the arrival times of events in the traces for their varying offsets prior to summing, or stacking, the traces to improve the signal-to-noise ratio of the data.

• RMS velocity: the root-mean-square velocity; approximately equivalent to the stacking velocity (the two commonly differ by up to about 10%).

• Instantaneous velocity: the most accurate velocity (it comes from sonic tools) and can be measured every foot.

• Migration velocity: used to migrate events to their correct positions (usually higher or lower than the stacking velocity by about 5-15%).
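As a numerical sketch of how some of these velocities relate for a flat-layered earth, the following computes RMS and average velocities from hypothetical interval velocities and interval times (Python/NumPy; all values are illustrative):

```python
import numpy as np

# Hypothetical flat-layer model: interval velocities and two-way interval times.
v_int = np.array([1800.0, 2400.0, 3200.0])   # m/s
dt_int = np.array([0.40, 0.60, 0.50])        # s (two-way time spent in each layer)

t0 = np.cumsum(dt_int)                               # two-way time to each layer base
v_rms = np.sqrt(np.cumsum(v_int**2 * dt_int) / t0)   # RMS velocity down to each base
v_avg = np.cumsum(v_int * dt_int) / t0               # average velocity for comparison

for t, vr, va in zip(t0, v_rms, v_avg):
    print(f"t0 = {t:.2f} s   Vrms = {vr:.0f} m/s   Vavg = {va:.0f} m/s")
```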


Tape Formats:

Several tape formats defined by the SEG are currently in use. These standards are often treated quite liberally, especially where 3D data are concerned. Most contractors also process data using their own internal formats, which are generally more efficient than the SEG standards.

The two commonest formats are SEG-D (for field data) and SEG-Y (for final or intermediate products). The accompanying figure shows the typical way in which a seismic trace is stored on tape in SEG-Y format.

The use of headers is particularly important, since these headers are used in seismic processing to manipulate the seismic data. Older multiplexed formats (data acquired in channel order), such as SEG-B, would typically be demultiplexed (into shot order) and transcribed to SEG-Y before processing.

In SEG-Y format a 3200 byte EBCDIC (Extended Binary Coded Decimal Interchange Code) "text" header, arranged as forty 80-character card images, is followed by a 400 byte binary header which contains general information about the data, such as the number of samples per trace. This is followed by the 240 byte trace header (which contains important information related to the trace, such as shot point number and trace number) and the trace data itself, stored as 32-bit (4 byte) IBM floating-point numbers.

The trace, or a series of traces such as a shot gather, will be terminated by an EOF (End of File) marker. The tape is terminated by an EOM (End of Media) marker. Several lines may be concatenated on tape, separated by two EOF markers (double end of file). Separate lines should have their own EBCDIC headers, although these may be stripped out (particularly for 3D archives) for efficiency. Each trace must have its own 240 byte trace header. Note that there are considerable variations in the details of the SEG-Y format.
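A minimal sketch of reading these headers directly, assuming the standard layout just described (3200-byte EBCDIC text header, 400-byte binary header, 240-byte trace headers) and the commonly used byte positions; real files often deviate from this, so treat it as illustrative only (Python):

```python
import struct

def read_segy_summary(path):
    """Read a few commonly used SEG-Y header fields (illustrative only)."""
    with open(path, "rb") as f:
        text_header = f.read(3200).decode("cp500", errors="replace")  # EBCDIC text
        binary_header = f.read(400)

        # Big-endian 2-byte integers at their conventional positions:
        sample_interval_us = struct.unpack(">h", binary_header[16:18])[0]
        samples_per_trace = struct.unpack(">h", binary_header[20:22])[0]
        data_format_code = struct.unpack(">h", binary_header[24:26])[0]

        # First 240-byte trace header; the trace sequence number occupies bytes 1-4.
        trace_header = f.read(240)
        trace_sequence = struct.unpack(">i", trace_header[0:4])[0]

    return {
        "text_header_first_card": text_header[:80],
        "sample_interval_us": sample_interval_us,
        "samples_per_trace": samples_per_trace,
        "data_format_code": data_format_code,
        "first_trace_sequence": trace_sequence,
    }

# Example call (hypothetical file name):
# print(read_segy_summary("line_001.sgy"))
```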


Convolution:

Convolution is a mathematical way of combining two signals to produce a third, modified signal. The signal we record responds well to being treated as a series of signals superimposed upon each other; that is, seismic signals behave convolutionally. The process of DECONVOLUTION is the reversal of the convolution process.

Convolution in the time domain is represented in the frequency domain by multiplication of the amplitude spectra and addition of the phase spectra.
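A quick numerical check of that statement, with a hypothetical wavelet and reflectivity series (Python/NumPy): convolving in time gives the same trace as multiplying the (complex) spectra and transforming back.

```python
import numpy as np

np.random.seed(0)
wavelet = np.array([1.0, -0.5, 0.25])          # hypothetical short wavelet
reflectivity = np.random.randn(64)             # hypothetical reflectivity series

trace_time = np.convolve(reflectivity, wavelet)            # time-domain convolution

n = len(reflectivity) + len(wavelet) - 1                   # length of the full convolution
spectrum = np.fft.fft(reflectivity, n) * np.fft.fft(wavelet, n)   # multiply spectra
trace_freq = np.fft.ifft(spectrum).real

print(np.allclose(trace_time, trace_freq))                 # True
```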


F-K Transform:

A two-dimensional Fourier transform over time and space is called an F-K (or K-F) transform, where F is the frequency (Fourier transform over time) and K refers to wavenumber (Fourier transform over space).

The space dimension is controlled by the trace spacing and (just like when sampling a time series) must be sampled according to the Nyquist criterion to avoid spatial aliasing. Temporal aliasing was previously discussed. In the F-K domain there is a two-dimensional amplitude and phase spectrum, but usually only the former is displayed for clarity, with colour intensity used to show the amplitudes of the data at different frequency and wavenumber components. Several noise types, such as ground roll or seismic interference, may be more readily separated in the F-K amplitude domain than in the time-space domain and therefore will be easier to mute before the inverse transform is applied.
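A minimal sketch of computing the F-K spectrum of a record with a 2D FFT; the trace spacing, sample interval and data are hypothetical stand-ins (Python/NumPy):

```python
import numpy as np

n_t, n_x = 1000, 96            # time samples per trace, number of traces
dt, dx = 0.004, 25.0           # sample interval (s) and trace spacing (m)
gather = np.random.randn(n_t, n_x)   # stand-in for a shot record (time x space)

fk = np.fft.fftshift(np.fft.fft2(gather))                 # 2D spectrum (amplitude and phase)
freqs = np.fft.fftshift(np.fft.fftfreq(n_t, d=dt))        # Hz; Nyquist = 1 / (2 dt) = 125 Hz
wavenumbers = np.fft.fftshift(np.fft.fftfreq(n_x, d=dx))  # cycles/m; Nyquist = 1 / (2 dx)

amplitude_spectrum = np.abs(fk)   # what is normally displayed in an F-K plot
```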


Chapter 2: Seismic Data Processing

Introduction:

The purpose of seismic processing is to manipulate the acquired data into an image that can be used to infer the sub-surface structure. Only minimal processing would be required if we had a perfect acquisition system.

Processing consists of the application of a series of computer routines to the acquired data, guided by the hand of the processing geophysicist. There is no single "correct" processing sequence for a given volume of data.

At several stages, judgments or interpretations have to be made which are often subjective and rely on the processor's experience or bias. The interpreter should be involved at all stages to check that processing decisions do not radically alter the interpretability of the results in a detrimental manner.

Processing routines generally fall into one of the following categories:

- enhancing signal at the expense of noise
- providing velocity information
- collapsing diffractions and placing dipping events in their true subsurface locations (migration)
- increasing resolution (wavelet processing)


Contractors:

Today most processing is carried out by contractors, who are able to perform most jobs quickly and cheaply with specialized staff, software and computer hardware. There are currently five main contractors who are likely to have an office or an affiliation almost anywhere in the world where oil exploration is taking place. In addition there are many smaller localized contractors, principally in London and Houston, and also some specialized contractors who concentrate on particular processing areas.

[Table: summary of the main seismic processing contractors]


A Processing Flow:

A processing flow is a collection of processing routines applied to a data volume. The processor will typically construct several jobs which string certain processing routines together in a sequential manner.

Most processing routines accept input data, apply a process to it and produce output data which is saved to disk or tape before passing through to the next processing stage. Several of the stages will be strongly interdependent, and each of the processing routines will require several parameters, some of which may be defaulted.

Some of the parameters will be defined, for example, by the acquisition geometry, and some must be determined for the particular data being processed by the process of testing.

Factors which Affect Amplitudes


New Data:

- Tape containing recorded seismic data (trace-sequential or multiplexed)
- Observer logs/reports
- Field geophysicist logs/reports and listings
- Navigation/survey data
- Field Q.C. displays
- Contractual requirements

Simple Processing Sequence Flow:

- Reformat
- Geometry definition
- Field static corrections (land, shallow water, transition zone)
- Amplitude recovery
- Noise attenuation (de-noise)
- Deconvolution
- CMP gather
- NMO correction
- De-multiple (marine)
- Migration
- CMP stack


Spherical Divergence:

Because the energy propagates as expanding wave fronts, the energy density decays with time as the wave fronts grow, and we have to compensate for this decay. The surface area of a sphere is proportional to the square of its radius, so the energy decay due to spherical divergence is proportional to 1/r².
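A minimal sketch of a first-order spherical-divergence correction under the simplifying assumption of a constant velocity: amplitude decays roughly as 1/r and r = v·t, so each sample is scaled by v·t (all values hypothetical; Python/NumPy):

```python
import numpy as np

dt = 0.004                      # sample interval (s)
v = 2000.0                      # assumed constant velocity (m/s)
trace = np.random.randn(1500)   # stand-in for a recorded trace

t = np.arange(len(trace)) * dt
gain = v * t                    # distance travelled ~ v * t, so gain ~ r
gain[0] = gain[1]               # avoid multiplying the first sample by zero
corrected = trace * gain        # compensates the ~1/r amplitude decay
```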

[Figure: The effect of spherical divergence]


Automatic Gain Control (AGC):

The dynamic range of the recorded signal can vary from microvolts to volts. A fixed gain will either cause clipping of the large values or provide insufficient amplification for very low values. AGC provides higher gain for small data values and lower gain for large data values. The controller sets the amplification for each sample and passes the gain information to the amplifier and the formatter. Take care: AGC should usually be used for display only, because it destroys the relative amplitude information. Windows are taken at shallow and deep times, and the gain amplifies the small amplitudes in the deeper parts of the data.

AGC - Automatic gain control: an amplitude gain procedure applied to the trace that equalizes the trace energy over a contiguous sequence of specified time windows. After application of AGC, attenuation and geometrical-spreading effects are roughly corrected for and reflection amplitudes are normalized to be about the same value.
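A minimal sliding-window AGC sketch along these lines: each sample is divided by the mean absolute amplitude in a window centred on it, so weak zones are boosted relative to strong ones (window length and data are hypothetical; Python/NumPy):

```python
import numpy as np

def agc(trace, window_samples=125, eps=1e-10):
    """Divide each sample by the running mean absolute amplitude around it."""
    envelope = np.abs(trace)
    kernel = np.ones(window_samples) / window_samples
    running_mean = np.convolve(envelope, kernel, mode="same")
    return trace / (running_mean + eps)

dt = 0.004
n = 1500
trace = np.random.randn(n) * np.exp(-0.5 * np.arange(n) * dt)  # decaying trace
balanced = agc(trace, window_samples=int(0.5 / dt))             # 500 ms window
```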

[Figure: Data before and after applying AGC]


Swell Noise (Marine Waves):

A type of marine noise resulting from the interaction ("scratching") of the streamer cable with the water during the survey. Its characteristics: low frequency and high amplitude. Its shape: vertical lines along the data, or along only parts of it. How can we attenuate it? Since it has low frequency and high amplitude, we can use a filter related to frequency or amplitude or both, i.e. a band-pass filter or an amplitude/frequency filter.


Band Pass Filter:

This filter works on frequency. It may be used to cut low frequencies only (a low-cut/high-pass filter), to cut high frequencies only (a high-cut/low-pass filter), or to pass a chosen range of frequencies (a "four-corner" band-pass filter). Here we use a low-cut (high-pass) filter. Why? Because swell noise has low frequency. A band-pass filter can also work with a slope: we specify the filter slope and it cuts according to that slope; a low (gentle) slope has a small effect on amplitude and a high (steep) slope has a strong effect on amplitude.
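A minimal sketch of such a filter using SciPy's Butterworth design; the corner frequencies, filter order and sample rate are hypothetical choices, and filtfilt is used so the result is zero-phase (Python):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                         # sampling frequency (Hz), i.e. 4 ms sampling
trace = np.random.randn(1500)      # stand-in for a noisy trace

# 4th-order Butterworth band-pass, 8-80 Hz: the 8 Hz low cut removes most of
# the swell-noise band while keeping typical reflection frequencies.
b, a = butter(4, [8.0, 80.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, trace)   # forward-backward filtering, zero phase
```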


Amplitude/Frequency Filter:

In this filter we divide the data into frequency bands (e.g. 0-5, 5-10, 10-20 Hz, ...), then we define time windows in the data to work in and the number of traces in each window, and we choose a cut-off value. The filter calculates the average amplitude in each window and compares it with the amplitudes in each trace. If the threshold (the cut-off times the average) equals or exceeds the amplitudes in the trace, the trace is left alone; if it is smaller, the filter reduces those amplitudes down to the threshold. The smaller the cut-off value, the harsher the filter; the higher the cut-off value, the milder the filter. Cut-off values are tested until the best value is found; it may be 3, 2 or even 1.5.
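A rough sketch of that idea, assuming hypothetical band edges, window length and cut-off: split the data into frequency bands, and in each band and time window clip amplitudes that exceed the cut-off times the window average, then recombine the bands (Python/SciPy):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amp_freq_denoise(gather, fs, bands=((4, 8), (8, 16), (16, 32)),
                     window=200, cutoff=2.0):
    """Clip anomalously strong amplitudes per frequency band and time window."""
    out = np.zeros_like(gather)
    n_t = gather.shape[0]
    for f_lo, f_hi in bands:
        b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, gather, axis=0)
        for i0 in range(0, n_t, window):
            w = band[i0:i0 + window, :]                 # view into this band/window
            threshold = cutoff * np.mean(np.abs(w))     # cut-off x window average
            amp = np.abs(w) + 1e-12
            w *= np.minimum(1.0, threshold / amp)       # attenuate, never boost
        out += band                                     # recombine the bands
    return out

fs = 250.0
gather = np.random.randn(1500, 96)                      # stand-in shot record
cleaned = amp_freq_denoise(gather, fs)
```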


The filter does not work properly at the near offsets because of the high amplitude caused by the source. How can we overcome this problem? We can reverse the second shot gather and place it beside the first one, so that the filter runs from the far offsets to the near offsets as if the pair were a single shot in a split-spread array, and do that with all shot gathers. In land data we do not find swell noise, but we do find ground roll.

Direct Waves: They are source-generated, due to the direct travel of the waves from the source to the receiver, and they are dominant at near offsets. They can be attenuated by normal moveout, muting and stacking.

Refractions: They are generated by critically refracted waves from the near-surface layers. They are dominant at the far offsets. They can be attenuated by NMO, muting and stacking.

Ground Roll:

It is source-generated noise coming from the propagation of waves through the particles of the near-surface layers without net movement. It is dominant in the upper part of the data and interferes with the direct waves and refracted waves. Its characteristics: low velocity, low frequency and high amplitude. It can be attenuated by an F-K filter or a Tau-p filter.


F-K filters: The filter is applied in the frequency domain, not the time domain; we use a forward Fourier transform to move from the time domain to the frequency domain. The display is a relation between frequency (f) and wavenumber (k). It shows two types of events, linear events and parabolic events; the linear events are those that have low velocities and frequencies. We pick these linear events for the filter, it calculates the velocity of that line and then removes all events with the same and lower velocities; this filter is called a cut-off velocity filter. Do not forget that when you transform the data back to the time domain you will find an increase in the data's time length and frequency content introduced by the Fourier transform, so you have to blank this extra time. Note that F-K is run on gathers, while FX and FY filters are run on stacked data, but they remove the dip of both noise and data, so they are rarely used.


[Figure: Seismic data in the X-T domain and in the F-K domain; data before and after applying the F-K filter]


This is done if the data are not aliased; so what if they are aliased? We use a Tau-p filter or we make infill.

Tau-p filter: It is a velocity-dependent filter. Tau is the intercept with the time axis and p = 1/V (the slowness). First we transform the data from the time-offset domain into the tau-p domain. The filter divides each event into more than one segment, then constructs a tangent to each segment that intersects the time axis at a different tau, and calculates the slope of every tangent. It then relates one tau to the different slopes of the tangents that intersect the time axis at that tau (it makes a fan), given the maximum and minimum slopes. The higher p is, the lower the velocity, and the lower p is, the higher the velocity. The result is a graph of tau against p, with regions of regular events and others with irregular ones, and we pick the interval we are concerned with. This is the Tau-p transform in linear mode.
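A minimal sketch of the linear tau-p (slant-stack) transform that this describes: for each slowness p and intercept time tau, amplitudes are summed along the line t = tau + p·x across the gather (offsets, slowness range and data are hypothetical; Python/NumPy):

```python
import numpy as np

def linear_tau_p(gather, offsets, dt, p_values):
    """Slant-stack: sum amplitudes along t = tau + p * x for each slowness p."""
    n_t, n_x = gather.shape
    t = np.arange(n_t) * dt
    taup = np.zeros((n_t, len(p_values)))
    for j, p in enumerate(p_values):
        for i, x in enumerate(offsets):
            shifted_t = t + p * x                      # times along the slanted line
            taup[:, j] += np.interp(shifted_t, t, gather[:, i],
                                    left=0.0, right=0.0)
    return taup

dt = 0.004
offsets = np.arange(0.0, 2400.0, 50.0)                 # m
gather = np.random.randn(1000, len(offsets))           # stand-in for a shot gather
p_values = np.linspace(0.0, 1.0 / 1500.0, 100)         # s/m, down to 1500 m/s
taup = linear_tau_p(gather, offsets, dt, p_values)
```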


But if there is residual ground roll, we should make infill.

Infill: Infill is a technique used to increase the number of traces in order to avoid aliasing.

We insert a trace between every two traces (or more, according to the Nyquist criterion) by summing the two traces and dividing by two to give the new trace, and we re-order the traces in a manner that lets us return to our original data; we can also flag the new (or old) traces so they are known. After that we apply the F-K filter normally. There may still be residual ground roll scattered (incoherently) through the data, and it can be attenuated by the amplitude/frequency filter.

Zero Phasing:

It is a process that can be applied among the first steps or the last, but it is preferable to apply it first.

- Zero phase: the maximum amplitude is at zero time. Zero phase is a mathematical idealization, but we can get close to it using vibroseis.
- Minimum phase: the maximum amplitude is at minimum time; we can obtain it with dynamite.
- Maximum phase: the maximum amplitude is at maximum time.
- Mixed phase: a phase in between minimum phase and maximum phase; we can get it with an air gun.


Zero phasing is a process by which we modify the positions of peaks and troughs so that they sit at the reflector position instead of above or below its real position, to facilitate the interpretation process.

To make zero phasing we should perform:

1- Source modelling
2- Cross-correlation

For an air gun we get the source signature from the contractor. Then, using software, we determine the distance between the maximum amplitude and zero time and shift the signature toward zero time by that distance. We can also attenuate the bubble effect by designing the wavelet before shifting. By this step we have designed a filter which we multiply with the source signature to ensure that the result is a zero-phase signature. We then apply this filter to the seismic data using cross-correlation.

Source modelling can be done for dynamite using the charge, the hole depth and the recorder model. We also determine the polarity of the traces, whether normal or reversed. For vibroseis we do not do that step.


Reformat:

This will usually follow an industry-standard convention, e.g. SEG-D or SEG-Y for magnetic media. (The data format defines how the data are arranged and stored.)

Geometry Definition:

- Important values for data processing are the source-receiver OFFSETS!
- Where are the shots and receivers located?
- Geometry is the area of mathematics relating to the study of space and the relationships between points, lines, curves and surfaces.

Geometry in seismic means defining where everything is located, using the following:

- Coordinates of shots and receivers
- Relationship between 'file' numbers and shot locations
- Relationship between shots and receivers
- Missing shots and/or receivers
- Attributes for shots/receivers, e.g. elevations, depths, etc.

(We need to supply the X, Y and Z co-ordinates of every shot and geophone station for the line. Luckily, in many cases, we can rely on a regular shooting pattern to simplify the input.)

Geometry may be simple (for example, regular 2D marine data) or extremely complex (a land 3D survey shot over sand dunes).


Both land and marine data are acquired using multiple sources and geophone arrays, to facilitate the acquisition of the large volumes of data necessary. The geometry for land data can be extremely complex, essentially shooting multiple crooked lines at once!

If we know the positions of the source and receiver, then we can calculate the position of a common mid point.

Field Static Corrections:

What if the surface elevation changes?

i.e. remove the differences in travel time caused by shots and receivers being at different elevations.

Static corrections are time shifts applied to seismic data to compensate for:

- Variations in elevations on land
- Variations in source and receiver depths (marine gun/cable, land source)
- Tidal effects (marine)
- Variations in velocity/thickness of near-surface layers
- Changes in data reference times


Static Assumptions:

1- The ray-paths through the near-surface layering are vertical (not quite true)

2- Weathering medium is isotropic

(SA + AG + GD + DR = SB + BG + GC + CR)

- The ray paths through the near-surface layering are vertical (not quite true): this means that the deeper the reflector, the better the assumption. It also means that shallow data are likely to suffer if the weathering layer is thick.

- The assumption of vertical ray paths is not strictly true, and a complete solution of the problem requires consideration of other factors, such as the interplay of dynamic and static corrections with lateral as well as vertical velocity variations.

- As far as velocity computations are concerned, we assume that the medium of the weathering zone is "isotropic" and therefore that the horizontal velocities we calculate are also applicable vertically.

- In reality, neither assumption is physically true, but we are forced to make them in order to compute surface-consistent statics.


Main Types of Static Corrections:

FIELD (Initial) STATICS:

• The main static correction, based on field measurements/derived from data acquired in the field, e.g. up-hole survey, refraction data.

RESIDUAL STATICS:

• Derived during processing by using reflection data to ‘fine-tune’ the field statics.

There are two main types of static calculation:

By 'field' we mean the initial statics applied, historically calculated by the field crew. They are sometimes calculated in the office - more on that when we look at refraction statics. Refraction statics are also classified as 'field statics'.

Residual static computations are made after the field statics have been computed and applied to the data.


Amplitude Recovery:

Where’s all the source energy gone?

- The amplitude of a wave may be defined as 'the maximum departure of the wave from the average value'.

- Basically, the size and magnitude of a waveform is called its amplitude.


Noise Attenuation (De-Noise):

De-noise: the set of processes carried out on the raw seismic data to increase the signal-to-noise ratio.

Ambient noise (random): noise which does not exhibit correlation from trace to trace; it is generally not source-generated.

Coherent noise: noise that is predictable from trace to trace across a group of traces, i.e. it has a phase relationship between adjacent traces; it is commonly source-generated.

Types of Noise:

Random Noise (Ambient Noise) (Natural):

- Noise generated by air waves
- Wind motion
- Environmental noise
- Loose coupling of geophones with the ground

Coherent Noise (Artificial):

- Direct arrivals
- Ground roll
- Air waves
- Shallow refractions
- Reflected refractions
- Ghosts
- Multiples
- Diffractions


Ambient Noise (Random) (Natural): For ambient noise we can use editing and muting to attenuate noise such as high-tension power lines, pumping, vehicles and so on. Editing is done by killing and removing traces. Muting is used to remove or cut unwanted signal, such as surface waves and distortions caused by the dynamic (NMO) correction.

Muting:


Trace Editing:


Deconvolution:

How to improve the vertical resolution?

Q: Relationship between frequency and attenuation?

A: High frequencies attenuated faster

Q: What is decon?

A: An inverse filtering process

Deconvolution is a processing tool which has been used for:

- Wavelet shaping
- Multiple removal

Convolution:

Convolution is the change of a wave shape as a result of passing it through a linear filter.

When a signal passes through any filter (such as the earth), it is replicated many times, with different amplitudes and time delays, by the filter.

Assuming that the signal itself does not alter with the passing of time (i.e. it is time-shift invariant), the filter produces a linear superposition of these copies of the signal. The mathematical description of this process is known as convolution.

Mathematically, the correlation process is similar to the convolution process except for the 'direction' of the operator. In the case of convolution, the direction of the operator does not matter: any two waveforms, when convolved with each other, give the same output waveform. In the case of correlation, however, we get one result by correlating waveform A with waveform B and a different result by correlating waveform B with waveform A; the result depends on which waveform (A or B) we make the operator during cross-correlation.
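A quick numerical check of that statement with two hypothetical short signals (Python/NumPy): convolution is commutative, cross-correlation is not.

```python
import numpy as np

a = np.array([1.0, 0.5, -0.25, 0.0])
b = np.array([0.0, 1.0, -1.0])

print(np.allclose(np.convolve(a, b), np.convolve(b, a)))   # True: order does not matter
print(np.correlate(a, b, mode="full"))                      # correlating a with b ...
print(np.correlate(b, a, mode="full"))                      # ... differs from b with a
```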


Deconvolution:

The objective of deconvolution:

In theory:

• Reveal the subsurface reflectors by removing the effects of the system wavelet, including ghosts and short-period multiples.

In practice:

• Achieve a better estimate of the geological layers.
• Output a trace that represents the reflectivity function in terms of amplitude, polarity and depth/time.

Deconvolution methods generally fall into one of two categories:

Deterministic deconvolution: part of the seismic system is known. For example, where the source wavelet is accurately known we can do source-signature deconvolution.

Statistical deconvolution: no information is available about any of the components of the convolutional model. A statistical approach is needed to derive information about a wavelet (either the 'source', the 'system' or a combined wavelet).
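A minimal sketch of the statistical route: a spiking (Wiener) deconvolution operator derived from the trace's own autocorrelation. The operator length and pre-whitening percentage are hypothetical choices (Python/SciPy):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, n_op=80, prewhitening_pct=0.1):
    """Design and apply a spiking deconvolution operator from the trace itself."""
    # Autocorrelation lags 0 .. n_op-1
    full_ac = np.correlate(trace, trace, mode="full")
    ac = full_ac[len(trace) - 1 : len(trace) - 1 + n_op].copy()
    ac[0] *= 1.0 + prewhitening_pct / 100.0        # pre-whitening stabilises the solve

    desired = np.zeros(n_op)
    desired[0] = 1.0                               # desired output: a spike at zero lag

    operator = solve_toeplitz(ac, desired)         # solve the Toeplitz normal equations
    return np.convolve(trace, operator)[: len(trace)]

trace = np.random.randn(1000)                      # stand-in for a recorded trace
deconvolved = spiking_decon(trace)
```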


CMP Gather:

How to order the data?

Q: what is a CMP?

A: A collection of traces from the same sub-surface point, with different source-receiver offset values (preferably).

Q: Why CMP domain?

A: NMO, stack, remove some structural influences from the ‘gather’

NMO Correction:

How to correct for time differences due to offset within the CMP?

NMO corrects for the arrival-time differences due to source-receiver offset variations; it attempts to correct each trace to the zero-offset case.


(Normal Move out – NMO)

The equation is valid provided offsets are not too large (spread < 6 km?) and assuming the velocity does not vary laterally. Otherwise higher-order terms have to be included in the equation.
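The moveout equation referred to here is the standard two-term hyperbola, t(x)^2 = t0^2 + x^2 / Vnmo^2, where t0 is the zero-offset two-way time, x the offset and Vnmo the NMO (stacking) velocity. A minimal sketch of applying the correction to a gather, with a hypothetical velocity function (Python/NumPy):

```python
import numpy as np

def nmo_correct(gather, offsets, dt, v_nmo):
    """Map each sample from t = sqrt(t0^2 + (x / v)^2) back to its t0 position."""
    n_t, n_x = gather.shape
    t0 = np.arange(n_t) * dt
    corrected = np.zeros_like(gather)
    for i, x in enumerate(offsets):
        t_x = np.sqrt(t0**2 + (x / v_nmo) ** 2)     # hyperbolic travel time per t0
        corrected[:, i] = np.interp(t_x, t0, gather[:, i], left=0.0, right=0.0)
    return corrected

dt = 0.004
offsets = np.arange(100.0, 3100.0, 100.0)            # m
gather = np.random.randn(1000, len(offsets))         # stand-in for a CMP gather
v_nmo = np.linspace(1600.0, 3000.0, 1000)            # m/s, increasing with t0
nmo_gather = nmo_correct(gather, offsets, dt, v_nmo)
```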


Demultiple:

How to remove false reflections?

General Properties of Multiples:

- Low velocity (high moveout): velocity increases with depth, so multiples retain the lower velocities of the shallower layers
- High amplitude: less geometric spreading
- Periodic: repeated cycles in horizontal layers
- Predictable: from the primaries

Primary and Multiple Velocity:

• As the primary and multiple energy have both travelled through the same layers, and the multiple has simply spent longer in the layer, what is their velocity relationship?


Migration:

Do the reflections all come from vertically below?

Migration: A process which attempts to correct the distortions of the geological structure inherent in the seismic section.

Migration re-distributes energy in the seismic section to better image the true geological structures.

Why Migration??

- Rearrange seismic data so that reflection events may be displayed at their true positions in both space and time: laterally in the up-dip direction and upward in time.
- Collapse diffractions back to their point of origin.
- Improve lateral resolution - collapse the Fresnel zone.
- Obtain more accurate velocity information (when performed pre-stack).
- Obtain a more accurate 'depth' section.


How do geologic features appear after migration?

Dipping events:
- Dipping events appear steeper
- Migration moves events up dip
- Migration steepens events
- Migration shortens events

Anticline:
- The anticline is broader and less steep on the 'stack' section
- On the migrated section it appears less broad, with steeper sides

Syncline:
- Synclines appear on the stacked section as bow ties
- Migration corrects this shape


Migration comparison:

Pre-stack
- Pluses: migrated data are used to pick the velocity analysis.
- Minuses: higher cost than post-stack; low S/N.

Post-stack
- Pluses: high S/N ratio; lower cost than pre-stack.
- Minuses: the assumptions in the stacking process break down where there are dip and velocity variations.

Time
- Pluses: good results if velocity and dip variations are not too complex, at an affordable price.
- Minuses: algorithms do not take account of ray bending; poor where there are large dip and velocity variations.

Depth
- Pluses: algorithms take account of ray bending.
- Minuses: requires a very accurate velocity-depth model; time and cost increase.

2D
- Pluses: two-pass migration of 3D data allows the use of different algorithms and extra QC.
- Minuses: only uses energy from the plane of the section.

3D
- Pluses: uses energy from in and out of the plane of the section.
- Minuses: resource/cost issues.


CMP Stack:

How to reduce the number of traces?

Produces a ‘zero-offset’ trace (It results in S/N improvement)

What is Stacking?

We take all the traces that have the same common mid point (in 2D) or the same bin (in 3D) and sum them together.

The CMP locations for the 5 source/receiver pairs all fall in the same bin. These 5 traces would be collected together ('gathered') and then summed together to make one trace ('stacked').

- Shot-gather data need to be sorted into CMP gathers.
- NMO correction is applied to the CMP gathers.
- The NMO-corrected CMP gathers are stacked, as sketched below.
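A minimal sketch of the stacking step itself: after NMO correction, the traces of a CMP gather are summed sample by sample and normalised by the fold (the gather here is a hypothetical stand-in; Python/NumPy):

```python
import numpy as np

def stack_cmp(nmo_gather):
    """Sum the NMO-corrected traces of one CMP gather and divide by the fold."""
    fold = nmo_gather.shape[1]
    return nmo_gather.sum(axis=1) / fold     # one 'zero-offset' stacked trace

# Hypothetical NMO-corrected CMP gather: 1000 time samples, fold of 30.
nmo_gather = np.random.randn(1000, 30)
stacked_trace = stack_cmp(nmo_gather)
```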


For all sorts of reasons, the ideal seismic section would consist of a series of traces shot with the shot and receiver in the same position. This would produce a true zero-offset or normal-incidence section where, for a horizontal reflector, the incident rays would be at right angles.

In practical terms, placing the recording instruments on top of the shot is not a viable proposition! So, in the real world, our shot and receiver are always some distance (or offset) apart, and our reflections will include some distortion due to the increased travel time of the ray path to the longer offsets.

The most important correction that is applied is that of normal moveout, usually referred to as NMO.

Stacking Velocity (Vnmo):

The velocity associated with the best-fit hyperbola that corrects moveout on CMP gathers and aligns signal from the same reflector.

For small offsets and horizontal layering, Vnmo ~ Vrms.


The velocity we deal with most!

The stacking or NMO velocity is the velocity of a constant, homogeneous, isotropic layer above a reflector which would give approximately the same offset dependence (normal moveout) as actually observed. It is the value determined by a velocity analysis and is the value used for optimum common-midpoint stacking.

- These are the velocities measured during velocity analysis.
- They are often (erroneously) referred to as RMS velocities (Vrms).
- They increase in value in the presence of dipping events.
- The stacking velocity (Vnmo) approaches the RMS velocity (Vrms) only for small offsets.
- For a single-layer model, homogeneous and isotropic, stacking = RMS = interval = average.


The Power of Stack:

Stacking relies on signal being in phase and noise being out of phase, i.e. the primary signal is 'flat' on the CMP gather after NMO corrections.

- It is a spatial or K-filtering process.
- It is a data reduction - usually to an [almost] 'zero-offset' trace.
- It attenuates coherent noise in the input record (to varying degrees).
- It attenuates random noise relative to signal by up to sqrt(N) in amplitude, where N is the number of traces stacked (i.e. the fold of stack).

K filter - filtering of spatial frequencies by summing/mixing.

K-filter: apply an 'all-ones' filter and output the central sample. To apply a spatial K-filter to a record we must first collect the series of samples having the same time value from each data trace, i.e. form a common-time trace.

This is the input data which must be convolved with our chosen filter to produce the filtered output. The process is applied to each common-time trace in turn (0 ms, 4 ms, 8 ms, etc.).

The summing filter is a high-cut spatial filter. It passes energy close to K = 0, i.e. effectively dips close to 0 ms per trace. Therefore, if the signal has been aligned to zero dip (as in NMO-corrected data), the signal will be passed.

Organized noise contained in steeper dips will be suppressed, except at low temporal frequencies or if the noise aliases and wraps around through K = 0.

If we increase the number of filter points, i.e. increase the fold, then the filter becomes more effective at passing only energy close to K = 0, or dips closer to zero.


Migration: Migration of seismic data moves dipping events to their correct positions, collapses diffractions, increases spatial resolution and is probably the most important of all processing stages.

Migration theory has long been established, but restricted computer power has driven the industry to a bewildering array of ingenious methods to perform and enhance the accuracy of migration.

It could be argued that much of the past research has been directed towards doing migration less wrong rather than doing it right. Certainly there has been more research into migration algorithms than into the critical factor of determining the correct velocity model to use.

With today's availability of cheap computer power, modern practice tends towards doing migration as correctly as possible rather than as cheaply as possible. Most migration algorithms have good points and bad points and work better in some data areas than in others.

As in much of processing, the choice of which migration algorithm to apply is rather subjective. In this section we introduce the basic theory of migration and discuss the various methods and terminology which have built up over the last 30 years. Yilmaz (1987) and Bancroft (1998) contain many further details and examples of migration.

Basic Theory:

Zero-Offset Migration:

The theory of zero-offset migration is important since the stacking process simulates a zero-offset section as well as attenuating noise and multiples. The migration process is then referred to as poststack migration or zero-offset migration.

If the stack does not produce a good approximation to the zero-offset section, then prestack migration must be performed prior to stacking. Due to the data volumes involved, prestack migration takes at least the fold of the data longer to compute than poststack migration.


The adjacent figure (a) shows a zero-offset seismic experiment conducted over a constant-velocity medium. Sources and receivers are marked by red dots. The image of a dipping reflector with dip ß results in the seismic section (b), where each reflection point is plotted in green below the receiver at a time equal to its reflection time (t1 to t4).

On the seismic section, the dip α and the position of the reflector are incorrect, and an interpretation of this section would be in error. The equation shown in (b) relates the dip before and after migration. The maximum dip on the seismic section of 45° corresponds to a reflector dip of 90°.

By taking a semicircular arc equal to the travel time from each of the recorded positions and constructing a line tangent to the arcs, the true migrated position of the reflector is discovered (c). The process of migration makes the resulting image look like the true geological structure. Migration is sometimes also called imaging.

The migration process has moved the reflection up dip, and the migrated segment (blue) is steeper and shorter than the reflection segment (green).

Frequencies will be lower on the migrated segment. In the diagram the velocity is assumed to equal 1, so the vertical axes of time and depth are interchangeable.

For the migration to be correct (figure (a)), the vertical axis of (c) would be in depth and would require the velocity to be known (in order to convert from the recorded time section to the migrated depth section).


Kirchhoff Migration:

The earliest methods of migration by hand used the semicircular construction shown in the adjacent figure (a) for the migration of a single point, shown in green. The migrated result, shown in blue, is a semicircle in a constant-velocity medium.

This result is also called the impulse response of the process and is especially useful since a seismic section can be considered to consist of a series of spikes - the migrated reflectors will occur where the semicircles constructively interfere.

This is called Hagedoorn migration, where the amplitude of the spike on the input time section is distributed along a semicircle on the output migrated time section. Destructive interference will cancel out noise, but sometimes residual semicircular smiles are seen in the resulting section as a result of noise.

In (b) of the adjacent figure the constant-velocity semicircle construction is used to migrate a hyperbolic diffraction curve (green) to its migrated position (blue point), where the semicircles interfere. An alternative method would be to sum the amplitudes along the hyperbola and place the summed amplitude at the apex. This latter form of migration formed the basis for the first computer algorithms and is called diffraction summation, diffraction stack or, more generally, Kirchhoff migration. In figure (c) a Kirchhoff summation is illustrated for the migration of a dipping event.

The zero-offset section is considered to be a superposition of diffractors at each time sample (Huygens' principle). The diffractors interfere to form coherent events, and individual diffractions may be visible at discontinuities such as faults. At each output migrated position (shown by the blue dots and line) the amplitudes of the input zero-offset time data (green dots and line) are summed along a series of hyperbolas controlled by the velocity field (some of which are illustrated). Maximum amplitudes will occur at the migrated event; otherwise the amplitudes will be minimal.
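A toy constant-velocity diffraction-stack (Kirchhoff-style) migration sketch of the summation just described: each output sample is formed by summing input amplitudes along the zero-offset diffraction hyperbola for a scatterpoint at that location. No amplitude weighting, anti-aliasing or aperture control is applied, and all parameters are hypothetical (Python/NumPy):

```python
import numpy as np

def kirchhoff_migrate(section, dx, dt, velocity):
    """Sum amplitudes along constant-velocity zero-offset diffraction hyperbolas."""
    n_t, n_x = section.shape
    t0 = np.arange(n_t) * dt
    x = np.arange(n_x) * dx
    migrated = np.zeros_like(section)
    for ix_out in range(n_x):                   # output (migrated) trace position
        dist = x - x[ix_out]                    # horizontal distance to each input trace
        for it_out in range(n_t):
            # Two-way zero-offset time from a scatterpoint at (x[ix_out], t0[it_out])
            t_diff = np.sqrt(t0[it_out] ** 2 + (2.0 * dist / velocity) ** 2)
            it_in = np.rint(t_diff / dt).astype(int)
            valid = it_in < n_t
            migrated[it_out, ix_out] = section[it_in[valid], np.nonzero(valid)[0]].sum()
    return migrated

dx, dt, velocity = 25.0, 0.004, 2500.0
section = np.zeros((400, 120))
section[200, 60] = 1.0                          # a single spike on the zero-offset section
image = kirchhoff_migrate(section, dx, dt, velocity)   # the spike spreads into the impulse response
```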


Migration:

A major difference between migration algorithms arises from the way the velocity field is utilised. In the early 1970s, when migration algorithms were being developed, computer power was so limited that several approximations were introduced in order to get programs to run in anything like a reasonable time.

These assumptions led to time migration - a process which collapses diffractions and moves dipping events toward their true positions but leaves the migrated image with a time axis which must be depth-converted at a later stage. Time migration assumes that the diffraction shape is hyperbolic and ignores ray bending at velocity boundaries.

Depth migration assumes that the arbitrary velocity structure of the earth is known and will compute the correct diffraction shape for the velocity model. The data are then migrated according to the diffraction shape and the output is defined with a depth axis (although results are often stretched back to time to enable comparison with time migrations).

If the velocity model for the depth migration is incorrect then the migration will be incorrect, and the error may be difficult to detect if the migration is performed post-stack.


The exploding-reflector model and the finite-difference methods automatically take care of the amplitudes when using the downward-continuation method.

Similarly, the FK method of migration applies a defined amplitude scaling when moving the data in FK space.

Estimation of the diffraction-stack amplitudes proved more of a challenge until the Kirchhoff integral solution to the wave equation provided a theoretical foundation.

Assumptions used in the design of geological models are reviewed in preparation for evaluating the design of migration programs that are derived from the wave equation. A review of Kirchhoff migration is then presented that begins as a diffraction-stack process and then proceeds to matched-filtering concepts and the integral solution to the wave equation.

One-dimensional (1D) convolution modelling and deconvolution are then used to introduce inversion concepts that lead to "transpose" processes and matched filtering. These concepts are then expanded for two-dimensional (2D) data, to illustrate that Kirchhoff migration is a "transpose" process or matched filter that approximates seismic inversion.

Evolution of amplitude in Kirchhoff migrations:

Diffraction stacking or Kirchhoff migration produces one migrated sample at a time by, first, computing a diffraction shape for a scatterpoint at that location; second, summing and weighting the input energy along the diffraction path; and third, placing the summed energy at the scatterpoint location on the migrated section.

The process is repeated for all migrated samples. During summation, the amplitudes of the input data are weighted, and it is this weighting of the input data that we are investigating, and which is the dominant objective of many inversions.


Seismic traces contain wavelets that represent different properties, depending on the assumed model. For example, with flat data, the peak amplitude of the wavelet may be assumed to represent the amplitude of a reflecting boundary, or the same wavelet may be considered part of a wave field. The amplitude will be handled differently when combining all the traces to form an image of the subsurface.

Amplitudes may be computed by a number of processes such as:

• stacking
• diffraction stacking and matched filtering
• solutions to the wave equation
• inversion principles

all of which are based on a specific type of model.

Consider the preparation of traces in a common midpoint (CMP) gather where gain recovery has been applied to each trace. We now assume that the amplitudes of the wavelets represent the reflection coefficients from the subsurface geology. Normal moveout (NMO) correction has been applied to match the travel times of offset traces with those at zero offset.

A mute is then applied to ensure that all the contributing wavelets look similar. These wavelets are summed and then divided by the number of contributing traces, to produce an average of the wavelets. This averaging process maintains the amplitude of the wavelet while attenuating the amplitude of noise. The result is a zero-offset trace with an improved signal-to-noise ratio (SNR).

Seismic imaging is considered key to reducing risk and cost in exploratory as well as development drilling. Although we have recently seen important advances, the authors claim that a step change is required to significantly improve the industry's ability to obtain accurate seismic images of oil and gas reservoirs within geologically complex settings.


Kirchhoff integral solution to the wave-equation:

The parameters used in the diffraction-stack method were estimated from physical modelling experiments. This migration process became rigorous when it was recognized [Schneider 1978] that the Kirchhoff integral solution to the wave equation, which had been used in optics, gave a theoretical solution for seismic migration.

This theoretical solution provided both the amplitude and phase filters that had previously been predicted by experimentation. A 2D integral solution to the wave equation from Gazdag (1984) is shown in the equation:

where r is the radial distance from the source-receiver location to the scatterpoint, c = V/2, and β is the geological dip at the appropriate position on the diffraction. The cosine term may be replaced by T0/T, giving a more familiar form of:

The Kirchhoff, FK and downward-continuation methods of seismic migration are based on wave-equation solutions. These migration algorithms produce an image of the sub-surface by propagating the energy recorded at the surface back to the area of the reflector.

In contrast to these wave-equation methods, seismic inversion attempts to estimate the reflectivity of a geological model from the recorded energy.

Quite often these inversions produce an algorithm that is almost identical to that of the Kirchhoff method, with only slight changes to the amplitude scaling.


Some Glossaries:

AGC - Automatic gain control. An amplitude gain procedure applied to the trace that equalizes the trace energy over a contiguous sequence of specified time windows. After application of AGC, attenuation and geometrical-spreading effects can be roughly corrected for and reflection amplitudes are normalized to be about the same value.

CMG - Common midpoint gather. A collection of traces all having the same midpoint location between the source and geophone.

COG - Common offset gather. A collection of traces all having the same offset displacement between the source and geophone.

CRG - Common receiver gather. A collection of traces all recorded with the same geophone but generated by different shots.

CSG - Common shot gather. Vibrations from a shot (e.g., an explosion, air gun, or vibroseis truck) are recorded by a number of geophones, and the collection of these traces is known as a CSG.

Fold - The number of traces that are summed together to enhance coherent signal. For example, a common midpoint gather of N traces is time-shifted to align the common reflection events with one another, and the traces are stacked to give a single trace with fold N.

IVSP data - Inverse vertical seismic profile data, where the sources are in the well and the receivers are on the surface. This is the opposite of the VSP geometry, where the sources are on the surface and the receivers are in the well. An IVSP trace will sometimes be referred to as a VSP trace or a reverse vertical seismic profile (RVSP) seismogram.

OBS survey - Ocean bottom seismic survey. Recording devices are placed along an areal grid on the ocean floor and record the seismic response of the earth for marine sources, such as air guns towed behind a boat. The OBS trace will be classified as a VSP-like trace.


Reflection coefficient - A flat acoustic layer interface that separates two homogeneous isotropic media with densities ρ1 and ρ2 and compressional velocities v1 and v2 has the pressure reflection coefficient (ρ2v2 - ρ1v1)/(ρ2v2 + ρ1v1). This assumes that the source plane wave is normally incident on the interface from the medium indexed by the number 1.

RTM - Reverse time migration. A migration method where the reflection traces are reversed in time as the source-time history at each geophone. These geophones now act as sources of seismic energy and the fields are back-propagated into the medium (Yilmaz, 2001).

Stacking - Stacking traces together is equivalent to summation of traces. This is usually done with traces in a common midpoint gather after aligning events from a common reflection point.

S/N - Signal-to-noise ratio. There are many practical ways to compute the S/N ratio. Gerstoft et al. (2006) estimate the S/N of seismic traces by taking the strongest amplitude of a coherent event and dividing it by the standard deviation of a long noise segment in the trace.

SSF - Split-step Fourier migration. A migration method performed in the frequency, depth, and spatial-wavenumber domains along the lateral coordinates (Yilmaz, 2001).

SSP data - Surface seismic profile data. Data collected by locating both shots and receivers on or near the free surface.

SWD data - Seismic-while-drilling (SWD) data. Passive traces recorded by receivers on the free surface with a moving drill bit as the source. Drillers desire knowledge about the rock environment ahead of the bit, so they sometimes record the vibrations that are excited by the drill bit. These records can be used to estimate subsurface properties, such as reflectivity (Poletto and Miranda, 2004).

SWP data - Single well profile data. Data are collected by placing both shots and receivers along the same well.


VSP data - Vertical seismic profile data. Data collected by firing shots at or near the free surface and recording them with receivers in a nearby well. The well can be either vertical, deviated, or horizontal.

Xwell data - Crosswell data. Data collected by firing shots along one well and recording the resulting seismic vibrations with receivers along an adjacent well.

ZO data - Zero-offset data, where the geophone is at the same location as the source.

[Figure: Source-receiver configurations for four different experiments: SSP = surface seismic profile, VSP = vertical seismic profile, SWP = single well profile, and Xwell = crosswell. Each experiment can have many sources or receivers at the indicated boundaries (the horizontal solid line is the free surface, the vertical thick line is a well). The derrick indicates a surface well location, y denotes the reflection point, and the stars indicate sources.]


References:

Bancroft, J.C., 1998. A Practical Understanding of Pre- and Poststack Migration, Volumes 1 & 2. SEG.

Hatton, L., Worthington, M.H., & Makin, J., 1986. Seismic Data Processing - Theory and Practice. Blackwell.

McQuillin, R., Bacon, M., & Barclay, W., 1984. An Introduction to Seismic Interpretation. Graham & Trotman.

Sheriff, R.E., 1991. Encyclopedic Dictionary of Exploration Geophysics. SEG.

Sheriff, R.E. & Geldart, L.P., 1982. Exploration Seismology, Volumes 1 & 2. Cambridge University Press.

Yilmaz, O., 1987. Seismic Data Processing. SEG.

