Page 1: For Peer Review - University of Oklahomamcee.ou.edu/aaspi/publications/2019/Kim_et_al_2019... · 2019-01-30 · to providing the most reliable predictions. Exploration seismic data

Attribute selection in seismic facies classification: application to a Gulf of Mexico 3D seismic survey

Journal: Interpretation

Manuscript ID Draft

Manuscript Type: 2018-04 Machine learning in seismic data analysis

Date Submitted by the Author: n/a

Complete List of Authors: Kim, Yuji; University of Oklahoma, Geology and Geophysics. Hardisty, Robert; University of Oklahoma, ConocoPhillips School of Geology and Geophysics. Marfurt, Kurt; University of Oklahoma, College of Earth and Energy.

Keywords: Facies, seismic attributes

Subject Areas: Integrated workflows and best practices (with broad applicability), Application examples (applying a relatively new technique or concept), Structural, stratigraphic, and sedimentologic interpretation

https://mc.manuscriptcentral.com/interpretation

Interpretation

Page 2: For Peer Review - University of Oklahomamcee.ou.edu/aaspi/publications/2019/Kim_et_al_2019... · 2019-01-30 · to providing the most reliable predictions. Exploration seismic data

For Peer Review

ATTRIBUTE SELECTION IN SEISMIC FACIES

CLASSIFICATION: APPLICATION TO A GULF OF MEXICO

3D SEISMIC SURVEY

Yuji Kim1, Robert Hardisty1, and Kurt J. Marfurt1

1 The University of Oklahoma, ConocoPhillips School of Geology and

Geophysics, Norman, Oklahoma, USA. E-mail: [email protected];

Original paper date of submission: 11.30.2018


ATTRIBUTE SELECTION IN SEISMIC FACIES CLASSIFICATION:

APPLICATION TO A GULF OF MEXICO 3D SEISMIC SURVEY

Abstract

Automated seismic facies classification using machine learning algorithms is becoming more

common in the geophysics industry. Seismic attributes are frequently used as the input to

classification because some attributes express certain geologic patterns or depositional

environments better than the original seismic amplitude. Selecting appropriate attributes is a

crucial part of the classification workflow, both to control computational cost and to build a reasonable model.

For unsupervised learning, principal component analysis (PCA) can reduce the dimensionality of the data while preserving as much of the variance as possible. For supervised learning, the best attribute subset can be built by selecting input attributes that are relevant to the output class and avoiding redundant attributes that are similar to each other. Multiple attributes are tested to classify

salt diapirs, mass transport deposits (MTDs) and the conformal reflector “background” for a 3D

seismic marine survey acquired on the northern Gulf of Mexico shelf. We analyze attribute-to-attribute correlation and the correlation between the input attributes and the output classes to understand which attributes are relevant and which are redundant. We find that the amplitude and texture attribute families are able to differentiate salt and MTDs. Multivariate analysis using

filter, wrapper, and embedded algorithms ranks the attributes by importance, indicating the best

attribute subset for classification. We show that attribute selection algorithms for supervised

learning can not only reduce computational cost but also enhance the performance of the

classification.


Introduction

Machine learning and big data analysis have drawn much attention recently and have been

implemented in many different industries. In the exploration and production (E&P) industry,

automated seismic facies classification and pattern recognition are gradually being integrated into

common workflows. Several machine learning algorithms such as self-organizing map (SOM) and

K-means clustering have been applied to automate seismic facies classification and are available

in several commercial interpretation software packages. A great number of different types of

seismic attributes can be used for classification and pattern recognition in machine learning

algorithms. However, some attributes express certain geologic or depositional pattern more

effectively than others. For instance, the envelope (reflection strength) is sensitive to changes in

acoustic impedance and has long been correlated to changes in lithology and porosity (Chopra and

Marfurt, 2005). In many cases, the instantaneous frequency enhances interpretation of vertical and lateral variations in layer thickness. Coherence measures lateral changes in the seismic waveform, which in turn can be correlated to lateral changes in structure and stratigraphy (Marfurt

et al., 1998). Understanding the classification methods and nature of seismic attributes is crucial

to providing the most reliable predictions. Exploration seismic data are “big” while attributes may

be highly redundant. Adding to this problem, the original seismic amplitude data (and therefore

subsequently derived attributes) may contain significant noise (Coléou et al., 2003).

A number of studies find that dimensionality reduction in machine learning problems reduces computation time and storage space while still yielding meaningful facies classification results (Coléou et al., 2003; Roy et al., 2010; Roden et al., 2015). Principal component analysis (PCA) is

one of the most popular methods to reduce dimensionality, reducing a large multidimensional


(multiattribute) data set into a lower dimensional data set spanned by composite (linear

combinations of the original) attributes, while preserving variation. Self-organizing mapping

(SOM) also creates a lower-dimensional representation of high-dimensional input data to aid

interpretation. Both PCA and SOM are types of unsupervised learning, where the goal is to define

the underlying structure of the input data.
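As an illustration of this unsupervised reduction step, the following minimal sketch projects a multiattribute dataset onto its first principal components, assuming scikit-learn is available; the random arrays are placeholders, not the attributes analyzed in this paper.

```python
# Hypothetical sketch of PCA dimensionality reduction on a multiattribute dataset.
# The data here are random placeholders, not this survey's attribute volumes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_voxels, n_attributes = 5000, 20
X = rng.normal(size=(n_voxels, n_attributes))   # stand-in for 20 attribute volumes

X_std = StandardScaler().fit_transform(X)       # attributes carry different units
pca = PCA(n_components=10).fit(X_std)
X_reduced = pca.transform(X_std)                # composite attributes for SOM/clustering

print(X_reduced.shape)                          # (5000, 10)
print(pca.explained_variance_ratio_.cumsum())   # variance retained per added component
```

Each retained component is a linear combination of the original attributes, matching the "composite attribute" description above.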

Roden et al. (2015) used PCA to define a framework for multiattribute analysis to

understand which seismic attributes are the most significant for unsupervised learning. In their study,

the combination of attributes determined by PCA is then used as input to SOM to identify geologic

patterns and to define stratigraphy, seismic facies and direct hydrocarbon indicators. Zhao et al.

(2018) built on these ideas and suggested a weight matrix computed from the skewness and

kurtosis of the attribute histograms to improve unsupervised SOM learning.

In general, attribute selection in unsupervised learning relies on the data distribution of the

input attributes and the correlation between input attributes. In contrast, supervised learning maps

a relationship between input attributes and the desired classified output using an interpreter-

defined training dataset, constructing a nonlinear inferred function to do so. A number of

supervised learning studies suggest alternatives other than PCA for attribute selection, also known

as feature selection or variable selection, to reduce dimensionality (Jain and Zongker, 1997;

Chandrashekar and Sahin, 2014). In this paper, we introduce multiple strategies to select appropriate attributes for seismic facies classification with a case study. Our goals are to 1) provide a good classification model in terms of validation accuracy while avoiding overfitting, 2) reduce computation time and memory space, and 3) aid attribute selection in unsupervised learning where the training data are not defined.


A desirable attribute subset might simply be built by detecting relevant attributes and

discarding the irrelevant attributes (Sánchez-Maroño et al., 2007), where relevant attributes are

input attributes which are highly correlated with the training output. Redundant attributes are input

attributes which are highly correlated with other attributes. Barnes (2007) suggested that there are a great many redundant and useless attributes that breed confusion in conventional human interpretation and may also pose problems in machine learning.

To avoid inefficiency and build a simple predictive model, in this paper we evaluate several

attribute selection algorithms to maximize relevance and minimize redundancy, thereby building

an efficient subset of attributes for supervised facies classification. We begin with an overview of

the different attribute evaluation schemes as well as alternative correlation metrics. We then

address a specific problem of differentiating salt facies from mass transport deposits in a Gulf of

Mexico survey from a background of relatively conformal sedimentary reflectors, and generate 20

attribute volumes belonging to five attribute families. We define polygons around the key facies

to be differentiated, generating training data. We then determine which combination of attributes

provides the greatest facies discrimination. Finally, we use this attribute subset to classify the

volume using a random forest decision tree supervised classification algorithm, and identify areas

of successful classification and areas that are more problematic.

Methodology

Attribute selection methods can be classified into three groups: 1) a filter model, which uses a correlation or dependency measure; 2) a wrapper model, which applies a predictive model to evaluate the performance of an attribute subset; and 3) an embedded model, which measures attribute


importance during the training process. Since multiple attributes are analyzed simultaneously in

the test, we consider our attribute selection algorithm to be a multivariate algorithm.

Correlation measures to maximize relevance, minimize redundancy

Correlation or dependence is a statistical measure of the relationship between two random

variables. The goal of finding an optimal subset can be achieved by maximizing the relevance

between input attributes and the output class, while minimizing redundancy among attributes (Yu

and Liu, 2004; Peng et al., 2005). To maximize relevance, attributes which are highly correlated

with the class are selected to comprise subsets. Redundancy is caused by attributes which are

highly correlated to each other. Thus, measuring and analyzing the correlation between an attribute and a class, as well as the correlations among attributes, is a first step in evaluating an attribute subset. A number of correlation measures can be used in feature selection; we examine Pearson correlation,

rank correlation, mutual information, and distance correlation (mathematical descriptions are given in the Appendix). The Pearson correlation coefficient (Pearson, 1895) is the most common measure, and detects only linear relationships between two random variables. A Pearson coefficient of 1 indicates a perfect positive linear relationship between two variables: if one variable increases, the other increases by a fixed proportion. Spearman’s

rank correlation, on the other hand, measures the tendency of a positive or negative relation,

without requiring that increase or decrease to be explained by a linear relationship. Figure 1

illustrates different types of relationships between variables X and Y. In Figure 1b and 1e the rank

correlation has a higher coefficient value than Pearson's correlation, because rank correlation is

able to detect non-linear positive relationships. Dependence among attributes is not always linear.

Barnes (2007) reports a high rank correlation among amplitude attributes (average reflection


strength, RMS amplitude, average absolute amplitude), which have correlation coefficients in

excess of 0.95. Discontinuity attributes such as cross-correlation (Bahorich and Farmer, 1995),

semblance (Marfurt et al., 1998), Sobel filter similarity (Luo et al., 1996), chaos (Randen et al.,

2001), eigenstructure-based methods (Gersztenkorn and Marfurt, 1999), and weighted correlation

also have strong rank correlations. Rank correlation measures non-linear but monotonic relationships. In a monotonically increasing relationship, if the value of one variable increases, so does the value of the other variable. Mutual information (MI) and distance correlation, on the other hand, can detect non-linear and non-monotonic relationships. MI is based on information theory and measures the

information that two random variables X and Y share. Conceptually, MI measures how much

knowing about one variable reduces uncertainty about the other variable. Distance correlation is

obtained from the distance covariance divided by the product of the distance standard deviations

(Appendix), which is related to the Pearson correlation in that the definition involves a product

moment. Distance correlation, however, uses the Euclidean norm to obtain pairwise distances

between points. In Figure 1c and 1f, the Pearson and rank correlation coefficients are

approximately zero, which indicates that these two measures cannot detect non-linear and non-monotonic relationships.
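The behavior described above can be reproduced with a small sketch on synthetic data; the distance-correlation helper below is written out by hand for illustration rather than taken from any particular library. For the non-monotonic relationship y = x², Pearson and Spearman coefficients are near zero, while MI and distance correlation remain clearly positive.

```python
# Illustrative comparison of the four correlation measures on a non-monotonic
# relationship (y = x^2); synthetic data, not the attributes in this paper.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.feature_selection import mutual_info_regression

def distance_correlation(x, y):
    """Distance covariance divided by the product of distance standard deviations."""
    a = np.abs(x[:, None] - x[None, :])                # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])                # pairwise distances in y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()  # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

x = np.linspace(-1.0, 1.0, 500)
y = x ** 2                                             # non-linear, non-monotonic

r, _ = pearsonr(x, y)                                  # near 0: no linear trend
rho, _ = spearmanr(x, y)                               # near 0: no monotonic trend
mi = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]  # positive
dcor = distance_correlation(x, y)                      # positive
```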

In terms of dependence between an input attribute and output class, it is also important to

identify each predictive model’s ability to map a non-linear relationship between an attribute and a class. Even if an attribute is a powerful variable with a high correlation to the class, some predictive models may degrade prediction accuracy if they cannot properly map the relationship. Figure 1 describes four types of predictive models and their ability to map input to output using regression methods: linear Bayesian, neural network (NN), random forest (RF), and support vector machine (SVM). The hyperparameters for each predictive model were selected


based on grid searches that give the best validation score. Except for the linear Bayesian model, the other three models map the input and output with high accuracy. We select NN, RF, and SVM as our test predictive models for the case studies since they are able to map the nonlinear relationships appropriately. Noise in the signal can affect the correlation in that noise adds uncertainty to predicting the output class. The sensitivity to noise also differs with the correlation measure: mutual information and distance correlation coefficients decrease more than the other measures when Gaussian noise equivalent to 0.1 N(0,1) is added to variable Y, as shown in Figures 1d, 1e, and 1f.

The correlation between attributes is a function of the correlation measure used. The correlation

measure is affected not only by non-linearity but also by the covariance of two variables or noise.

Figure 2 shows scatter plots of two attributes which have relatively high correlations. The

relationship between peak magnitude and instantaneous envelope (Figure 2b) is linear and has a higher Pearson coefficient than the other two pairs. MI and distance correlation can detect non-linear relationships, but their values between GLCM and variance are lower in Figure 2c because the covariance is lower.
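This attribute-to-attribute screening can be sketched with pandas as follows; the column names and data are illustrative stand-ins for attribute pairs like those in Figure 2, not the paper's exact volumes.

```python
# Sketch of attribute-to-attribute correlation screening with pandas.
# Columns are hypothetical attributes, not this survey's computed volumes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
env = rng.lognormal(size=1000)
df = pd.DataFrame({
    "inst_envelope": env,
    "peak_magnitude": env * (1.0 + 0.05 * rng.normal(size=1000)),  # near-linear pair
    "glcm_entropy": rng.normal(size=1000),                         # unrelated attribute
})

pearson = df.corr(method="pearson")      # linear dependence
spearman = df.corr(method="spearman")    # monotonic dependence
# Pairs with |r| > 0.9 are candidates for removal as redundant attributes.
```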

Univariate versus multivariate analysis for attribute selection

Univariate analysis considers only one attribute at a time, while multivariate analysis considers multiple attributes and the relationships between them. For instance, in univariate analysis, the most-negative curvature has been shown to highlight karst collapse features well (Qi, 2014). In univariate analysis, we evaluate whether any single attribute is relevant to the output.

When using multiple attributes as input, the relationship between attributes as well as their

relevance to the output should be analyzed in a multivariate manner. For instance, RMS amplitude

and average reflection strength can both be good hydrocarbon indicators, but it is more efficient


to include just one of these two attributes when it comes to facies classification; using both is redundant. Measuring redundancy is simple when attributes are perfectly correlated: if two attributes are perfectly correlated, then adding the second provides no additional information.

Guyon and Elisseeff (2003), however, suggest that if two variables are highly correlated, then they

may still complement each other. Also, two variables which are not relevant by

themselves can be useful when they are used together. Selecting attributes when considering

relevance and redundancy together can be a complicated problem, but attribute selection algorithms developed in a multivariate manner can address both at once.
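One way to balance relevance against redundancy is a greedy max-relevance, min-redundancy search in the spirit of Peng et al. (2005). The sketch below is a simplified illustration on synthetic data, not the paper's exact workflow: a duplicated informative attribute is penalized for its redundancy, so the independent attribute is selected instead.

```python
# Sketch of greedy max-relevance, min-redundancy (mRMR-style) selection.
# Simplified illustration with placeholder data, not the paper's algorithm.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    relevance = mutual_info_classif(X, y, random_state=0)  # attribute-to-class MI
    selected = [int(np.argmax(relevance))]
    remaining = [j for j in range(X.shape[1]) if j != selected[0]]
    while len(selected) < k:
        def score(j):
            # penalize attribute-to-attribute MI with already-selected attributes
            redundancy = np.mean([mutual_info_regression(
                X[:, [s]], X[:, j], random_state=0)[0] for s in selected])
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
relevant = y + 0.1 * rng.normal(size=300)
X = np.column_stack([relevant, relevant.copy(), rng.normal(size=300)])
subset = mrmr_select(X, y, 2)   # keeps one informative copy, rejects its duplicate
```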

Attribute selection algorithms: filters, wrappers and embedded methods

A goal of attribute selection is to reduce computation time for training and to reduce

dimensionality. Principal component analysis (PCA), for instance, transforms a set of observations of possibly inter-correlated variables into a set of linearly uncorrelated variables (Abdi and Williams, 2010). The goal is to extract the important information with fewer variables. PCA is a common, multivariate tool for attribute selection in unsupervised learning. In supervised learning, however, input-output pairs provide substantial information for attribute selection and for avoiding the overfitting problem that results from irrelevant attributes.

In terms of supervised classification, there are three major approaches to select attributes in

a multivariate manner: filters, wrappers and embedded methods. Figure 3 describes the

mechanisms of the three types of methods. Filter methods use a suitable measure or ranking

criterion such as correlation or mutual information. Relief (Kira and Rendell, 1992) is a distance-

based filter algorithm which evaluates attributes according to how well their values differentiate


between instances that are near to each other. Correlation-based Feature Selection (CFS) (Hall, 1999) is an algorithm based on a heuristic evaluation function calculated from attribute-to-class and attribute-to-attribute correlations. These filter algorithms are computationally less expensive than wrapper algorithms, which must compute a prediction model.

Wrapper methods use a predictive model and maximize classification accuracy to select

the attribute subset. Wrapper algorithms thus require greater computational resources, but perform better in the sense that they yield a higher score in the prediction model. The sequential forward selection (SFS) algorithm (Kittler, 1978), for example, starts with an empty subset and sequentially adds the attribute that yields the highest increase in score. Sequential backward selection (SBS), on the other hand, starts with the full set and sequentially removes the attribute whose elimination gives the lowest decrease in score.
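The SFS loop can be written in a few lines; this sketch uses a small random forest scored by cross-validation on synthetic data, which is a placeholder for the tuned classifiers used in this study.

```python
# Minimal sequential forward selection (SFS) loop; the classifier and data are
# illustrative placeholders, not the models or attributes of this study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def forward_select(X, y, k):
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        # add the attribute whose inclusion yields the highest validation score
        scores = {j: cross_val_score(
            RandomForestClassifier(n_estimators=50, random_state=0),
            X[:, selected + [j]], y, cv=5).mean() for j in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([y + 0.2 * rng.normal(size=200),    # one informative attribute
                     rng.normal(size=(200, 3))])        # three irrelevant attributes
subset = forward_select(X, y, 2)                        # attribute 0 is picked first
```

Backward selection (SBS) follows the same pattern, starting from the full attribute list and removing the attribute whose elimination costs the least score.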

The embedded method implements attribute selection as a part of the training process. For

instance, a random forest classifier calculates the variable importance (Breiman, 2001b; Liaw and

Wiener, 2002) during training. Another embedded technique is to compute the weights of each

attribute in the SVM classifier (Guyon et al., 2002) and in logistic regression (Ma and Huang, 2005).
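In scikit-learn, for example, the random forest importance ranking falls out of training for free; the data below are illustrative placeholders, not this survey's attributes.

```python
# Embedded ranking from a random forest's variable importance.
# Attribute columns are synthetic placeholders, not this survey's attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = np.column_stack([y + 0.3 * rng.normal(size=500),    # one relevant attribute
                     rng.normal(size=(500, 4))])        # four irrelevant attributes

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]     # most important first
print(ranking[0])                                       # 0: the relevant attribute
```

Note that, as discussed below, this importance ranking measures relevance only; two perfectly redundant attributes can both rank highly.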

Case study: Gulf of Mexico survey - attribute selection to differentiate salt, mass transport

deposits (MTDs) and conformal reflectors

Seismic expression of salt and MTDs

Salt diapirs inherently have poor internal reflectivity and are easily overprinted by crossing

coherent migration artifacts (Jones and Davison, 2014) in part due to the geometry of salt bodies

and their higher P-wave velocities than those of the surrounding strata. The


seismic amplitude pattern inside salt domes results in attribute anomalies. In general, the

mismigrated noise gives rise to a relatively low amplitude, chaotic and discontinuous seismic

pattern that results in low coherence and high gray-level co-occurrence matrix entropy. Therefore,

texture attributes, such as GLCM (entropy, homogeneity, energy), have been widely used to

differentiate salt diapirs (Berthelot et al., 2013; Qi et al., 2016). Mass transport deposits (MTDs)

are slumps, slides, and debris flows generated and emplaced by gravity-controlled processes

(Nelson et al., 2011). In the Gulf of Mexico datasets, the transport direction of MTDs is related to the geometry and growth of nearby salt diapirs. MTDs often show chaotic or highly disrupted

seismic patterns with great internal complexity (Frey-Martinez, 2010). However, in the upper part

of the MTD we may see coherent, rotated fault blocks. In general, the resulting attribute anomalies

are high RMS amplitude and low coherence (Brown, 2011; Omosanya and Alves, 2013). The

conformal reflectors around salt diapirs and MTDs show a relatively continuous seismic pattern,

which leads to high coherence and low to moderate values of GLCM entropy.

Methodology

The 3D marine seismic survey was acquired offshore Louisiana over an area of 3089 mi² (Qi et al., 2016). The poststack seismic volume includes 4367 in-lines, 1594 cross-lines, and 475 time samples with a sampling interval of 4 ms. Twenty seismic attributes in five categories were calculated for the 3D volume. The five attribute categories consist of amplitude, geometric, instantaneous, texture, and spectral attributes (Table 1). For supervised learning, we use a voxel-type training dataset rendered from geological and stratigraphic interpretation: salt, MTDs, and conformal reflectors are interpreted inside the red box (751 in-lines by 551 cross-lines) in Figure 4 and cropped as a 3D seismic volume using a polygon. 10,000 voxels in the cropped 3D seismic of


each facies were randomly selected and labeled for the training output (conformal reflectors: 0, salt: 1, MTD: 2). The training input data were then extracted from the 20 attribute volumes at the

salt: 1, MTC: 2). The training input data were then extracted from the 20 attribute volumes at the

same 10,000 locations. Figure 5 summarizes the attribute selection workflow. First, we look into

attribute-to-attribute correlations using four measures of correlation: Pearson, rank, MI, and distance correlation. These measures are suited to analyzing relationships between two continuous variables.

To investigate the relationship between an input attribute and an output class, we computed the MI

between the attribute and the class. We also use analysis of variance (ANOVA) to determine which

attributes are significant. ANOVA is an analysis tool that splits the variability found in a data set

into systematic factors and random factors. If the variation can be explained from systematic

factors, then the variable is significant in distinguishing classes. Both ANOVA and MI can be

applied to mixed variable types, in this case continuous attributes and a discrete output class. Also, both methods are univariate filters which can provide information on how well a single attribute differentiates the classes and how relevant it is to each output

class individually. Analysis of attribute-to-attribute dependence and attribute-to-class dependence

is a measure of the relevance and redundancy of attributes. To build the best subset, however,

requires a quantitative measure and an optimizing algorithm of relevance and redundancy. Thus,

we apply multivariate algorithms using three approaches: filter, wrapper and embedded methods.

Among several filter methods we use ReliefF (Kononenko, 1997) which is an updated Relief

algorithm that adopts the L1 norm to find near-hit and near-miss instances and uses absolute

differences when updating the weight vector. We evaluate the fast correlation-based filter (FCBF)

algorithm, a filter method designed for high-dimensional data. In the wrapper case, we applied SFS

and SBS with three classifiers: NN, RF, SVM. For the embedded case the RF classifier is adopted

which produces a ranking of variables during the training process. For each method, we test


the performance and evaluate the error rates of the attribute subset using the NN, RF and SVM

classifiers. We predict 3D facies using an RF classifier for the best attribute subset to test the

validity of the model.
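A condensed sketch of this supervised step is shown below; the arrays are placeholders standing in for the labeled voxels and attribute volumes described above, and the class separation is artificially injected so the example runs end to end.

```python
# Sketch of the supervised workflow: labeled voxels -> attribute vectors ->
# five-fold validation -> RF facies prediction. All arrays are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_train, n_attr = 3000, 20
labels = rng.integers(0, 3, n_train)            # 0 conformal, 1 salt, 2 MTD
# crude, artificial class separation: shift each attribute by the class label
X_train = rng.normal(size=(n_train, n_attr)) + labels[:, None]

rf = RandomForestClassifier(n_estimators=100, random_state=0)
val_accuracy = cross_val_score(rf, X_train, labels, cv=5).mean()  # five-fold score

rf.fit(X_train, labels)
volume = rng.normal(size=(10000, n_attr)) + 1.0   # flattened attribute volume (fake)
facies = rf.predict(volume)                        # one facies label per voxel
```

In practice the prediction step is applied to every voxel of the attribute volumes, and the predicted labels are reshaped back to the 3D survey geometry.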

Results and discussion

Table 2 shows the correlation between each attribute and the other attributes using the Pearson, rank, MI, and distance correlation measures. GLCM homogeneity and entropy are perfectly anti-correlated when measured with Pearson, rank, and distance correlation, suggesting we should use only one of them. Amplitude attributes such as RMS amplitude (measured in a single trace) and

total energy (measured in five adjacent traces along structural dip) are highly correlated (corr.

coeff. > 0.9). Also, these amplitude attributes are highly correlated with instantaneous envelope

and peak magnitude. Because MI is more sensitive to noise or the distribution of data points, the

MI coefficients are lower than those of other measures.

The attribute-to-class relationship is analyzed using ANOVA and MI. Both methods show

that the amplitude family of attributes (e.g. RMS amplitude, total energy, instantaneous envelope

and peak magnitude) are relevant to the output class. The texture attributes (GLCM and chaos) are

also powerful variables for classification. ANOVA and MI tell us which attributes can better

differentiate the facies of interest. However, using a univariate filter can result in redundancy in

attribute selection. For instance, both GLCM entropy and GLCM homogeneity are highly ranked

in both ANOVA and MI (Figure 6) which shows they are powerful variables. However, using both

attributes is inefficient because the two attributes are perfectly anti-correlated. To consider all

variables together, we applied (1) PCA and (2) attribute selection algorithms in a multivariate

manner. PCA with 15 components explains 0.98 of the variance, while 10 components

Page 14 of 36

https://mc.manuscriptcentral.com/interpretation

Interpretation


explain 0.90 of the variance (Figure 7). Even though PCA provides a measure of how many components are needed to explain a given fraction of the variance, it does not consider the input attribute-to-output class relationship. To take into account relevance and redundancy, we test several attribute

selection algorithms to build the attribute subsets. The 10 highest ranked attributes obtained from

each attribute selection algorithm are listed in Table 3. Algorithms in the same category

(e.g. ReliefF and FCBF algorithms of the filter method, six algorithms of the wrapper method)

show similar attribute rankings. However, the filter and the wrapper algorithms yield quite

different attribute subsets. Wrapper algorithms generally tend to select relevant attributes, which

are shown to be important in the analysis of input-to-output dependence. At the same time, wrapper

algorithms more efficiently reject redundant attributes. For instance, when total energy is chosen

in the subset, the wrapper algorithm rejects RMS amplitude from the 10 highest-ranked attribute

subset. Random forest variable selection is an example of an embedded method which tends to

choose important attributes, but they also include redundant attributes. For instance, the subset has

total energy and peak magnitude close together, while GLCM homogeneity and entropy are ranked

close as well. Figure 8 shows the error rate of the attribute subsets selected using the filter, wrapper

and embedded algorithms. The NN, RF, and SVM classifiers are used for five-fold cross-validation, and the error rate is computed from the accuracy score. Wrapper methods reduce the error rate with a small number of

attributes because the methods are based on the performance of the predictive model.
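A minimal sketch of one wrapper strategy of this kind, sequential forward selection driven by random-forest cross-validation accuracy, is shown below; the synthetic data, feature counts, and model settings are illustrative assumptions, not the configuration used for the survey.

```python
# Hedged sketch of a wrapper method: sequential forward selection (SFS)
# scored by five-fold random-forest accuracy on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# 20 stand-in "attributes", some informative and some redundant.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=5, n_classes=3, random_state=0)

rf = RandomForestClassifier(n_estimators=30, random_state=0)
sfs = SequentialFeatureSelector(rf, n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)

# Indices of the selected attribute subset.
selected = np.flatnonzero(sfs.get_support())
print("selected attribute indices:", selected)
```

Because each candidate attribute is scored by refitting the classifier, redundant attributes add little accuracy once a correlated attribute is already in the subset, which is why wrappers reject them efficiently.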

We applied attribute subsets to differentiate salt, MTC and conformal reflectors in the same

3D seismic volume we used for the training set (Figures 9, 10, and 11). The facies predicted using the 5 versus the 10 highest-ranked attributes from the wrapper methods do not differ considerably, while the time needed to compute and read the attribute volumes is substantially reduced by selecting fewer attributes. Moreover, the predicted facies


using all 20 attributes are not considerably different from those predicted with the 5 and 10 highest-ranked wrapper attributes. A limitation is that some of the MTD facies are misclassified as salt because both facies are highly discontinuous and have low coherence. Also, other discontinuous geologic features, such as faults, are misclassified as MTDs.

Conclusions

In this paper, we introduce strategies to select appropriate attributes for automated seismic

facies classification. Analyzing attribute-to-attribute dependence and attribute-to-class

relationships helps to understand which attributes are redundant and which are relevant. However,

a high correlation between attributes does not always imply that they are redundant; we need to analyze all attributes together using a framework that can quantitatively rank the attributes to

build a subset. The multivariate attribute selection algorithms yield subsets that contain fewer attributes but show good performance in differentiating salt and MTC facies

from conformal reflectors. From a geological point of view, it is challenging to define the

depositional environments in the survey area into only three discrete classes. Turbidites, faults,

overpressured shale, and seismic noise will be misclassified into one of the target classes.

However, understanding the characteristics of each seismic attribute is crucial to implementing automated facies classification and to aiding the rendering of the seismic volume features that interpreters target. Even

though the case study is focused on differentiating (classifying) salt diapirs and MTDs from

conformal reflectors, the attribute selection algorithms can be applied to other supervised

classification problems. For instance, the workflow can be applied to select physical properties


and seismic attributes that best predict reservoir properties such as porosity, permeability, and brittleness from quantitative interpretation inputs.

APPENDIX

Correlation measures

Pearson’s product-moment correlation

The most widely used correlation measure is Pearson's product-moment coefficient (Pearson, 1894). The correlation between two random variables X and Y is defined as

corr(X, Y) = cov(X, Y) / (σ_X σ_Y),                    (1)

where cov(X, Y) is the covariance between X and Y, and σ_X and σ_Y are the standard deviations of X and Y, respectively. The Pearson coefficient describes the linear dependence between two variables. Among the scatter plots in Figures 1a, 1b, and 1c, only 1a is perfectly correlated or anti-correlated in terms of Pearson's correlation.

Spearman’s rank correlation

The Spearman correlation coefficient is defined as the Pearson correlation between the ranked variables. A positive Spearman correlation corresponds to an increasing monotonic trend, while a negative one corresponds to a decreasing monotonic trend between the two random variables. The correlation assesses positive or negative relationships whether or not they are linear. According to Spearman's definition of correlation, the two variables X and Y in Figure 1b are highly correlated even though the relationship is nonlinear.
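This contrast can be demonstrated numerically. In the hedged example below, y = x³ is a monotonic but non-linear function of x, analogous to Figure 1b: Spearman's rank correlation is exactly 1 while Pearson's coefficient is smaller.

```python
# Pearson vs. Spearman on a monotonic, non-linear relationship (y = x^3),
# an illustrative analogue of Figure 1b.
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(-2, 2, 201)
y = x ** 3                     # monotonic, non-linear

p, _ = pearsonr(x, y)          # linear dependence only: < 1
s, _ = spearmanr(x, y)         # monotonic dependence: exactly 1
print(f"Pearson  {p:.3f}")
print(f"Spearman {s:.3f}")
```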


Mutual information

In information theory, the uncertainty involved in the value of a random variable is quantified

as entropy. The Shannon entropy, which measures the uncertainty of a random variable, is defined as

H = -∑_i p_i log(p_i),                    (2)

where p_i is the probability of occurrence of the i-th possible value of the source symbol (Shannon

and Weaver, 1949; Cover and Thomas, 1991). Mutual information measures the gain of

information about one random variable by observing another. Mutual information of two discrete

random variables X and Y is denoted by

I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X),                    (3)

where H(X) and H(Y) are the marginal entropies and H(X|Y) and H(Y|X) are the conditional entropies. Substituting equation (2) into equation (3) gives

I(X; Y) = ∑_{y ∈ Y} ∑_{x ∈ X} p(x, y) log( p(x, y) / (p(x) p(y)) ),                    (4)

where p(x) and p(y) are the marginal probability functions and p(x, y) is the joint probability function.
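Equations 2–4 can be implemented directly for discrete variables; the joint probability tables below are arbitrary illustrative examples (not data from this study), using natural logarithms.

```python
# Direct implementation of equation (4): mutual information of two discrete
# variables from their joint probability table.
import numpy as np

def mutual_information(p_xy):
    """I(X; Y) in nats from a joint probability table p(x, y)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = p_xy > 0                          # convention: 0 log 0 = 0
    return np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask]))

# Perfectly dependent binary variables: I(X; Y) = H(X) = log 2.
p_dependent = [[0.5, 0.0],
               [0.0, 0.5]]
# Independent binary variables: I(X; Y) = 0.
p_independent = [[0.25, 0.25],
                 [0.25, 0.25]]

print(mutual_information(p_dependent))    # log(2) ≈ 0.693
print(mutual_information(p_independent))  # 0.0
```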

Distance correlation

The distance correlation of two random variables is defined as distance covariance divided

by the product of the distance standard deviations. Distance correlation is denoted as

dCor(X, Y) = dCov(X, Y) / √(dVar(X) dVar(Y)),                    (5)


where dCov(X, Y) is the distance covariance, and dVar(X) and dVar(Y) are the distance variances of X and Y, respectively. In contrast to Pearson's covariance, which is defined as the inner product of two centered vectors, the distance covariance is defined as the product of the centered Euclidean distances D(x_i, x_j) and D(y_i, y_j) in arbitrary dimensions:

dCov(X, Y) = (1/n²) ∑_{i=1}^{n} ∑_{j=1}^{n} D(x_i, x_j) · D(y_i, y_j),                    (6)

where x ∈ X, y ∈ Y, and n is the number of samples of X and Y. Distance correlation can detect non-linear relationships, and its values are non-negative.
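The sketch below implements equations 5 and 6 under the usual sample convention, in which D is taken as the doubly centered pairwise Euclidean distance matrix (the biased V-statistic estimator); the test data are synthetic illustrations.

```python
# Sample distance correlation per equations (5)-(6), with D the doubly
# centered pairwise Euclidean distance matrix (V-statistic estimator).
import numpy as np

def _centered_distances(x):
    x = np.atleast_2d(np.asarray(x, dtype=float))
    if x.shape[0] == 1:          # treat a 1-D input as n samples in 1-D
        x = x.T
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    # Double centering: subtract row and column means, add the grand mean.
    return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    a, b = _centered_distances(x), _centered_distances(y)
    dcov2 = max((a * b).mean(), 0.0)          # equation (6), squared dCov
    dvar2_x, dvar2_y = (a * a).mean(), (b * b).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar2_x * dvar2_y))   # equation (5)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(f"{distance_correlation(x, 3 * x - 1):.2f}")  # linear map: exactly 1
print(f"{distance_correlation(x, x ** 2):.2f}")     # non-linear: clearly > 0
# Independent data: near 0 (a small positive sampling bias remains).
print(f"{distance_correlation(x, rng.normal(size=500)):.2f}")
```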


Figure 1. Different types of relationships between variables X and Y and their correlation coefficients and regression scores. Each scatter plot describes a different relationship between X and Y: (a) and (d) linear, monotonic relationships; (b) and (e) non-linear, monotonic relationships;


(c) and (f) non-linear, non-monotonic relationships. Gaussian noise 0.1 N(0, 1) has been added to variable Y in (d), (e), and (f). Coefficients are computed using the Pearson, rank, mutual information, and distance correlation methods. Regression scores are computed with linear Bayesian, NN, RF, and SVM regressor predictive algorithms. The best hyperparameters for each model are obtained using a grid-search algorithm.

Figure 2. Relationships between different attribute pairs: (a) total energy vs. RMS amplitude, (b) peak magnitude vs. instantaneous envelope, and (c) GLCM entropy vs. variance. Correlation coefficients are computed using the Pearson, rank, mutual information, and distance measures.


Figure 3. Cartoon summarizing the steps in the (a) filter, (b) wrapper, and (c) embedded attribute subset selection workflows. Note that there is no feedback in the filter workflow.


Categories of seismic attributes evaluated in facies classification:

Amplitude attributes: RMS amplitude, total energy, relative acoustic impedance

Instantaneous attributes: instantaneous envelope, instantaneous frequency, instantaneous phase

Geometric attributes: coherence, dip magnitude, dip azimuth, most-positive curvature, most-negative curvature, aberrancy magnitude, aberrancy azimuth

Texture attributes: chaos, GLCM entropy, GLCM homogeneity

Spectral attributes: peak magnitude, peak frequency, peak phase

Table 1. The seismic attribute families and the 20 specific attributes used in this study to classify conformal sediments, salt, and mass transport deposits.


Figure 4. Time slice through seismic amplitude (top) and the energy-ratio similarity attribute (bottom). The green arrow indicates a mass transport deposit (MTD), while the blue arrows indicate salt diapirs; both facies exhibit low similarity.


Figure 5. The workflow to select the best subset of attributes based on geologic relevance as well as attribute-to-attribute redundancy using three types of multivariate approaches.


Attribute-to-attribute correlation analysis: attribute pairs highly correlated with each other (corr. coeff. > 0.6).

Pearson correlation: GLCM entropy – GLCM homogeneity (-1.0); Inst. envelope – Peak magnitude (0.96); RMS amp. – Inst. envelope (0.93); RMS amp. – Peak magnitude (0.90); RMS amp. – Total energy (0.90); Total energy – Inst. envelope (0.84); Total energy – Peak magnitude (0.83); Inst. phase – Relative acoustic impedance (0.73); GLCM entropy – Chaos (0.71); GLCM entropy – Variance (0.70); Inst. freq. – Peak freq. (0.62)

Rank correlation: GLCM entropy – GLCM homogeneity (-1.0); RMS amp. – Total energy (0.99); Inst. envelope – Peak magnitude (0.95); RMS amp. – Inst. envelope (0.94); RMS amp. – Peak magnitude (0.92); Total energy – Peak magnitude (0.82); Inst. phase – Relative acoustic impedance (0.81); GLCM entropy – Variance (0.80); GLCM entropy – Chaos (0.71); Inst. freq. – Peak freq. (0.64); Inst. envelope – GLCM homogeneity (0.62)

Mutual information: RMS amp. – Total energy (0.9); GLCM entropy – GLCM homogeneity (0.85); Inst. envelope – Peak magnitude (0.74); RMS amp. – Inst. envelope (0.72); Total energy – Inst. envelope (0.71); RMS amp. – Peak magnitude (0.69); Total energy – Peak magnitude (0.68)

Distance correlation: GLCM entropy – GLCM homogeneity (0.97); RMS amp. – Total energy (0.92); Inst. envelope – Peak magnitude (0.87); RMS amp. – Inst. envelope (0.83); RMS amp. – Peak magnitude (0.78); Total energy – Inst. envelope (0.78); Total energy – Peak magnitude (0.74)

Table 2. Attribute-to-attribute correlation analysis using the Pearson, rank, MI, and distance correlation measures. Attribute pairs exhibiting high correlation (correlation coefficient > 0.6) are ranked in descending order.


Figure 6. Univariate analysis describing the relationship between a single input attribute and the desired output classes using (a) the analysis of variance (ANOVA) F-value and (b) mutual information. Both analyses show that amplitude- and texture-type attributes are important variables for classification. Unless the prediction is restricted to a specific horizon, phase varies between -180° and +180° with increasing time and is poorly correlated to the output class. Examining Figure 4, it is clear that the azimuth of reflector dip, faults, and flexures for this data set also varies between -180° and +180° and is not correlated to any one facies.


Figure 7. Explained variance as a function of the number of principal components for the 20 seismic attributes. 15 components explain 0.98 of the total variance, while 10 components explain 0.90.

Table 3. Selected attribute subsets using filter (ReliefF, FCBF), wrapper (NN, RF, SVM), and embedded (RF, logistic regression) methods. Each subset includes the 10 best attributes ranked in descending order.


Figure 8. The number of attributes included in the attribute subset vs. error rate. For (a), (b), and (c), attributes in the subset were selected using filter methods (ReliefF, FCBF) while error rates were computed with the (a) NN, (b) RF, and (c) SVM classifiers. For (d), (e), and (f), attributes in the subset were selected using wrapper methods (SFS, SBS) while error rates were computed with the (d) NN, (e) RF, and (f) SVM classifiers. For (g), (h), and (i), attributes in the subset were selected using an embedded method (RF) while error rates were computed with the (g) NN, (h) RF, and (i) SVM classifiers.

Figure 9. A representative time slice at t = 1.1 s through (a) seismic amplitude, and facies predicted using (b) the 5 highest-ranked wrapper attributes, (c) the 10 highest-ranked wrapper attributes, and (d) all 20 attributes. The red polygon in (a) is a human-delineated MTD.


Figure 10. A representative vertical slice along line AA' through (a) seismic amplitude, and seismic facies predicted using (b) the 5 highest-ranked wrapper attributes, (c) the 10 highest-ranked wrapper attributes, and (d) all 20 attributes. The three red polygons in (a) are human-interpreted MTDs.


Figure 11. A shallow time slice at t = 0.612 s through (a) seismic amplitude, (b) coherence, and (c) facies predicted using the 5 highest-ranked wrapper attributes. The arrow in (c) indicates an area where MTCs are misclassified as salt.


References

Abdi, H., and L. J. Williams, 2010, Principal component analysis: Wiley Interdisciplinary

Reviews: Computational Statistics, 2, 433-459.

Bahorich, M. S., and S. L. Farmer, 1995, 3-D seismic coherency for faults and stratigraphic

features: The Leading Edge, 1053-1058.

Barnes, A. E., 2007, Redundant and useless seismic attributes: Geophysics, 72, P33-P38.

Berthelot, A., A. H. Solberg, and L. J. Gelius, 2013, Texture attributes for detection of salt:

Journal of Applied Geophysics, 88, 52-69.

Breiman, L., 2001, Random forests: Machine Learning, 45, 5-32.

Brown, A. R., 2011, Interpretation of three-dimensional seismic data: Society of Exploration Geophysicists and American Association of Petroleum Geologists.

Chandrashekar, G., and F. Sahin, 2014, A survey on feature selection methods: Computers &

Electrical Engineering, 40, 16-28.

Chopra, S., and K. J. Marfurt, 2005, Seismic attributes—A historical perspective: Geophysics,

70, 3SO-28SO.

Coléou, T., M. Poupon, and K. Azbel, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942-953.

Cover, T. M. and J. A. Thomas, 1991, Entropy, relative entropy and mutual information:

Elements of Information Theory, 2, 1-55.


Frey-Martinez, J., 2010, 3D Seismic interpretation of mass transport deposits: Implications for

basin analysis and geohazard evaluation: In Submarine Mass Movements and Their

Consequences, Springer, Dordrecht, 553-568.

Gersztenkorn, A., and K. J. Marfurt, 1999, Eigenstructure-based coherence computations as an aid to 3-D structural and stratigraphic mapping: Geophysics, 64, 1468-1479.

Guyon, I., and A. Elisseeff, 2003, An introduction to variable and feature selection: Journal of

Machine Learning Research, 3, 1157-1182.

Hall, M. A., 2000, Correlation-based feature selection of discrete and numeric class machine

learning: Proc. 17th Int’l Conf. Machine Learning, 359-366.

Jain, A., and D. Zongker, 1997, Feature selection: Evaluation, application, and small sample

performance: IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 153-158.

Jones, I. F., and I. Davison, 2014, Seismic imaging in and around salt bodies: Interpretation, 2,

SL1-SL20.

Kira, K., and L. A. Rendell, 1992, A practical approach to feature selection: In International

Conference on Machine Learning, 368–377.

Kittler, J., 1978, Feature set search algorithms: In Pattern Recognition and Signal Processing, 41–

60.

Kononenko, I., E. Šimec, and M. Robnik-Šikonja, 1997, Overcoming the myopia of inductive learning algorithms with RELIEFF: Applied Intelligence, 7, 39-55.

Liaw, A., and M. Wiener, 2002, Classification and regression by randomForest: R News, 2, 18-22.


Luo, Y., W. G. Higgs, and W. S. Kowalik, 1996, Edge detection and stratigraphic analysis using

3D seismic data: 66th Annual International Meeting, SEG, Expanded Abstracts, 324–327.

Marfurt, K. J., R. L. Kirlin, S. L. Farmer, and M. S. Bahorich, 1998, 3-D seismic attributes using

a semblance-based coherency algorithm: Geophysics, 63, 1150-1165.

Nelson, C. H., C. A. Escutia, J. E. Damuth, and D. C. Twichell, 2011, Interplay of mass-transport

and turbidite-system deposits in different active tectonic and passive continental margin settings:

External and local controlling factors: Sedimentary Geology, 96, 39-66.

Omosanya, K. O., and T. M. Alves, 2013, A 3-dimensional seismic method to assess the

provenance of Mass-Transport Deposits (MTDs) on salt-rich continental slopes (Espírito Santo

Basin, SE Brazil): Marine and Petroleum Geology, 44, 223-239.

Pearson, K., 1894, Contributions to the mathematical theory of evolution: Philosophical Transactions A, 185, 71-110.

Robnik-Šikonja, M., and I. Kononenko, 2003, Theoretical and empirical analysis of ReliefF and RReliefF: Machine Learning, 53, 23-69.

Peng, H., F. Long, and C. Ding, 2005, Feature selection based on mutual information criteria of

max-dependency, max-relevance, and min-redundancy: IEEE Transactions on Pattern Analysis

and Machine Intelligence, 27, 1226-1238.

Qi, J., T. Lin, T. Zhao, F. Li, and K. J. Marfurt, 2016, Semisupervised multiattribute seismic facies

analysis: Interpretation, 4, SB91-SB106.

Qi, J., B. Zhang, H. Zhou, and K. J. Marfurt, 2014, Attribute expression of fault-controlled karst—

Fort Worth Basin, Texas: A tutorial: Interpretation, 2, SF91-SF110.


Randen, T., S. Pedersen, and L. Sonneland, 2001, Automatic extraction of fault surfaces from

three-dimensional seismic data: 71st Annual International Meeting, SEG, Expanded Abstracts,

551–554.

Roden, R., T. Smith, and D. Sacrey, 2015, Geologic pattern recognition from seismic attributes:

Principal component analysis and self-organizing maps: Interpretation, 3, SAE59-SAE83.

Roy, A., 2013, Latent space classification of seismic facies: Ph.D. Dissertation, The University of

Oklahoma.

Roy, A., M. Matos, and K. J. Marfurt, 2010, Automatic seismic facies classification with Kohonen self-organizing maps: A tutorial: Geohorizons Journal of Society of Petroleum Geophysicists, 6-14.

Sánchez-Maroño, N., A. Alonso-Betanzos, and M. Tombilla-Sanromán, 2007, Filter methods for feature selection - a comparative study: in International Conference on Intelligent Data Engineering and Automated Learning, Springer, 178-187.

Shannon, C., and W. Weaver, 1949, The mathematical theory of communication: University of

Illinois Press.

Yu, L., and H. Liu, 2003, Feature selection for high-dimensional data: A fast correlation-based filter solution: Proceedings of the 20th International Conference on Machine Learning (ICML-03), 856-863.

Yu, L., and H. Liu, 2004, Efficient feature selection via analysis of relevance and redundancy:

Journal of Machine Learning Research, 5, 1205-1224.

Zhao, T., F. Li, and K. J. Marfurt, 2018, Seismic attribute selection for unsupervised seismic facies

analysis using user-guided data-adaptive weights: Geophysics, 83, O31-O44.


