DATA MINING: Introductory and Advanced Topics
Part I
Margaret H. Dunham
Department of Computer Science and Engineering
Southern Methodist University
Companion slides for the text by Dr. M. H. Dunham, Data Mining: Introductory and Advanced Topics, Prentice Hall, 2002.
© Prentice Hall
Data Mining Outline
PART I
– Introduction
– Related Concepts
– Data Mining Techniques
PART II
– Classification
– Clustering
– Association Rules
PART III
– Web Mining
– Spatial Mining
– Temporal Mining
Introduction Outline
– Define data mining
– Data mining vs. databases
– Basic data mining tasks
– Data mining development
– Data mining issues
Goal: Provide an overview of data mining.
Introduction
Data is growing at a phenomenal rate.
Users expect more sophisticated information.
How? UNCOVER HIDDEN INFORMATION: DATA MINING
Data Mining Definition
Finding hidden information in a database.
Fit data to a model.
Similar terms:
– Exploratory data analysis
– Data-driven discovery
– Deductive learning
Data Mining Algorithm
Objective: fit data to a model.
– Descriptive
– Predictive
Preference – technique to choose the best model.
Search – technique to search the data.
– "Query"
Database Processing vs. Data Mining Processing
Database processing:
– Query: well defined; SQL
– Data: operational data
– Output: precise; subset of database
Data mining processing:
– Query: poorly defined; no precise query language
– Data: not operational data
– Output: fuzzy; not a subset of database
Query Examples
Database:
– Find all customers who have purchased milk.
– Find all credit applicants with last name of Smith.
– Identify customers who have purchased more than $10,000 in the last month.
Data Mining:
– Find all items which are frequently purchased with milk. (association rules)
– Find all credit applicants who are poor credit risks. (classification)
– Identify customers with similar buying habits. (clustering)
Data Mining Models and Tasks
– Predictive: classification, regression, time series analysis, prediction
– Descriptive: clustering, summarization, association rules, sequence discovery
Basic Data Mining Tasks
Classification maps data into predefined groups or classes.
– Supervised learning
– Pattern recognition
– Prediction
Regression is used to map a data item to a real-valued prediction variable.
Clustering groups similar data together into clusters.
– Unsupervised learning
– Segmentation
– Partitioning
Basic Data Mining Tasks (cont'd)
Summarization maps data into subsets with associated simple descriptions.
– Characterization
– Generalization
Link analysis uncovers relationships among data.
– Affinity analysis
– Association rules
– Sequential analysis determines sequential patterns.
Ex: Time Series Analysis
Example: stock market.
– Predict future values
– Determine similar patterns over time
– Classify behavior
Data Mining vs. KDD
Knowledge Discovery in Databases (KDD): the process of finding useful information and patterns in data.
Data Mining: the use of algorithms to extract the information and patterns derived by the KDD process.
KDD Process
Selection: obtain data from various sources.
Preprocessing: cleanse data.
Transformation: convert to a common format; transform to a new format.
Data Mining: obtain desired results.
Interpretation/Evaluation: present results to the user in a meaningful manner.
Modified from [FPSS96C]
KDD Process Ex: Web Log
Selection:
– Select log data (dates and locations) to use
Preprocessing:
– Remove identifying URLs
– Remove error logs
Transformation:
– Sessionize (sort and group)
Data Mining:
– Identify and count patterns
– Construct data structure
Interpretation/Evaluation:
– Identify and display frequently accessed sequences
Potential user applications:
– Cache prediction
– Personalization
Data Mining Development
Contributing areas and techniques:
– Information retrieval: similarity measures, hierarchical clustering, IR systems, imprecise queries, textual data, Web search engines
– Statistics: Bayes theorem, regression analysis, EM algorithm, K-means clustering, time series analysis
– Machine learning: neural networks, decision tree algorithms
– Algorithms: algorithm design techniques, algorithm analysis, data structures
– Databases: relational data model, SQL, association rule algorithms, data warehousing, scalability techniques
KDD Issues
– Human interaction
– Overfitting
– Outliers
– Interpretation
– Visualization
– Large datasets
– High dimensionality
KDD Issues (cont'd)
– Multimedia data
– Missing data
– Irrelevant data
– Noisy data
– Changing data
– Integration
– Application
Social Implications of DM
– Privacy
– Profiling
– Unauthorized use
Data Mining Metrics
– Usefulness
– Return on investment (ROI)
– Accuracy
– Space/time
Database Perspective on Data Mining
– Scalability
– Real-world data
– Updates
– Ease of use
Visualization Techniques
– Graphical
– Geometric
– Icon-based
– Pixel-based
– Hierarchical
– Hybrid
Related Concepts Outline
– Database/OLTP systems
– Fuzzy sets and logic
– Information retrieval (Web search engines)
– Dimensional modeling
– Data warehousing
– OLAP/DSS
– Statistics
– Machine learning
– Pattern matching
Goal: Examine some areas which are related to data mining.
DB & OLTP Systems
Schema:
– (ID, Name, Address, Salary, JobNo)
Data model:
– ER
– Relational
Transaction
Query:
SELECT Name
FROM T
WHERE Salary > 100000
DM: Only imprecise queries.
Fuzzy Sets and Logic
Fuzzy set: set membership function is a real-valued function with output in the range [0,1].
– f(x): probability x is in F.
– 1 − f(x): probability x is not in F.
Ex:
– T = {x | x is a person and x is tall}
– Let f(x) be the probability that x is tall
– Here f is the membership function
DM: Prediction and classification are fuzzy.
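A membership function can be any real-valued function into [0,1]. A minimal sketch for the "tall" example; the height thresholds (150 cm and 190 cm) are illustrative assumptions, not from the text:

```python
def tall_membership(height_cm):
    """Hypothetical membership function f(x) for the fuzzy set T of tall people.

    Returns 0 below 150 cm, 1 above 190 cm, and ramps linearly in between.
    The thresholds are illustrative, not from the slides.
    """
    if height_cm <= 150:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 150) / 40.0
```

A person of 170 cm gets membership 0.5, so the probability of *not* being tall, 1 − f(x), is also 0.5.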
Fuzzy Sets
Classification/Prediction is Fuzzy
(Figure: simple vs. fuzzy accept/reject of a loan by amount.)
Information Retrieval
Information Retrieval (IR): retrieving desired information from textual data.
– Library science
– Digital libraries
– Web search engines
Traditionally keyword based.
Sample query: Find all documents about "data mining".
DM: Similarity measures; mine text/Web data.
Information Retrieval (cont'd)
Similarity: measure of how close a query is to a document.
Documents which are "close enough" are retrieved.
Metrics:
– Precision = |Relevant and Retrieved| / |Retrieved|
– Recall = |Relevant and Retrieved| / |Relevant|
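Both metrics follow directly from set counts; a small sketch (the document ids are made up):

```python
def precision_recall(relevant, retrieved):
    """Compute IR precision and recall from sets of document ids."""
    relevant, retrieved = set(relevant), set(retrieved)
    hits = relevant & retrieved  # relevant AND retrieved
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# 4 relevant documents; the system retrieved 3, of which 2 are relevant.
p, r = precision_recall(relevant={1, 2, 3, 4}, retrieved={3, 4, 5})
```

Here precision is 2/3 and recall is 2/4: retrieving fewer, safer documents raises precision at the cost of recall.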
IR Query Result Measures and Classification
(Figure: IR result measures compared with classification.)
Dimensional Modeling
View data in a hierarchical manner, more as business executives might.
Useful in decision support systems and mining.
Dimension: collection of logically related attributes; axis for modeling data.
Facts: data stored.
Ex: dimensions – products, locations, date; facts – quantity, unit price.
DM: May view data as dimensional.
Relational View of Data

ProdID  LocID       Date    Quantity  UnitPrice
123     Dallas      022900  5         25
123     Houston     020100  10        20
150     Dallas      031500  1         100
150     Dallas      031500  5         95
150     Fort Worth  021000  5         80
150     Chicago     012000  20        75
200     Seattle     030100  5         50
300     Rochester   021500  200       5
500     Bradenton   022000  15        20
500     Chicago     012000  10        25
Dimensional Modeling Queries
Roll up: more general dimension.
Drill down: more specific dimension.
Dimension (aggregation) hierarchy.
SQL uses aggregation.
Decision Support Systems (DSS): computer systems and tools to assist managers in making decisions and solving problems.
Cube View of Data
Aggregation Hierarchies
Star Schema
Data Warehousing
"Subject-oriented, integrated, time-variant, nonvolatile" – William Inmon
Operational data: data used in the day-to-day needs of the company.
Informational data: supports other functions such as planning and forecasting.
Data mining tools often access data warehouses rather than operational data.
DM: May access data in warehouse.
Operational vs. Informational

              Operational Data    Data Warehouse
Application   OLTP                OLAP
Use           Precise queries     Ad hoc
Temporal      Snapshot            Historical
Modification  Dynamic             Static
Orientation   Application         Business
Data          Operational values  Integrated
Size          Gigabits            Terabits
Level         Detailed            Summarized
Access        Often               Less often
Response      Few seconds         Minutes
Data schema   Relational          Star/snowflake
OLAP
Online Analytic Processing (OLAP): provides more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
Dimensional data; cube view.
Visualization of operations:
– Slice: examine sub-cube.
– Dice: rotate cube to look at another dimension.
– Roll up / drill down
DM: May use OLAP queries.
OLAP Operations
(Figure: single cell, multiple cells, slice, dice, roll up, drill down.)
Statistics
Simple descriptive models.
Statistical inference: generalizing a model created from a sample of the data to the entire dataset.
Exploratory data analysis:
– Data can actually drive the creation of the model.
– Opposite of the traditional statistical view.
Data mining targeted to the business user.
DM: Many data mining methods come from statistical techniques.
Machine Learning
Machine learning: area of AI that examines how to write programs that can learn.
Often used in classification and prediction.
Supervised learning: learns by example.
Unsupervised learning: learns without knowledge of correct answers.
Machine learning often deals with small static datasets.
DM: Uses many machine learning techniques.
Pattern Matching (Recognition)
Pattern matching: finds occurrences of a predefined pattern in the data.
Applications include speech recognition, information retrieval, and time series analysis.
DM: A type of classification.
DM vs. Related Topics

Area     Query     Data              Results  Output
DB/OLTP  Precise   Database          Precise  DB objects or aggregation
IR       Precise   Documents         Vague    Documents
OLAP     Analysis  Multidimensional  Precise  DB objects or aggregation
DM       Vague     Preprocessed      Vague    KDD objects
Data Mining Techniques Outline
Statistical:
– Point estimation
– Models based on summarization
– Bayes theorem
– Hypothesis testing
– Regression and correlation
Similarity measures
Decision trees
Neural networks
– Activation functions
Genetic algorithms
Goal: Provide an overview of basic data mining techniques.
Point Estimation
Point estimate: estimate a population parameter.
May be made by calculating the parameter for a sample.
May be used to predict the value for missing data.
Ex:
– R contains 100 employees
– 99 have salary information
– Mean salary of these is $50,000
– Use $50,000 as the value of the remaining employee's salary. Is this a good idea?
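The example amounts to imputing the missing value with the sample mean; a sketch with 99 hypothetical salaries whose mean is $50,000:

```python
# 99 observed salaries (hypothetical values chosen so the mean is $50,000).
observed = [40000] * 33 + [50000] * 33 + [60000] * 33

point_estimate = sum(observed) / len(observed)  # sample mean of the 99 values
missing_salary = point_estimate                 # impute for the 100th employee
```

Whether this is a good idea depends on how representative the sample is: a single outlier executive salary would shift the estimate for everyone.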
Estimation Error
Bias: difference between the expected value and the actual value.
Mean Squared Error (MSE): expected value of the squared difference between the estimate and the actual value:
MSE(θ̂) = E[(θ̂ − θ)²]
Why square?
Root Mean Squared Error (RMSE): the square root of the MSE.
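Both error measures can be computed directly from paired estimates and actual values; a minimal sketch:

```python
import math

def mse(estimates, actuals):
    """Mean squared error between paired estimates and actual values."""
    return sum((e - a) ** 2 for e, a in zip(estimates, actuals)) / len(actuals)

def rmse(estimates, actuals):
    """Root mean squared error: back in the units of the original data."""
    return math.sqrt(mse(estimates, actuals))
```

Squaring keeps positive and negative errors from canceling and penalizes large errors more heavily; RMSE then restores the original units.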
Jackknife Estimate
Jackknife estimate: an estimate of a parameter obtained by omitting one value from the set of observed values.
Ex: estimate of the mean for X = {x1, …, xn}: omitting xi gives μ̂(i) = (Σ j≠i xj) / (n − 1).
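For the mean, each jackknife estimate is simply the average of the remaining n − 1 values; a sketch:

```python
def jackknife_means(xs):
    """Leave-one-out (jackknife) estimates of the mean.

    The i-th estimate omits x_i and averages the remaining n-1 values.
    """
    n = len(xs)
    total = sum(xs)
    return [(total - x) / (n - 1) for x in xs]

estimates = jackknife_means([1, 4, 4])
```

Comparing the spread of these leave-one-out estimates against the full-sample mean is a standard way to gauge the estimator's bias and variance.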
Maximum Likelihood Estimate (MLE)
Obtain parameter estimates that maximize the probability that the sample data occurs for the specific model.
Joint probability for observing the sample data is found by multiplying the individual probabilities.
Likelihood function: L(Θ | x1, …, xn) = ∏ f(xi | Θ)
Maximize L.
MLE Example
Coin toss five times: {H,H,H,H,T}.
Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is (0.5)⁵ = 0.03125.
However, if the probability of an H is 0.8, then the likelihood is (0.8)⁴(0.2) = 0.08192.
MLE Example (cont'd)
General likelihood formula: L(p | x1, …, x5) = p^h (1 − p)^(5−h), where h is the number of heads.
Estimate for p is then 4/5 = 0.8.
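The coin example can be checked numerically: the likelihood function below reproduces both values, and the head fraction 4/5 maximizes it (function and variable names are illustrative):

```python
def likelihood(p, flips):
    """L(p) for a Bernoulli sample; flips is a string of 'H'/'T'."""
    heads = flips.count('H')
    tails = flips.count('T')
    return p ** heads * (1 - p) ** tails

flips = "HHHHT"
p_hat = flips.count('H') / len(flips)  # MLE: the observed fraction of heads
```

Evaluating the likelihood at p = 0.5 gives 0.03125 and at p = 0.8 gives 0.08192, matching the slide; any other p gives a smaller value than p̂ = 0.8.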
Expectation-Maximization (EM)
Solves estimation with incomplete data.
Obtain initial estimates for parameters.
Iteratively use estimates for the missing data and continue until convergence.
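A toy illustration of the loop, assuming the parameter to estimate is a mean and each missing value is imputed with the current estimate (this simplified scheme is an assumption for illustration, not the text's worked example):

```python
def em_mean(observed, n_missing, tol=1e-6):
    """Toy EM: estimate a mean when n_missing values are unobserved.

    E-step: fill each missing value with the current mean estimate.
    M-step: recompute the mean over observed + imputed values.
    Repeat until the estimate stops changing.
    """
    mean = sum(observed) / len(observed)  # initial estimate from observed data
    while True:
        total = sum(observed) + n_missing * mean            # E-step (impute)
        new_mean = total / (len(observed) + n_missing)      # M-step
        if abs(new_mean - mean) < tol:
            return new_mean
        mean = new_mean
```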
EM Example
EM Algorithm
Models Based on Summarization
Visualization: frequency distribution, mean, variance, median, mode, etc.
Box plot.
Scatter Diagram
Bayes Theorem
Posterior probability: P(h1 | xi)
Prior probability: P(h1)
Bayes theorem: P(hj | xi) = P(xi | hj) P(hj) / P(xi)
Assign probabilities of hypotheses given a data value.
Bayes Theorem Example
Credit authorizations (hypotheses): h1 = authorize purchase, h2 = authorize after further identification, h3 = do not authorize, h4 = do not authorize but contact police.
Assign twelve data values for all combinations of credit and income:

Income:     1    2    3    4
Excellent   x1   x2   x3   x4
Good        x5   x6   x7   x8
Bad         x9   x10  x11  x12

From training data: P(h1) = 60%; P(h2) = 20%; P(h3) = 10%; P(h4) = 10%.
Bayes Example (cont'd)
Training data:

ID  Income  Credit     Class  xi
1   4       Excellent  h1     x4
2   3       Good       h1     x7
3   2       Excellent  h1     x2
4   3       Good       h1     x7
5   4       Good       h1     x8
6   2       Excellent  h1     x2
7   3       Bad        h2     x11
8   2       Bad        h2     x10
9   3       Bad        h3     x11
10  1       Bad        h4     x9
Bayes Example (cont'd)
Calculate P(xi | hj) and P(xi).
Ex: P(x7 | h1) = 2/6; P(x4 | h1) = 1/6; P(x2 | h1) = 2/6; P(x8 | h1) = 1/6; P(xi | h1) = 0 for all other xi.
Predict the class for x4:
– Calculate P(hj | x4) for all hj.
– Place x4 in the class with the largest value.
– Ex: P(h1 | x4) = (P(x4 | h1) P(h1)) / P(x4) = (1/6)(0.6)/0.1 = 1.
– x4 is in class h1.
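The whole prediction step for x4, using the priors and conditional probabilities from the example above:

```python
def posterior(prior_h, p_x_given_h, p_x):
    """Bayes theorem: P(h|x) = P(x|h) * P(h) / P(x)."""
    return p_x_given_h * prior_h / p_x

# Values from the credit example: predict the class of x4.
priors = {'h1': 0.6, 'h2': 0.2, 'h3': 0.1, 'h4': 0.1}
p_x4_given = {'h1': 1 / 6, 'h2': 0.0, 'h3': 0.0, 'h4': 0.0}  # from training data
p_x4 = 0.1

posteriors = {h: posterior(priors[h], p_x4_given[h], p_x4) for h in priors}
best = max(posteriors, key=posteriors.get)  # class with the largest posterior
```

The posterior for h1 comes out to 1 and the other three to 0, so x4 is placed in class h1, as on the slide.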
Hypothesis Testing
Find a model to explain behavior by creating and then testing a hypothesis about the data.
Exact opposite of the usual DM approach.
H0 – null hypothesis; the hypothesis to be tested.
H1 – alternative hypothesis.
Chi Squared Statistic
χ² = Σ (O − E)² / E
O – observed value.
E – expected value based on the hypothesis.
Ex:
– O = {50, 93, 67, 78, 87}
– E = 75
– χ² = 15.55 and therefore significant.
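The statistic for the example works out directly from the formula:

```python
def chi_squared(observed, expected):
    """Chi-squared statistic: sum of (O - E)^2 / E over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Five observed counts against a uniform expectation of 75 each.
stat = chi_squared([50, 93, 67, 78, 87], [75] * 5)
```

The sum of squared deviations is 1166, giving 1166/75 ≈ 15.55, matching the slide.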
Regression
Predict future values based on past values.
Linear regression assumes a linear relationship exists:
y = c0 + c1 x1 + … + cn xn
Find the coefficient values that best fit the data.
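For a single predictor, the least-squares coefficients c0 and c1 have a closed form; a sketch on data that lies exactly on y = 1 + 2x:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = c0 + c1 * x for one predictor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x and y over variance of x.
    c1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    c0 = my - c1 * mx  # intercept passes through the means
    return c0, c1

c0, c1 = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

Since the sample points fall exactly on a line, the fit recovers c0 = 1 and c1 = 2; with noisy data the same formula gives the best-fit compromise.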
Linear Regression
Correlation
Examine the degree to which the values for two variables behave similarly.
Correlation coefficient r:
– 1 = perfect correlation
– −1 = perfect but opposite correlation
– 0 = no correlation
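A sketch of the Pearson correlation coefficient, which yields the reference values above on perfectly correlated and perfectly anti-correlated data:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient r between two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```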
Similarity Measures
Determine similarity between two objects.
Similarity characteristics:
Alternatively, a distance measure measures how unlike or dissimilar objects are.
Similarity Measures (cont'd)
Distance Measures
Measure dissimilarity between objects.
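Two commonly used distance measures, Euclidean and Manhattan, sketched for numeric tuples (shown as generic examples; the text's own formulas appear in a figure):

```python
import math

def euclidean(a, b):
    """Euclidean distance: straight-line distance between two tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Manhattan distance: sum of per-dimension absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

For the points (0,0) and (3,4) these give 5 and 7 respectively; the choice of measure changes which objects a clustering algorithm considers close.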
Twenty Questions Game
Decision Trees
Decision Tree (DT):
– Tree where the root and each internal node is labeled with a question.
– The arcs represent each possible answer to the associated question.
– Each leaf node represents a prediction of a solution to the problem.
Popular technique for classification; a leaf node indicates the class to which the corresponding tuple belongs.
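A tree with question-labeled internal nodes and class-labeled leaves is applied by following one arc per answer until a leaf is reached. The attributes and classes below are illustrative, not from the text:

```python
# A tiny hand-built tree: an internal node is (attribute_question, branches);
# a leaf is just the class name. Attribute values label the arcs.
tree = ('income', {                # question at the root
    'high': 'approve',             # leaf
    'low': ('credit', {            # nested question
        'good': 'approve',
        'bad': 'reject',
    }),
})

def classify(node, record):
    """Walk from the root, following the arc for each answer, to a leaf."""
    while isinstance(node, tuple):
        attribute, branches = node
        node = branches[record[attribute]]
    return node
```

Applying the tree is a simple top-down search, much like a binary search tree lookup, except a node may have more than two arcs.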
Decision Tree Example
Decision Trees
A Decision Tree Model is a computational model consisting of three parts:
– Decision tree
– Algorithm to create the tree
– Algorithm that applies the tree to data
Creation of the tree is the most difficult part.
Processing is basically a search similar to that in a binary search tree (although a DT may not be binary).
Decision Tree Algorithm
DT Advantages/Disadvantages
Advantages:
– Easy to understand.
– Easy to generate rules.
Disadvantages:
– May suffer from overfitting.
– Classifies by rectangular partitioning.
– Does not easily handle nonnumeric data.
– Can be quite large – pruning is necessary.
Neural Networks
Based on observed functioning of the human brain (Artificial Neural Networks, ANN).
Our view of neural networks is very simplistic: we view a neural network (NN) from a graphical viewpoint.
Alternatively, an NN may be viewed from the perspective of matrices.
Used in pattern recognition, speech recognition, computer vision, and classification.
Neural Networks
A Neural Network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, …, n} and arcs A = {<i,j> | 1 <= i, j <= n}, with the following restrictions:
– V is partitioned into a set of input nodes VI, hidden nodes VH, and output nodes VO.
– The vertices are also partitioned into layers.
– Any arc <i,j> must have node i in layer h−1 and node j in layer h.
– Arc <i,j> is labeled with a numeric value wij.
– Node i is labeled with a function fi.
Neural Network Example
NN Node
NN Activation Functions
Functions associated with nodes in the graph.
Output may be in the range [-1,1] or [0,1].
NN Activation Functions (cont'd)
NN Learning
Propagate input values through the graph.
Compare output to desired output.
Adjust weights in the graph accordingly.
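This propagate/compare/adjust loop can be sketched for a single sigmoid node learning logical OR; the delta-rule update, learning rate, and epoch count are illustrative assumptions, not the text's algorithm:

```python
import math

def sigmoid(s):
    """Sigmoid activation function: output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def train_node(samples, epochs=5000, rate=0.5):
    """Minimal sketch of propagate/compare/adjust for one node
    with two inputs and a bias weight."""
    w = [0.0, 0.0, 0.0]  # w1, w2, bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w[0] * x1 + w[1] * x2 + w[2])  # propagate
            err = target - out                            # compare
            grad = err * out * (1 - out)                  # sigmoid slope
            w[0] += rate * grad * x1                      # adjust weights
            w[1] += rate * grad * x2
            w[2] += rate * grad
    return w

# Learn logical OR from its four input/output pairs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_node(data)
```

After training, propagating each input through the node yields an output that rounds to the desired OR value.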
Neural Networks
A Neural Network Model is a computational model consisting of three parts:
– Neural network graph
– Learning algorithm that indicates how learning takes place.
– Recall techniques that determine how information is obtained from the network.
We will look at propagation as the recall technique.
NN Advantages
– Learning
– Can continue learning even after the training set has been applied.
– Easy parallelization
– Solves many problems
NN Disadvantages
– Difficult to understand
– May suffer from overfitting
– Structure of the graph must be determined a priori.
– Input values must be numeric.
– Verification difficult.
Genetic Algorithms
Optimization search type algorithms.
Creates an initial feasible solution and iteratively creates new "better" solutions.
Based on human evolution and survival of the fittest.
Must represent a solution as an individual.
Individual: string I = I1, I2, …, In where Ij is in a given alphabet A.
Each character Ij is called a gene.
Population: set of individuals.
Genetic Algorithms
A Genetic Algorithm (GA) is a computational model consisting of five parts:
– A starting set of individuals, P.
– Crossover: technique to combine two parents to create offspring.
– Mutation: randomly change an individual.
– Fitness: determine the best individuals.
– An algorithm which applies the crossover and mutation techniques to P iteratively, using the fitness function to determine the best individuals in P to keep.
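Crossover and mutation on bit-string individuals can be sketched as follows (single-point crossover and per-gene bit-flip mutation; the mutation rate is a parameter you would tune):

```python
import random

def crossover(p1, p2, point):
    """Single-point crossover: swap the tails of two equal-length strings."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(individual, rate, rng):
    """Flip each gene (bit) independently with probability `rate`."""
    return ''.join(
        ('1' if g == '0' else '0') if rng.random() < rate else g
        for g in individual
    )

# Parents 111111 and 000000 crossed after position 3 give 111000 and 000111.
children = crossover('111111', '000000', 3)
```

A full GA would repeatedly select fit parents, apply these operators, and keep the best individuals of each generation.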
Crossover Examples
a) Single crossover: parents 111111 and 000000 produce children 111000 and 000111.
b) Multiple crossover: parents 111111 and 000000 produce children 110011 and 001100.
Genetic Algorithm
GA Advantages/Disadvantages
Advantages:
– Easily parallelized
Disadvantages:
– Difficult to understand and explain to end users.
– Abstraction of the problem and method to represent individuals is quite difficult.
– Determining the fitness function is difficult.
– Determining how to perform crossover and mutation is difficult.