CHAPTER 1:
INTRODUCTION
Big Data
Widespread use of personal computers and wireless communication has led to “big data”
We are both producers and consumers of data
Producer: when we buy a product, rent a movie, write a blog, or post on social media
Consumer: we want to have products and services specialized to us, expect our needs to be understood and interests to be predicted
Data is not random, it has structure, e.g., customer behavior
We need “big theory” to extract that structure from data for
(a) Understanding the process
(b) Making predictions for the future
Why “Learn”?
Machine learning is programming computers to optimize a performance criterion using example data or past experience.
There is no need to “learn” to calculate payroll. Learning is used when:
  Human expertise does not exist (navigating on Mars)
  Humans are unable to explain their expertise (speech recognition)
  The solution changes in time (routing on a computer network)
  The solution needs to be adapted to particular cases (user biometrics)
  There seems to be a hidden pattern in the data but we cannot pinpoint it
What We Talk About When We Talk About “Learning”?
Learning general models from data of particular examples
Data is cheap and abundant (data warehouses, data marts); knowledge is expensive and scarce.
Example in retail: Customer transactions to consumer behavior:
People who bought “Blink” also bought “Outliers” (www.amazon.com)
Build a model that is a good and useful approximation to the data.
Allows us to predict future behavior, find hidden patterns etc.
Data Mining
Application of machine learning to large databases is called data mining
In data mining, a large volume of data is processed to construct a simple model with valuable use, e.g., high predictive accuracy
There are abundant application areas:
Retail: Market basket analysis, customer relationship management (CRM)
Finance: Credit scoring, fraud detection
Manufacturing: Control, robotics, troubleshooting
Medicine: Medical diagnosis
Telecommunications: Spam filters, intrusion detection
Bioinformatics: Motifs, alignment
Web mining: Search engines
...
What is Machine Learning?
Machine learning is not just a database problem; it is also a part of Artificial Intelligence (AI)
To be “intelligent”, a system must have the ability to learn
Machine learning is programming computers to optimize a performance criterion using example data or past experience.
This is often done by building a model (say, a predictive model based on past data for performing prediction on future data)
Such models are described by a number of “parameters”
Learning involves execution of a computer program to optimize the values of these parameters, which in effect optimizes the performance criterion
Thus machine learning is essentially solving an “optimization” problem efficiently using computers
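This optimization view can be made concrete with a minimal sketch (the data, the one-parameter model y = w*x, and the learning rate below are all hypothetical, chosen for illustration): gradient descent repeatedly adjusts the parameter to reduce a squared-error performance criterion.

```python
# Minimal sketch: learning as optimization of a model parameter.
# Hypothetical data where the output is roughly 3 * input; we fit
# y = w * x by gradient descent on the squared-error criterion.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.9, 9.2, 11.8]

w = 0.0    # the single model parameter
lr = 0.01  # learning rate (hand-picked for this tiny example)

for _ in range(1000):
    # gradient of sum((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad

print(round(w, 2))  # converges close to 3.0
```

The loop is the “execution of a computer program to optimize the values of these parameters”; real models simply have many more parameters and more elaborate criteria.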
What is Machine Learning?
Role of Statistics/Probability: Machine learning uses the theory of statistics in building mathematical models to solve an optimization problem
Such a model is used to make inference from a sample
Role of Computer Science: a twofold role
First, we need “efficient algorithms” to solve the optimization problem (aka, training a model) as well as to store and process possibly massive amount of data
Second, once a model is learned, its representation and algorithmic solution for inference needs to be efficient as well
Examples of Machine Learning Applications
Association
Supervised Learning
  Classification
  Regression
Unsupervised Learning
  Clustering
  Dimension reduction
Reinforcement Learning
Learning Associations
Market basket analysis:
Learning the conditional probability P(Y | X, D) that somebody who buys X also buys Y, where X and Y are products/services
D is the set of customer attributes, e.g., gender, age, marital status, etc.
Example: P(chips | beer) = 0.7
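Such a conditional probability can be estimated from transaction data by simple counting. A minimal sketch (the basket data below are made up for illustration, and the customer attributes D are omitted):

```python
# Sketch: estimating the association probability P(Y | X) from
# market-basket data by counting co-occurrences.

transactions = [
    {"beer", "chips"},
    {"beer", "chips", "diapers"},
    {"beer"},
    {"chips", "soda"},
    {"beer", "chips"},
]

def cond_prob(y, x, baskets):
    """P(y in basket | x in basket), estimated by counting."""
    with_x = [b for b in baskets if x in b]
    if not with_x:
        return 0.0
    return sum(1 for b in with_x if y in b) / len(with_x)

print(cond_prob("chips", "beer", transactions))  # 0.75 on this data
```

With customer attributes D, one would condition the counts on those attributes as well, estimating P(Y | X, D) per customer segment.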
Classification
Classification corresponds to learning a function (mapping) from input to output, where output is restricted to a finite number of classes (typically two but can have more classes)
Such a learned function is called a classifier
Input corresponds to any object represented by a set of attributes/features
Such an input-output function can be modeled in different ways and is typically expressed by a set of model parameters
“Training” of a classifier corresponds to learning optimum values of these model parameters
Requires past data (input-output pairs) to do that
“Testing” of classifier corresponds to predicting the class label (output) of an unseen data point given only the input (its features/attributes)
Example: Credit scoring
Two classes (high-risk, low-risk)
Each data point is a customer having two features/attributes: income and savings
Classification corresponds to differentiating between low-risk and high-risk customers from their income and savings
Discriminant is a function that separates different classes
Discriminant: IF income > θ1 AND savings > θ2
THEN low-risk ELSE high-risk
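The rule-style discriminant above translates directly into code. A sketch, where the threshold values for θ1 and θ2 are hypothetical; in practice they would be the model parameters learned from past customer data:

```python
# Sketch of the slide's discriminant:
#   IF income > theta1 AND savings > theta2 THEN low-risk ELSE high-risk
# The thresholds below are hypothetical; training would learn them.

THETA1 = 30_000  # income threshold (theta1)
THETA2 = 10_000  # savings threshold (theta2)

def classify(income, savings):
    """Return the predicted class for one customer."""
    if income > THETA1 and savings > THETA2:
        return "low-risk"
    return "high-risk"

print(classify(50_000, 20_000))  # low-risk
print(classify(50_000, 5_000))   # high-risk
```

“Testing” the classifier is exactly a call to `classify` on an unseen customer, using only the input features.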
Classification
What are the model parameters in this example?
In this example, how is the number of features/attributes related to the number of model parameters?
Food for thought:
What happens when you have a large number of features? (We have seen that in the bag-of-words format we can have thousands of features.)
Classification: Applications
Also known as pattern recognition
Face recognition: Pose, lighting, occlusion (glasses, beard), make-up, hair style
Character recognition: Different handwriting styles.
Speech recognition: Temporal dependency.
Medical diagnosis: From symptoms to illnesses
Biometrics: Recognition/authentication using physical and/or behavioral characteristics: Face, iris, signature, etc
Outlier/novelty detection
Face Recognition
[Figure: training examples and test images of a person, from the ORL dataset, AT&T Laboratories, Cambridge UK]
Regression
Regression corresponds to learning a function (mapping) from input to output, where output is any real number
Such a learned function is called a regressor
Input corresponds to any object represented by a set of attributes/features
Such an input-output function can be modeled in different ways and is typically expressed by a set of model parameters
“Training” of a regression model corresponds to learning optimum values of these model parameters
Requires past data (input-output pairs) to do that
“Testing” of trained regression model corresponds to predicting the scalar output value (output) of an unseen data point given only the input (its features/attributes)
Example: Price of a car
Input is car attributes
Output is car price
Regression model: y = g(x | θ)
Model parameters are θ
In this example, the model parameters control the orientation of the fitted regression line
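A linear instance of y = g(x | θ) can be fitted in closed form by least squares. A sketch, where the (attribute, price) pairs are made-up illustration data and the attribute name is hypothetical:

```python
# Sketch: fitting the linear regression model y = g(x | theta) = w*x + w0
# by least squares. Hypothetical (engine_power, price) pairs.

xs = [60.0, 80.0, 100.0, 120.0]  # car attribute (input)
ys = [5.0, 7.0, 8.5, 11.0]       # car price (output)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# closed-form least-squares estimates of the two parameters theta = (w, w0)
w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
w0 = mean_y - w * mean_x

def g(x):
    """Predict the price of an unseen car from its attribute."""
    return w * x + w0

print(w, w0, g(90.0))
```

Here θ = (w, w0): the slope w and intercept w0 are exactly the parameters that fix the orientation of the regression line.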
Regression Applications
Navigating a car: Angle of the steering
Kinematics of a robot arm: given a target position (x, y), predict the joint angles
  α1 = g1(x, y)
  α2 = g2(x, y)
Response surface design
Supervised Learning: Uses
Prediction of future cases: Use the rule to predict the output for future inputs
Knowledge extraction: The rule is easy to understand
Compression: The rule is simpler than the data it explains
Outlier detection: Exceptions that are not covered by the rule, e.g., fraud
Unsupervised Learning
Learning “what normally happens”
No output
Clustering: Grouping similar instances
Example applications
Customer segmentation in CRM
Image compression: Color quantization
Bioinformatics: Learning motifs
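Clustering as “grouping similar instances” can be sketched with a tiny one-dimensional k-means (k = 2); the data points and initial centers below are hypothetical:

```python
# Sketch: clustering with 1-D k-means, k = 2. Note there is no output
# label: the algorithm only groups similar points together.

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [0.0, 10.0]  # hypothetical initial cluster centers

for _ in range(10):
    # assignment step: each point goes to its nearest center
    groups = [[], []]
    for p in points:
        i = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        groups[i].append(p)
    # update step: each center moves to the mean of its group
    centers = [sum(g) / len(g) if g else c
               for g, c in zip(groups, centers)]

print([round(c, 2) for c in centers])  # roughly [1.0, 8.07]
```

In customer segmentation, each point would be a customer feature vector and each resulting group a segment.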
Reinforcement Learning
Learning a policy: A sequence of outputs
No supervised output but delayed reward
Credit assignment problem
Game playing
Robot in a maze
Multiple agents, partial observability, ...
Reinforcement learning is a separate subarea of machine learning; unfortunately, we will not cover it in this class.
Resources: Datasets
UCI Repository: http://www.ics.uci.edu/~mlearn/MLRepository.html
Statlib: http://lib.stat.cmu.edu/
Resources: Journals
Journal of Machine Learning Research www.jmlr.org
Machine Learning
Neural Computation
Neural Networks
IEEE Trans on Neural Networks and Learning Systems
IEEE Trans on Pattern Analysis and Machine Intelligence
Journals on Statistics/Data Mining/Signal
Processing/Natural Language
Processing/Bioinformatics/...
Resources: Conferences
International Conference on Machine Learning (ICML)
European Conference on Machine Learning (ECML)
Neural Information Processing Systems (NIPS)
Uncertainty in Artificial Intelligence (UAI)
Computational Learning Theory (COLT)
International Conference on Artificial Neural Networks (ICANN)
International Conference on AI & Statistics (AISTATS)
International Conference on Pattern Recognition (ICPR)
...