Lecture 1: Basic Concepts of Machine Learning
Cognitive Systems II - Machine Learning, SS 2005
Ute Schmid (lecture)
Emanuel Kitzelmann (practice)
Maximilian Roeglinger (tutor)
Applied Computer Science, Bamberg University
Lecture 1: Basic Concepts of Machine Learning – p. 1
Organization of the Course
Homepage: http://www.cogsys.wiai.uni-bamberg.de/teaching/
Prerequisites: CogSys I
Textbook: Tom Mitchell (1997). Machine Learning. McGraw Hill.
Practice: programming assignments in Java; marked exercise sheets and extra points for the exam
Outline of the Course
Basic Concepts of Machine Learning
Basic Approaches to Concept Learning
- Foundations of Concept Learning
- Decision Trees
- Perceptrons and Multilayer Perceptrons
- Human Concept Learning
Special Aspects of Concept Learning
- Inductive Logic Programming
- Genetic Algorithms
- Instance-based Learning
- Bayesian Learning
Outline of the Course
Learning Programs and Strategies
- Reinforcement Learning
- Inductive Function Synthesis
Further Topics and Applications in Machine Learning
Course Objectives
introduce the central approaches of machine learning
point out relations to human learning
define a class of problems that encompasses interesting forms of learning
explore algorithms that solve such problems
provide understanding of the fundamental structure of learning problems and processes
Some Quotes as Motivation
If an expert system – brilliantly designed, engineered and implemented – cannot learn not to repeat its mistakes, it is not as intelligent as a worm or a sea anemone or a kitten.
Oliver G. Selfridge, from The Gardens of Learning
Find a bug in a program, and fix it, and the program will work today. Show the program how to find and fix a bug, and the program will work forever.
Oliver G. Selfridge, in AI’s Greatest Trends and Controversies
If we are ever to make claims of creating an artificial intelligence, we must address issues in natural language, automated reasoning, and machine learning.
George F. Luger
What is Machine Learning?
Some definitions
Machine learning refers to a system capable of the autonomous acquisition and integration of knowledge. This capacity to learn from experience, analytical observation, and other means, results in a system that can continuously self-improve and thereby offer increased efficiency and effectiveness.
http://www.aaai.org/AITopics/html/machine.html
The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience.
Tom M. Mitchell, Machine Learning (1997)
What is Machine Learning?
machine learning is inherently a multidisciplinary field, drawing on artificial intelligence, probability, statistics, computational complexity theory, information theory, philosophy, psychology, neurobiology, ...
Knowledge-based vs. Learning Systems
Different approaches
- Concept vs. Classification Learning
- Symbolic vs. Statistical Learning
- Inductive vs. Analytical Learning
Knowledge-based vs. Learning Systems
Knowledge-based Systems: acquisition and modeling of common-sense knowledge and expert knowledge
⇒ limited to given knowledge base and rule set
⇒ Inference: deduction generates no new knowledge but makes implicitly given knowledge explicit
⇒ Top-Down: from rules to facts
Learning Systems: extraction of knowledge and rules from examples/experience
- Teach the system vs. program the system
- Learning as inductive process
⇒ Bottom-Up: from facts to rules
Knowledge-based vs. Learning Systems
⇒ A flexible and adaptive organism cannot rely on a fixed set of behavior rules but must learn (over its complete life-span)!
⇒ Motivation for Learning Systems
Knowledge Acquisition Bottleneck
[Figure: knowledge engineering and knowledge acquisition feed into an expert system (Feigenbaum, 1983)]
Break-through in computer chess with Deep Blue: evaluation function of chess grandmaster Joel Benjamin. Deep Blue cannot change the evaluation function by itself!
Experts are often not able to verbalize their special knowledge.
⇒ Indirect methods: extraction of knowledge from expert behavior in example situations (diagnosis of X-rays, controlling a chemical plant, ...)
Learning as Induction
Deduction:
  All humans are mortal. (Axiom)
  Socrates is human. (Fact)
  Conclusion: Socrates is mortal.

Induction:
  Socrates is human. (Background knowledge)
  Socrates is mortal. (Observation(s))
  Generalization: All humans are mortal.
Deduction: from general to specific ⇒ proven correctness
Induction: from specific to general ⇒ (unproven) knowledge gain
Induction generates hypotheses, not knowledge!
Epistemological problems
⇒ pragmatic solutions
Confirmation Theory: a hypothesis obtained by generalization gets supported by new observations (not proven!).
Grue Paradox:
All emeralds are grue.
Something is grue if it is green before a future time t and blue thereafter.
⇒ Not learnable from examples!
Inductive Learning Hypothesis
as shown above, inductive learning is not proven correct
nevertheless the learning task is to determine a hypothesis h ∈ H identical to the target concept c
(∀x ∈ X)[h(x) = c(x)]
only training examples D are available
inductive algorithms can at best guarantee that the output hypothesis h fits the target concept over the training data D
(∀x ∈ D)[h(x) = c(x)]
⇒ Inductive Learning Hypothesis: any hypothesis found to approximate the target concept well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples
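The gap between fitting D and fitting all of X can be illustrated with a small Java sketch (the concept "x is even" and the training set are invented for this example): the hypothesis below agrees with the target concept on every training example, yet fails on an unseen instance.

```java
public class InductiveGap {
    // target concept c: "x is even" (toy concept, not from the lecture)
    static boolean c(int x) { return x % 2 == 0; }

    // hypothesis h, induced from small examples only: overly specific
    static boolean h(int x) { return x % 2 == 0 && x < 10; }

    // check (forall x in D)[h(x) = c(x)] over the training set
    static boolean consistentOnD() {
        int[] d = {0, 2, 3, 4, 7};
        for (int x : d)
            if (h(x) != c(x)) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(consistentOnD());  // true: h fits D perfectly
        System.out.println(h(12) == c(12));   // false: h fails on an unseen x
    }
}
```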
Concept vs. Classification Learning
Concept learning:
Objects are clustered in concepts.
Extensional: (infinite) set of all exemplars
Intensional: finite characterization, e.g. T = {x | has-3/4-legs(x), has-top(x)}
Construction of a finite characterization from a subset of examples ("training set").
Classification learning:
Identification of relevant attributes and their interrelation, which characterize an object as member of a class.
f : X^n → K
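A classification function f : X^n → K can be sketched directly in Java; the two attributes and the class labels below are invented for illustration.

```java
public class ClassificationExample {
    // toy f : X^n -> K over two made-up attributes;
    // K = {"positive", "negative"} is a finite set of class labels
    static String f(double temperature, double pressure) {
        return (temperature > 30.0 && pressure > 0.8) ? "positive" : "negative";
    }

    public static void main(String[] args) {
        System.out.println(f(35.0, 0.9));  // positive
        System.out.println(f(20.0, 0.5));  // negative
    }
}
```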
Concept Learning / Examples
+ occurrence of Tse-Tse fly yes/no, given geographic and climatic attributes
+ risk of cardiac arrest yes/no, given medical data
+ credit-worthiness of customer yes/no, given personal and customer data
+ safe chemical process yes/no, given physical and chemical measurements
Generalization of pre-classified example data, application for prognosis
Symbolic vs. Statistical Learning
Symbolic Learning: underlying learning strategies such as rote learning, learning by being told, learning by analogy, learning from examples and learning from discovery
knowledge is represented in the form of symbolic descriptions of the learned concepts, e.g. production rules or concept hierarchies (examples: decision trees, inductive logic programming)
Statistical Learning: modeling of the behaviour; sometimes called statistical inference or connectionist learning (examples: artificial neural networks, Bayesian learning)
Designing a Learning System
learning system: a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
e.g. a checkers learning problem
T: playing checkers
P: percent of games won against opponents
E: playing practice games against itself
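The performance measure P of the checkers example can be made concrete as a tiny Java method (the method name and the zero-games convention are our own choices, not Mitchell's):

```java
public class CheckersPerformance {
    // P: percent of games won against opponents
    static double percentWon(int won, int played) {
        if (played == 0) return 0.0;  // no games yet: define P as 0 (our convention)
        return 100.0 * won / played;
    }

    public static void main(String[] args) {
        System.out.println(percentWon(3, 4));  // 75.0
    }
}
```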
Designing a Learning System
consider designing a program to learn to play checkers in order to illustrate some of the basic design issues and approaches to machine learning
1. Choosing the Training Experience
- direct or indirect feedback
- degree to which the learner controls the sequence of training examples
- representativity of the distribution of the training examples
⇒ significant impact on success or failure
Designing a Learning System
2. Choosing the Target Function
determine what type of knowledge will be learned
most obvious form is a function that chooses the best move for any given board state
e.g., ChooseMove : B → M
sometimes evaluation functions are easier to learn
e.g., V : B → ℝ
assigns higher values to better board states by recursively generating the successor states and evaluating them until the game is over
no efficient solution (nonoperational definition)
⇒ function approximation is sufficient (operational definition)
Designing a Learning System
3. Choosing a Representation for the Target Function
e.g. a large table, a set of rules or a linear function
e.g. V(b) = w0 + w1x1 + w2x2 + ... + wnxn
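The linear representation above evaluates in one pass over the weights; a minimal Java sketch, where the feature extraction from an actual board is omitted and the weight and feature values are purely illustrative:

```java
public class LinearV {
    // V(b) = w[0] + w[1]*x[0] + ... + w[n]*x[n-1], where x holds the
    // numeric board features and w[0] is the constant term
    static double v(double[] w, double[] x) {
        double value = w[0];
        for (int i = 0; i < x.length; i++)
            value += w[i + 1] * x[i];
        return value;
    }

    public static void main(String[] args) {
        double[] w = {1.0, 2.0, 3.0};  // illustrative weights
        double[] x = {10.0, 10.0};     // illustrative feature values
        System.out.println(v(w, x));   // 1 + 2*10 + 3*10 = 51.0
    }
}
```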
4. Choosing a Function Approximation Algorithm
5. Estimating Training Values
it is easy to assign a value to a final state
the evaluation of intermediate states is more difficult
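For steps 4 and 5, Mitchell's checkers design uses the LMS (least mean squares) weight-update rule: wi ← wi + η(Vtrain(b) − V̂(b))xi. A minimal sketch, with method names and the tiny numeric example being our own:

```java
public class LmsUpdate {
    // current estimate Vhat(b) = w[0] + sum_i w[i+1] * x[i]
    static double vHat(double[] w, double[] x) {
        double value = w[0];
        for (int i = 0; i < x.length; i++)
            value += w[i + 1] * x[i];
        return value;
    }

    // LMS rule: each weight moves in the direction of the training error
    static void update(double[] w, double[] x, double vTrain, double eta) {
        double error = vTrain - vHat(w, x);
        w[0] += eta * error;  // constant term uses x0 = 1
        for (int i = 0; i < x.length; i++)
            w[i + 1] += eta * error * x[i];
    }

    public static void main(String[] args) {
        double[] w = {0.0, 0.0};
        double[] x = {1.0};
        update(w, x, 1.0, 0.5);          // error = 1.0 before the update
        System.out.println(vHat(w, x));  // 1.0: prediction moved toward Vtrain
    }
}
```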
Designing a Learning System
[Figure: summary of design choices for the checkers learning program]
Determine Type of Training Experience: games against self, games against experts, table of correct moves, ...
Determine Target Function: Board → value, Board → move, ...
Determine Representation of Learned Function: linear function of six features, artificial neural network, polynomial, ...
Determine Learning Algorithm: gradient descent, linear programming, ...
⇒ Completed Design
Notation
Instance Space X: set of all possible examples over which the concept is defined (possibly attribute vectors)
Target Concept c : X → {0, 1}: concept or function to be learned
Training Example: a pair <x, c(x)> with x ∈ X
Training Set D: set of all available training examples
Hypothesis Space H: set of all possible hypotheses according to the hypothesis language
Hypothesis h ∈ H: boolean-valued function of the form X → {0, 1}
⇒ the goal is to find an h ∈ H such that (∀x ∈ X)[h(x) = c(x)]
Properties of Hypotheses
general-to-specific ordering
naturally occurring order over H
learning algorithms can be designed to search H exhaustively without explicitly enumerating each hypothesis h
hi is more general than or equal to hk (written hi ≥g hk) ⇔ (∀x ∈ X)[(hk(x) = 1) → (hi(x) = 1)]
hi is (strictly) more general than hk (written hi >g hk) ⇔ (hi ≥g hk) ∧ (hk ≱g hi)
≥g defines a partial ordering over the Hypothesis Space H
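Over a small finite instance space, ≥g can be checked directly; the array encoding below (index = instance, value = its classification) is our own simplification for illustration.

```java
public class GeneralityOrder {
    // hi >=g hk: every instance that hk classifies positive, hi does too
    static boolean moreGeneralOrEqual(boolean[] hi, boolean[] hk) {
        for (int x = 0; x < hk.length; x++)
            if (hk[x] && !hi[x]) return false;
        return true;
    }

    // hi >g hk: (hi >=g hk) and not (hk >=g hi)
    static boolean strictlyMoreGeneral(boolean[] hi, boolean[] hk) {
        return moreGeneralOrEqual(hi, hk) && !moreGeneralOrEqual(hk, hi);
    }

    public static void main(String[] args) {
        boolean[] h1 = {true, true, false};   // classifies two instances positive
        boolean[] h3 = {true, false, false};  // classifies a subset of them positive
        System.out.println(strictlyMoreGeneral(h1, h3));  // true
        System.out.println(strictlyMoreGeneral(h3, h1));  // false
    }
}
```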
Properties of Hypotheses - Example
h1 = Aldo loves playing Tennis if the sky is sunnyh2 = Aldo loves playing Tennis if the water is warmh3 = Aldo loves playing Tennis if the sky is sunny and the water is warm
⇒ h1 >g h3, h2 >g h3, h2 ≯g h1
Properties of Hypotheses
consistency
a hypothesis h is consistent with a set of training examples D iff h(x) = c(x) for each example <x, c(x)> in D
Consistent(h, D) ≡ (∀<x, c(x)> ∈ D)[h(x) = c(x)]
that is, every example in D is classified correctly by the hypothesis
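The Consistent(h, D) predicate translates almost literally into Java; encoding D as a Map from instances x to target values c(x) is an assumption of this sketch.

```java
import java.util.Map;
import java.util.function.Predicate;

public class Consistency {
    // Consistent(h, D): h(x) = c(x) for every <x, c(x)> in D
    static <X> boolean consistent(Predicate<X> h, Map<X, Boolean> d) {
        for (Map.Entry<X, Boolean> e : d.entrySet())
            if (h.test(e.getKey()) != e.getValue()) return false;
        return true;
    }

    public static void main(String[] args) {
        Predicate<Integer> h = x -> x > 0;               // toy hypothesis
        Map<Integer, Boolean> d = Map.of(1, true, 2, true, -1, false);
        System.out.println(consistent(h, d));            // true
    }
}
```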
Properties of Hypotheses - Example
h1 is consistent with D