Department of Computer Science, University of Waikato, New Zealand
Geoff Holmes
• WEKA project and team
• Data mining process
• Data format
• Preprocessing
• Classification
• Regression
• Clustering
• Associations
• Attribute selection
• Visualization
• Performing experiments
• New directions
• Conclusion
Data Mining using WEKA
2
Waikato Environment for Knowledge Analysis
Copyright: Martin Kramer ([email protected])
• PGSF/NERF project has been running since 1994.
• New Java software development from 1998 on.
• Project goals:
  • Develop a state-of-the-art workbench of data mining tools
  • Explore fielded applications
  • Develop new fundamental methods
3
WEKA TEAM
Geoff Holmes, Ian Witten, Bernhard Pfahringer, Eibe Frank, Mark Hall,
Yong Wang, Remco Bouckaert, Peter Reutemann, Gabi Schmidberger,
Dale Fletcher, Tony Smith, Mike Mayo and Richard Kirkby
• Members of the editorial board of MLJ; programme committees for ICML, ECML, KDD, …
Authors of a widely adopted data mining textbook.
4
Data mining process
Select → Preprocess → Transform → Mine → Analyze & Assimilate

Selected data → Preprocessed data → Transformed data → Extracted information → Assimilated knowledge
5
Data mining software
• Commercial packages (cost ≈ X × 10⁶ dollars)
  • IBM Intelligent Miner
  • SAS Enterprise Miner
  • Clementine
• WEKA (free = GPL licence!)
  • Java ⇒ multi-platform
  • Open source – means you get the source code
6
Data format
• Rectangular table format (flat file) is very common
• Most techniques exist to deal with table format
• Row = instance = individual = data point = case = record
• Column = attribute = field = variable = characteristic = dimension
Outlook Temperature Humidity Windy Play
Sunny Hot High False No
Sunny Hot High True No
Overcast Hot High False Yes
Rainy Mild Normal False Yes
… … … … …
7
Data complications
• Volume of data – sampling; essential attributes
• Missing data
• Inaccurate data
• Data filtering
• Data aggregation
8
WEKA’s ARFF format

%
% ARFF file for weather data with some numeric features
%
@relation weather

@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute humidity numeric
@attribute windy {true, false}
@attribute play? {yes, no}

@data
sunny, 85, 85, false, no
sunny, 80, 90, true, no
overcast, 83, 86, false, yes
...
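To make the format concrete, here is a minimal sketch of an ARFF reader in Python (illustrative only – WEKA's own loader is written in Java and handles far more of the format; `parse_arff` is a hypothetical name):

```python
def parse_arff(text):
    """Parse a tiny subset of ARFF: @relation, @attribute, @data."""
    relation, attributes, data = None, [], []
    in_data = False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('%'):      # skip blanks and comments
            continue
        lower = line.lower()
        if lower.startswith('@relation'):
            relation = line.split(None, 1)[1]
        elif lower.startswith('@attribute'):
            _, name, spec = line.split(None, 2)   # spec is "numeric" or "{a, b, c}"
            attributes.append((name, spec))
        elif lower.startswith('@data'):
            in_data = True
        elif in_data:
            data.append([v.strip() for v in line.split(',')])
    return relation, attributes, data
```

Feeding it the weather file above yields the relation name, the attribute declarations, and the data rows as lists of strings.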
9
Attribute types
• ARFF supports numeric and nominal attributes
• Interpretation depends on the learning scheme
• Numeric attributes are interpreted as
  – ordinal scales if less-than and greater-than are used
  – ratio scales if distance calculations are performed
    (normalization/standardization may be required)
• Instance-based schemes define distance between nominal values
  (0 if values are equal, 1 otherwise)
• Integers: nominal, ordinal, or ratio scale?
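The two conventions above – 0/1 distance for nominal values and range-normalization for numeric ones – can be combined into a single mixed-attribute distance. A Python sketch (illustrative; the function name and the `numeric_ranges` parameter are inventions for this example, not WEKA API):

```python
def mixed_distance(a, b, numeric_ranges):
    """Euclidean-style distance over mixed attributes.
    numeric_ranges[i] = (min, max) for a numeric attribute i, or None if nominal."""
    total = 0.0
    for i, (x, y) in enumerate(zip(a, b)):
        rng = numeric_ranges[i]
        if rng is None:                  # nominal: 0 if equal, 1 otherwise
            total += 0.0 if x == y else 1.0
        else:                            # numeric: normalize the difference to [0, 1]
            lo, hi = rng
            total += ((x - y) / (hi - lo)) ** 2
    return total ** 0.5
```

For instance, two days that differ only in temperature (85 vs 80, with an observed range of 60–100) are at distance 5/40 = 0.125, while two days with different outlooks are at least distance 1 apart.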
10
Missing values
• Frequently indicated by out-of-range entries
• Types: unknown, unrecorded, irrelevant
• Reasons: malfunctioning equipment, changes in experimental design,
  collation of different datasets, measurement not possible
• A missing value may have significance in itself
  (e.g. a missing test in a medical examination)
• Most schemes assume that this is not the case
• “Missing” may need to be coded as an additional value
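Coding “missing” as an additional value is a one-line transformation; a sketch (the function name and the `?` marker are illustrative assumptions – ARFF happens to use `?` for missing values):

```python
def code_missing(values, missing_token="?", code="missing"):
    """Recode the missing marker as an explicit extra nominal value,
    so that 'missingness' itself becomes visible to the learner."""
    return [code if v == missing_token else v for v in values]
```

After this, a scheme treats “missing” as just another category rather than ignoring or imputing it.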
11
Getting to know the data
• Simple visualization tools are very useful for identifying problems
• Nominal attributes: histograms (distribution consistent with background knowledge?)
• Numeric attributes: graphs (any obvious outliers?)
• 2-D and 3-D visualizations show dependencies
• Domain experts need to be consulted
• Too much data to inspect? Take a sample!
12
Learning and using a model
• Learning
  • Learning algorithm takes instances of a concept as input
  • Produces a structural description (model) as output

  Input (concept to learn) → Learning algorithm → Model

• Prediction
  • Model takes a new instance as input
  • Outputs a prediction

  New instance → Model → Prediction
13
Structural descriptions (models)
• Some models are better than others
  • Accuracy
  • Understandability
• Models range from “easy to understand” to virtually incomprehensible:
  • Decision trees (easier)
  • Rule induction
  • Regression models
  • Neural networks (harder)
14
Pre-processing the data
• Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary
• Data can also be read from a URL or from SQL databases using JDBC
• Pre-processing tools in WEKA are called “filters”
• WEKA contains filters for:
  discretization, normalization, resampling, attribute selection,
  attribute combination, …
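As a flavour of what a discretization filter does, here is an equal-width binning sketch in Python (illustrative only; WEKA's actual filter is the Java class `weka.filters.unsupervised.attribute.Discretize`, and this standalone function is not its API):

```python
def discretize_equal_width(values, bins=3):
    """Equal-width discretization: map each numeric value to a bin index 0..bins-1."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0        # guard against a constant attribute
    out = []
    for v in values:
        idx = int((v - lo) / width)
        out.append(min(idx, bins - 1))     # clamp the maximum into the top bin
    return out
```

Supervised variants instead choose cut points that keep class distributions pure, but the input/output contract is the same: numeric in, nominal bin labels out.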
15
Explorer: pre-processing
16
Building classification models
• “Classifiers” in WEKA are models for predicting nominal or numeric quantities
• Implemented schemes include:
  decision trees and lists, instance-based classifiers, support vector machines,
  multi-layer perceptrons, logistic regression, Bayes’ nets, …
• “Meta”-classifiers include:
  bagging, boosting, stacking, error-correcting output codes, data cleansing, …
17
Explorer: classification
18
Explorer: classification/regression
26
Clustering data
• WEKA contains “clusterers” for finding groups of instances in a dataset
• Implemented schemes are: k-means, EM, Cobweb
• Coming soon: x-means
• Clusters can be visualized and compared to “true” clusters (if given)
• Evaluation is based on log-likelihood if the clustering scheme produces
  a probability distribution
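The simplest of the listed schemes, k-means, fits in a few lines. A plain Python sketch (illustrative; WEKA's implementation is the Java class `weka.clusterers.SimpleKMeans`, and this is not its API):

```python
import random

def kmeans(points, k, iters=10, seed=1):
    """Plain k-means: assign each point to the nearest centre,
    then move each centre to the mean of its assigned points."""
    rng = random.Random(seed)
    centres = [list(p) for p in rng.sample(points, k)]   # random initial centres
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centres[c])))
            clusters[nearest].append(p)
        for c in range(k):
            if clusters[c]:                              # keep empty centres in place
                centres[c] = [sum(xs) / len(clusters[c])
                              for xs in zip(*clusters[c])]
    return centres, clusters
```

EM generalizes this by replacing the hard nearest-centre assignment with probabilistic cluster memberships, which is what makes log-likelihood evaluation possible.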
28
Explorer: clustering
29
Finding associations
• WEKA contains an implementation of the Apriori algorithm for learning
  association rules
• Works only with discrete data
• Allows you to identify statistical dependencies between groups of attributes:
  milk, butter ⇒ bread, eggs (with confidence 0.9 and support 2000)
• Apriori can compute all rules that have a given minimum support and
  exceed a given confidence
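Support and confidence, the two thresholds Apriori works with, are easy to state in code. A Python sketch over transactions represented as sets (illustrative; function names are inventions for this example, not WEKA's Java API):

```python
def support(itemset, transactions):
    """Count the transactions that contain every item in the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def confidence(lhs, rhs, transactions):
    """Confidence of the rule lhs -> rhs: support(lhs ∪ rhs) / support(lhs),
    i.e. how often the rule fires correctly when its left-hand side holds."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)
```

Apriori's insight is that any superset of an infrequent itemset is also infrequent, so the search for high-support itemsets can be pruned aggressively.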
33
Explorer: association rules
34
Attribute selection
• A separate panel allows you to investigate which (subsets of) attributes
  are the most predictive ones
• Attribute selection methods contain two parts:
  • A search method: best-first, forward selection, random, exhaustive,
    race search, ranking
  • An evaluation method: correlation-based, wrapper, information gain,
    chi-squared, PCA, …
• Very flexible: WEKA allows (almost) arbitrary combinations of these two
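One of the listed evaluation methods, information gain, scores an attribute by how much knowing its value reduces class entropy. A Python sketch (illustrative; WEKA's version is the Java class `weka.attributeSelection.InfoGainAttributeEval`, and these standalone functions are not its API):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(attr_values, labels):
    """Class entropy minus the expected entropy after splitting on the attribute."""
    n = len(labels)
    groups = {}
    for v, y in zip(attr_values, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder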
35
Explorer: attribute selection
36
Data visualization
• Visualization is very useful in practice: e.g. it helps to determine
  the difficulty of the learning problem
• WEKA can visualize single attributes (1-d) and pairs of attributes (2-d)
• To do: rotating 3-d visualizations (Xgobi-style)
• Color-coded class values
• “Jitter” option to deal with nominal attributes (and to detect “hidden”
  data points)
• “Zoom-in” function
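The “jitter” option is conceptually just small random noise added to plotted coordinates, so that many instances sharing the same nominal value don't collapse onto one point. A sketch (illustrative only; the function name and parameters are assumptions, not WEKA's implementation):

```python
import random

def jitter(values, amount=0.1, seed=7):
    """Add small uniform noise to plotted coordinates so that
    coincident data points become individually visible."""
    rng = random.Random(seed)
    return [v + rng.uniform(-amount, amount) for v in values]
```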
37
Explorer: data visualization
38
Performing experiments
• The Experimenter makes it easy to compare the performance of different
  learning schemes applied to the same data
• Designed for nominal and numeric class problems
• Results can be written to a file or database
• Evaluation options: cross-validation, learning curve, hold-out
• Can also iterate over different parameter settings
• Significance testing built in!
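The built-in significance testing rests on comparing matched per-fold results of two schemes. A sketch of the standard paired t statistic in Python (illustrative; the Experimenter actually offers corrected variants of this test, and `paired_t` is a hypothetical name):

```python
from math import sqrt

def paired_t(scores_a, scores_b):
    """Paired t statistic over matched per-fold scores of two schemes.
    Large |t| suggests the difference in means is not due to chance."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / sqrt(var / n)
```

Pairing by fold matters: both schemes see exactly the same train/test splits, so fold-to-fold variation cancels out of the differences.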
39
Experimenter: setting it up
40
Experimenter: running it
41
Experimenter: analysis
42
New Directions for Weka
• New user interface based on work flows
• New data mining techniques:
  • PACE regression
  • Bayesian networks
  • Logistic option trees
• New frameworks for very large data sources (MOA)
• New applications in the agricultural sector:
  • Matchmaker for RPBC Ltd
  • Pest control for kiwifruit management
  • Crop forecasting
43
Next Generation Weka: Knowledge flow GUI
44
Conclusions
• Weka is a comprehensive suite of Java programs united under a common
  interface, permitting exploration of and experimentation on datasets
  using state-of-the-art techniques
• The software is available under the GPL from
  http://www.cs.waikato.ac.nz/~ml
• Weka provides an ideal environment for ongoing research in data mining