Machine Learning
CUNY Graduate Center
Lecture 1: Introduction
Today

• Welcome
• Overview of Machine Learning
• Class Mechanics
• Syllabus Review
• Basic Classification Algorithm
My research and background

• Speech
  – Analysis of Intonation
  – Segmentation
• Natural Language Processing
  – Computational Linguistics
• Evaluation Measures
• All of this research relies heavily on Machine Learning
You

• Why are you taking this class?
• For Ph.D. students:
  – What is your dissertation on?
  – Do you expect it to require Machine Learning?
• What is your background and comfort with
  – Calculus
  – Linear Algebra
  – Probability and Statistics
• What is your programming language of preference?
  – C++, Java, or Python are preferred
Machine Learning

• Automatically identifying patterns in data
• Automatically making decisions based on data
• Hypothesis:

  Data → Learning Algorithm → Behavior
                  ≥
  Data → Programmer → Behavior
Machine Learning in Computer Science

[Figure: Machine Learning at the center, connected to its application areas:]
• Biomedical/Chemical Informatics
• Financial Modeling
• Natural Language Processing
• Speech/Audio Processing
• Planning
• Locomotion
• Vision/Image Processing
• Robotics
• Human Computer Interaction
• Analytics
Major Tasks

• Regression
  – Predict a numerical value from “other information”
• Classification
  – Predict a categorical value
• Clustering
  – Identify groups of similar entities
• Evaluation
Feature Representations

• How do we view data?

  Entity in the World (web page, user behavior, speech or audio data, vision, wine, people, etc.)
      → Feature Extraction → Feature Representation → Machine Learning Algorithm

  Our focus is on the feature representation and the learning algorithm.
Feature Representations

Height  Weight  Eye Color  Gender
66      170     Blue       Male
73      210     Brown      Male
72      165     Green      Male
70      180     Blue       Male
74      185     Brown      Male
68      155     Green      Male
65      150     Blue       Female
64      120     Brown      Female
63      125     Green      Female
67      140     Blue       Female
68      145     Brown      Female
66      130     Green      Female
Classification

• Identify which of N classes a data point, x, belongs to.
• x is a column vector of features.
Target Values

• In supervised approaches, in addition to a data point, x, we will also have access to a target value, t.

Goal of Classification: identify a function y such that y(x) = t.
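As a concrete illustration, one row of the lecture's height/weight/eye-color table can be packed into a feature vector x with a target value t. The one-hot encoding of the nominal eye-color feature and the helper name `to_feature_vector` are illustrative choices, not something the slides prescribe:

```python
# One data point from the height/weight/eye-color table:
# x is a vector of features; t is the target value.
EYE_COLORS = ["Blue", "Brown", "Green"]

def to_feature_vector(height, weight, eye_color):
    # Nominal eye color becomes three 0/1 features (one-hot encoding),
    # so every feature in x is numeric.
    one_hot = [1.0 if c == eye_color else 0.0 for c in EYE_COLORS]
    return [float(height), float(weight)] + one_hot

x = to_feature_vector(66, 170, "Blue")
t = "Male"
```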
Feature Representations

Height  Weight  Eye Color  Gender
66      170     Blue       Male
73      210     Brown      Male
72      165     Green      Male
70      180     Blue       Male
74      185     Brown      Male
68      155     Green      Male
65      150     Blue       Female
64      120     Brown      Female
63      125     Green      Female
67      140     Blue       Female
68      145     Brown      Female
66      130     Green      Female
Graphical Example of Classification

[Figure series: scatter plots of two labeled classes; an unlabeled query point “?” is classified by the labeled points around it.]
Decision Boundaries

[Figure: candidate decision boundaries separating the two classes.]
Regression

• Regression is a supervised machine learning task.
  – So a target value, t, is given.
• Classification: nominal t
• Regression: continuous t

Goal of Regression: identify a function y such that y(x) = t.
Differences between Classification and Regression

• Similar goals: identify y such that y(x) = t.
• What are the differences?
  – The form of the function, y (naturally).
  – Evaluation:
    • Root Mean Squared Error
    • Absolute Value Error
    • Classification Error
    • Maximum Likelihood
  – Evaluation drives the optimization operation that learns the function, y.
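The first three evaluation measures above are short formulas, sketched below (maximum likelihood needs the probability machinery of later lectures, so it is omitted here):

```python
import math

def rmse(predictions, targets):
    # Root Mean Squared Error: sqrt of the average squared difference.
    return math.sqrt(sum((y - t) ** 2 for y, t in zip(predictions, targets)) / len(targets))

def mean_absolute_error(predictions, targets):
    # Absolute value error, averaged over the data points.
    return sum(abs(y - t) for y, t in zip(predictions, targets)) / len(targets)

def classification_error(predictions, targets):
    # Fraction of points where the predicted class is wrong.
    return sum(y != t for y, t in zip(predictions, targets)) / len(targets)
```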
Graphical Example of Regression

[Figure series: a curve fit to scattered (x, t) points; the fitted function predicts the value at a query point “?”.]
Clustering

• Clustering is an unsupervised learning task.
  – There is no target value to shoot for.
• Identify groups of “similar” data points that are “dissimilar” from others.
• Partition the data into groups (clusters) that satisfy two constraints:
  1. Points in the same cluster should be similar.
  2. Points in different clusters should be dissimilar.
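The slides don't commit to a clustering algorithm here, but the two constraints are exactly what k-means, one standard method, optimizes when "similar" means small squared distance. A minimal sketch, purely as an illustration:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # One standard clustering method: alternate between assigning each
    # point to its nearest center and moving each center to the mean
    # of its assigned points.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Dissimilarity = squared Euclidean distance to each center.
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return clusters
```

On two well-separated blobs of points, the returned clusters recover the blobs.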
Graphical Example of Clustering

[Figure series: unlabeled points partitioned into groups of nearby, similar points.]
Mechanisms of Machine Learning

• Statistical Estimation
  – Numerical Optimization
  – Theoretical Optimization
• Feature Manipulation
• Similarity Measures
Mathematical Necessities

• Probability
• Statistics
• Calculus
  – Vector Calculus
• Linear Algebra

• Is this a Math course in disguise?
Why do we need so much math?

• Probability Density Functions allow the evaluation of how likely a data point is under a model.
  – Want to identify good PDFs. (calculus)
  – Want to evaluate against a known PDF. (algebra)
Gaussian Distributions

• We use Gaussian Distributions all over the place.

[Figures: example Gaussian probability density functions.]
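As the previous slide noted, evaluating how likely a point is under a known PDF is just algebra. A sketch for the 1-D Gaussian density; the mean and variance in the example call are hypothetical values, not taken from the slides:

```python
import math

def gaussian_pdf(x, mean, variance):
    # 1-D Gaussian (normal) density N(x; mean, variance).
    return (math.exp(-((x - mean) ** 2) / (2.0 * variance))
            / math.sqrt(2.0 * math.pi * variance))

# Example: how likely is a height of 66 under a model of female heights?
likelihood = gaussian_pdf(66.0, mean=65.5, variance=2.9)
```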
Class Structure and Policies

• Course website:
  – http://eniac.cs.qc.cuny.edu/andrew/gcml-11/syllabus.html
• Google Group for discussions and announcements
  – http://groups.google.com/gcml-spring2011
  – Please sign up for the group ASAP.
  – Or put your email address on the sign-up sheet, and you will be sent an invitation.
Data Data Data

• “There’s no data like more data.”
• All machine learning techniques rely on the availability of data to learn from.
• There is an ever-increasing amount of data being generated, but it’s not always easy to process.
• UCI Machine Learning Repository
  – http://archive.ics.uci.edu/ml/
• LDC (Linguistic Data Consortium)
  – http://www.ldc.upenn.edu/
Half time.
Get Coffee. Stretch.
Decision Trees

• Classification Technique.

[Figure: an example decision tree. The root tests eye color (blue / brown / green); internal nodes test height (h) or weight (w) against break points; leaves predict male (m) or female (f).]
Decision Trees

• Very easy to evaluate.
• Nested if statements.

[Figure: the same example tree; evaluating it amounts to walking from the root to a leaf.]
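To make "nested if statements" concrete: the tree trained later in this lecture (split on weight < 165, then height < 68, then weight < 155) evaluates as nothing more than:

```python
def classify(height, weight):
    # Each internal node is one if-test; each leaf returns a target value.
    if weight < 165:
        if height < 68:
            return "Female"          # the 5F leaf
        else:
            if weight < 155:
                return "Female"      # the 1F leaf
            else:
                return "Male"        # the 1M leaf
    else:
        return "Male"                # the 5M leaf
```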
More Formal Definition of a Decision Tree

• A tree data structure.
• Each internal node corresponds to a feature.
• Leaves are associated with target values.
• Nodes with nominal features have N children, where N is the number of nominal values.
• Nodes with continuous features have two children, for values less than and greater than or equal to a break point.
Training a Decision Tree

• How do you decide what feature to use?
• For continuous features, how do you decide what break point to use?

• Goal: Optimize Classification Accuracy.
Example Data Set

Height  Weight  Eye Color  Gender
66      170     Blue       Male
73      210     Brown      Male
72      165     Green      Male
70      180     Blue       Male
74      185     Brown      Male
68      155     Green      Male
65      150     Blue       Female
64      120     Brown      Female
63      125     Green      Female
67      140     Blue       Female
68      145     Brown      Female
66      130     Green      Female
Baseline Classification Accuracy

• Select the majority class.
  – Here 6/12 Male, 6/12 Female.
  – Baseline Accuracy: 50%
• How good is each branch?
  – The improvement to classification accuracy.
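The "improvement to classification accuracy" of a candidate branch can be computed directly. In the sketch below, each inner list holds the target values that fall in one child node; the two example branches are the splits worked out on the following slides:

```python
from collections import Counter

def accuracy_after_branch(groups):
    # Each child node predicts its majority class; the branch's accuracy
    # is the fraction of all points those majority votes get right.
    total = sum(len(g) for g in groups)
    correct = sum(Counter(g).most_common(1)[0][1] for g in groups)
    return correct / total

# Eye color branch: blue, brown, green each hold 2M / 2F.
eye = [["M", "M", "F", "F"]] * 3
# "weight < 165" branch: 1M / 6F on one side, 5M on the other.
weight = [["M", "F", "F", "F", "F", "F", "F"], ["M"] * 5]

baseline = 0.5                                           # majority class: 6/12
improvement = accuracy_after_branch(weight) - baseline   # about 0.417
```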
Training Example

• Possible branches: eye color

  blue: 2M / 2F    brown: 2M / 2F    green: 2M / 2F

• 50% accuracy before branch
• 50% accuracy after branch
• 0% accuracy improvement
Example Data Set (sorted by height)

Height  Weight  Eye Color  Gender
63      125     Green      Female
64      120     Brown      Female
65      150     Blue       Female
66      170     Blue       Male
66      130     Green      Female
67      140     Blue       Female
68      145     Brown      Female
68      155     Green      Male
70      180     Blue       Male
72      165     Green      Male
73      210     Brown      Male
74      185     Brown      Male
Training Example

• Possible branches: height < 68

  height < 68: 1M / 5F    height ≥ 68: 5M / 1F

• 50% accuracy before branch
• 83.3% accuracy after branch
• 33.3% accuracy improvement
Example Data Set (sorted by weight)

Height  Weight  Eye Color  Gender
64      120     Brown      Female
63      125     Green      Female
66      130     Green      Female
67      140     Blue       Female
68      145     Brown      Female
65      150     Blue       Female
68      155     Green      Male
72      165     Green      Male
66      170     Blue       Male
70      180     Blue       Male
74      185     Brown      Male
73      210     Brown      Male
Training Example

• Possible branches: weight < 165

  weight < 165: 1M / 6F    weight ≥ 165: 5M

• 50% accuracy before branch
• 91.7% accuracy after branch
• 41.7% accuracy improvement
Training Example

• Recursively train child nodes.

  weight < 165?
  ├─ no: 5M
  └─ yes: height < 68?
      ├─ yes: 5F
      └─ no: 1M / 1F
Training Example

• Finished tree:

  weight < 165?
  ├─ no: 5M
  └─ yes: height < 68?
      ├─ yes: 5F
      └─ no: weight < 155?
          ├─ yes: 1F
          └─ no: 1M
Generalization

• What is the performance of the tree on the training data?
  – Is there any way we could get less than 100% accuracy?
• What performance can we expect on unseen data?
Evaluation
• Evaluate performance on data that was not used in training.
• Isolate a subset of data points to be used for evaluation.
• Evaluate generalization performance.
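The holdout procedure above, as a sketch; the 25% test fraction and the function names are illustrative choices, not mandated by the slides:

```python
import random

def train_test_split(data, test_fraction=0.25, seed=0):
    # Isolate a subset of data points for evaluation; train on the rest.
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

def evaluation_accuracy(classifier, test_set):
    # Generalization performance: accuracy on data unseen during training.
    correct = sum(classifier(x) == t for x, t in test_set)
    return correct / len(test_set)
```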
Evaluation of our Decision Tree

• What is the Training performance?
• What is the Evaluation performance?
  – Never classify female over 165.
  – Never classify male under 165, and under 68.
  – The middle section is trickier.
• What are some ways to make these similar?
Pruning

• There are many pruning techniques.
• A simple approach is to require a minimum membership size in each node.

  Before pruning:
  weight < 165?
  ├─ no: 5M
  └─ yes: height < 68?
      ├─ yes: 5F
      └─ no: weight < 155?
          ├─ yes: 1F
          └─ no: 1M

  After pruning:
  weight < 165?
  ├─ no: 5M
  └─ yes: height < 68?
      ├─ yes: 5F
      └─ no: 1F / 1M (the single-member leaves are merged)
Decision Tree Recap

• Training via Recursive Partitioning.
• Simple, interpretable models.
• Different node selection criteria can be used.
  – Information theory is a common choice.
• Pruning techniques can be used to make the model more robust to unseen data.
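The recap above can be sketched end to end. Two hedges: the tuple encoding of the tree and the function names are mine, not the lecture's; and a greedy trainer may pick a different break point than the worked example did (on this data, weight < 155 alone already separates the classes perfectly), since the slides' splits were chosen for illustration:

```python
from collections import Counter

# The 12-point example data set: (height, weight) -> gender.
ROWS = [(66, 170), (73, 210), (72, 165), (70, 180), (74, 185), (68, 155),
        (65, 150), (64, 120), (63, 125), (67, 140), (68, 145), (66, 130)]
LABELS = ["Male"] * 6 + ["Female"] * 6

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

def split_accuracy(rows, labels, feat, thresh):
    # Accuracy when each side of "feature < threshold" predicts its majority class.
    sides = [[], []]
    for row, label in zip(rows, labels):
        sides[row[feat] < thresh].append(label)
    if not sides[0] or not sides[1]:
        return 0.0                       # degenerate branch: nothing was split
    correct = sum(Counter(side).most_common(1)[0][1] for side in sides)
    return correct / len(labels)

def train(rows, labels, min_members=1):
    # Recursive partitioning: greedily pick the (feature, break point) with
    # the best accuracy, then train child nodes on each side. Stopping at
    # small nodes is the simple pruning criterion from the pruning slide.
    if len(set(labels)) == 1 or len(labels) <= min_members:
        return majority(labels)
    acc, feat, thresh = max((split_accuracy(rows, labels, f, t), f, t)
                            for f in range(len(rows[0]))
                            for t in {row[f] for row in rows})
    baseline = Counter(labels).most_common(1)[0][1] / len(labels)
    if acc <= baseline:                  # no accuracy improvement: make a leaf
        return majority(labels)
    below = [(r, l) for r, l in zip(rows, labels) if r[feat] < thresh]
    above = [(r, l) for r, l in zip(rows, labels) if r[feat] >= thresh]
    return (feat, thresh,
            train([r for r, _ in below], [l for _, l in below], min_members),
            train([r for r, _ in above], [l for _, l in above], min_members))

def predict(tree, row):
    # Walk from the root to a leaf; leaves are plain label strings.
    while isinstance(tree, tuple):
        feat, thresh, below, above = tree
        tree = below if row[feat] < thresh else above
    return tree

tree = train(ROWS, LABELS)
```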
Next Time: Math Primer

• Probability
  – Bayes Rule
  – Naïve Bayes Classification
• Calculus
  – Vector Calculus
• Optimization
  – Lagrange Multipliers