Page 1
Perceptrons (a history of Neural Network research)
Vicente Malave
Department of Cognitive Science
University of California, San Diego
Page 4
Peter Norvig (Google, 2010)
Page 5
Both videos describe the same method. What happened?
Page 6
What can you do with a neural network?
Page 7
What can you learn with a neural network?
Page 15
Information Processing Systems
... basically, can you build a machine that can actually do the task?
We'll analyze the behavior of these abstract machines, not caring if they run on a digital, analog, or meat-based computer.
Page 17
Pattern Recognition
Page 18
Your brain is so good it's hard to realize how difficult this is.
Page 21
Character Recognition
Page 22
so hard,
your bank designed this font to avoid doing it.
Page 27
Linear Threshold Units
A neuron basically adds up the inputs, so we'll build a machine that does that.
Page 28
What can this do?
With only one input, you just have a threshold: how much input you need for the unit to fire.
Page 29
What can this do?
With two inputs, we're drawing a line between the categories.
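A worked form of the unit these slides describe (standard notation, not taken verbatim from the slides): with inputs $x_1, \dots, x_n$, weights $w_1, \dots, w_n$, and threshold $\theta$, the unit fires when the weighted sum crosses the threshold:

$$ y = \begin{cases} 1 & \text{if } \sum_i w_i x_i \ge \theta,\\ 0 & \text{otherwise.} \end{cases} $$

With two inputs, the set of points where $w_1 x_1 + w_2 x_2 = \theta$ is exactly the line being drawn between the categories.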
Page 30
Now you have (thousands, millions) of connections to set.
Page 31
That sounds hard.
Page 32
So, we don't do that. The machine will program itself.
Page 34
Error-Correction Procedure
Page 35
Error-Correction Procedure
Page 36
Error-Correction Procedure
Page 37
Error-Correction Procedure
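The figures that accompany these slides aren't reproduced here. As a minimal sketch of the error-correction procedure they illustrate (Python, with the function name, the ±1 label convention, and the bias trick as my own assumptions):

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Error-correction rule: nudge the weights whenever an example is misclassified."""
    X = np.hstack([X, np.ones((len(X), 1))])  # fold the threshold in as a bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):              # yi is +1 or -1
            if yi * np.dot(w, xi) <= 0:       # wrong side of the line (or on it)
                w += yi * xi                  # move the line toward this example
                mistakes += 1
        if mistakes == 0:                     # everything separated: done
            break
    return w
```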
Page 39
Mark I Perceptron
Page 45
Does it happen every time?
Page 47
Perceptron Convergence Theorem (1962, various)
* If there is a line that perfectly separates the points, the perceptron will find a separating line in a finite number of steps.
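A standard quantitative version of the theorem (Novikoff-style; the bound itself isn't stated on the slide): if every input satisfies $\|x\| \le R$ and some unit-length weight vector $w^*$ separates the data with margin $\gamma$ (so $y\,(w^* \cdot x) \ge \gamma$ for every example), then the error-correction procedure makes at most

$$ \left(\frac{R}{\gamma}\right)^2 $$

mistakes before it stops updating, having found a separating line.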
Page 49
Perceptrons (1969)
Page 53
Confocal Microscope
Page 54
.. Computer Science too
Allen Newell wrote a book on designing computer processors (hardware)
@MIT they decided to build their own computers!
Page 56
What can't you do with a line?
Page 58
Exclusive Or
You need a few units to do this with linear thresholds.
Page 60
What can this do?
You could have a perceptron for people over 80.
Page 61
What can this do?
Or under 10.
Page 62
Exclusive Or
But not both (with a single unit).
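A small sketch of the fix hinted at above: no single threshold on age captures "over 80 or under 10", but two threshold units feeding a third one does. (The code and the exact cutoffs are illustrative, not from the slides.)

```python
def threshold(total, theta):
    # a linear threshold unit: fire iff the summed input reaches theta
    return 1 if total >= theta else 0

def over_80(age):
    return threshold(age, 81)        # fires for ages 81 and up

def under_10(age):
    return threshold(-age, -9)       # fires for ages 9 and below

def flagged(age):
    # an OR over the two hidden units, itself just another threshold unit
    return threshold(over_80(age) + under_10(age), 1)

print([flagged(a) for a in (5, 30, 85)])   # -> [1, 0, 1]
```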
Page 63
In two dimensions,
Page 69
If no line separates the points, it will circle forever.
Page 70
Not just XOR
Minsky and Papert proved that Perceptrons* can't learn:
● Connectedness (the figure on the book cover)
● Parity (odd or even number)
* (with restrictions, like the number or width of connections)
Complexity: can you actually build this machine?
Page 71
Perceptrons (1969)
Page 73
... a long winter
And then,
Page 81
What can this do?
The perceptron has a hard boundary.
If you smooth it, you can take derivatives.
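One common choice of smoothed threshold (the slide doesn't name a particular function, so this is illustrative) is the logistic sigmoid, which has a derivative everywhere:

$$ \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad \sigma'(z) = \sigma(z)\,\bigl(1 - \sigma(z)\bigr). $$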
Page 82
Backpropagation
Error-correction, but now the hidden units can figure out how much they are helping (or not).
Chain Rule.
Now you can learn hidden units.
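A minimal sketch of backpropagation for one hidden layer (the network size, learning rate, and XOR data are illustrative choices, not from the slides); the chain-rule step is the line that passes the output error back to the hidden units:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)         # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)         # output layer
lr = 0.5

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                           # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                # error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)                 # chain rule: credit for hidden units
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]; depends on the random start
```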
Page 85
1986: The year neural networks broke.
Page 87
.. it keeps working.
Page 90
What can you do with a neural network?
Page 92
What can you learn with a neural network?
Page 96
1987: Department of Cognitive Science
Page 98
Cognitive Science looks very different when you know that
learning is possible.
Page 100
http://mplab.ucsd.edu/wordpress/projects/bev1/Banner3b.png
(Butko et al, 2006)
Page 101
What became of our old friend, the Perceptron?
Page 102
maybe we don't need to be so clever, and we can just have a fixed hidden layer
Page 106
2006: don't call it a comeback.
Page 108
What about a random hidden layer?
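A sketch of that idea in the random-features style: the hidden weights are drawn once and never trained, and only the linear readout is fit (here by least squares). The data, sizes, and the tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(float)   # an XOR-like labeling

W_hidden = rng.normal(size=(2, 100))                          # fixed, never trained
b_hidden = rng.normal(size=100)
H = np.tanh(X @ W_hidden + b_hidden)                          # random hidden layer

w_out, *_ = np.linalg.lstsq(H, y, rcond=None)                 # train only the readout
accuracy = ((H @ w_out > 0.5) == (y > 0.5)).mean()
print("training accuracy:", accuracy)                         # usually well above chance
```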
Page 112
Wrapup
You can learn things with neural networks.
Within limits (XOR, local minima).
You can get really far if you push hard on a simple representation.
Learning is possible for more things than you might think.
We have a theory.
Page 113
This is what those math classes are for.