What makes ImageNet good for transfer learning?
Minyoung Huh, Pulkit Agrawal, Alexei A. Efros. arXiv 2016
Presented by: Ismail
Let's recap..
Week 1
So far.. (What is ImageNet?)
So far.. (What is AlexNet?)
So far.. (Performance of AlexNet!)
So far.. (CNN activations as features?)
Slide credit: Huan Zhang, UC Davis
So far.. (Can we do transfer learning?)
Slide credit: Jason Yosinski
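The recipe these slides refer to fits in a few lines. Below is a minimal sketch of using a pre-trained CNN's FC7 activations as off-the-shelf features, assuming torchvision's AlexNet weights; `fc7_features` is an illustrative helper, not code from the paper.

```python
# Minimal sketch: treat a pre-trained AlexNet's FC7 activations as
# generic image features for a new task (helper names are illustrative).
import torch
import torchvision.models as models
from torchvision import transforms

# Load ImageNet-pre-trained AlexNet and drop the final 1000-way layer,
# so the forward pass returns the 4096-d FC7 activations instead.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])
alexnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def fc7_features(pil_images):
    """Batch of PIL images -> (N, 4096) FC7 feature matrix."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    return alexnet(batch)

# These features can then feed any simple classifier on the target
# task, e.g. a linear SVM, in the usual "off-the-shelf" spirit.
```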
Week 2
So far.. (CNN features for object detection?)
Slide credit: Ross Girshick
So far.. (Pre-training?)
Slide credit: Patrick Chen, UC Davis
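For the fine-tuning flavour of pre-training, only the classifier head changes. A minimal sketch, again assuming torchvision's AlexNet; `num_target_classes` and `train_step` are hypothetical names for illustration.

```python
# Minimal fine-tuning sketch: reuse the pre-trained weights, swap the
# ImageNet head for a target-task head, train with a small learning rate.
import torch
import torchvision.models as models

num_target_classes = 20  # hypothetical size of the target task

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = torch.nn.Linear(4096, num_target_classes)  # new head

# A small learning rate nudges, rather than overwrites, the features.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on an (images, labels) mini-batch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```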
Are these performance increases restricted to ImageNet?
1. How does the amount of pre-training data affect transfer performance?
2. How does the taxonomy of the pre-training task affect transfer performance?
Bottom-up: 918, 753, 486, 79 and 9 classes
Top-down: 127, 10 and 2 classes
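The top-down splits follow the WordNet hierarchy that ImageNet labels are drawn from. Below is a hedged sketch of the coarsening idea, using NLTK's WordNet interface rather than the authors' code; `ancestor_at_depth` is an illustrative helper.

```python
# Illustrative sketch of top-down coarsening: collapse a leaf synset to
# its WordNet ancestor at a fixed depth (needs nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def ancestor_at_depth(synset, depth):
    """Return the ancestor `depth` steps from the root on one hypernym path."""
    path = synset.hypernym_paths()[0]  # one path, root -> synset
    return path[min(depth, len(path) - 1)]

# e.g. dog.n.01 collapses to a coarse ancestor such as organism.n.01
print(ancestor_at_depth(wn.synset('dog.n.01'), 5))
```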
2.1 -- How does the number of pre-training classes affect transfer performance?
Top-down: transfer performance
2.2 -- Does training with coarse classes induce features relevant for fine-grained recognition?
Induction accuracy, top-1 and top-5 NN in FC7
2.3 -- Does training with coarse classes induce features relevant for fine-grained recognition? (cont.)
Induction accuracy, top-1 and top-5 NN in FC7
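Induction accuracy here means: embed held-out images in FC7 space and check whether their nearest neighbours carry the correct fine-grained label. A minimal NumPy sketch, with illustrative function and argument names:

```python
# Sketch of the measurement: classify held-out images by their nearest
# neighbours in FC7 space and report top-1 / top-5 agreement.
import numpy as np

def nn_accuracy(train_feats, train_labels, test_feats, test_labels, k=5):
    """Feats are (N, 4096) / (M, 4096) arrays; labels are int arrays."""
    # Pairwise Euclidean distances (fine for small N, M; chunk otherwise).
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]   # k nearest train indices
    labels = train_labels[nearest]           # (M, k) neighbour labels
    top1 = np.mean(labels[:, 0] == test_labels)
    topk = np.mean((labels == test_labels[:, None]).any(axis=1))
    return top1, topk
```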
2.4 -- Does training with fine-grained classes induce features relevant for coarse recognition?
2.5 -- More Classes or More Examples Per Class?
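This comparison holds the total number of pre-training images fixed while trading classes against examples per class. A sketch of budget-matched subsampling; `subsample` and the example numbers are illustrative, not the paper's exact splits.

```python
# Sketch of budget-matched subsampling: the product num_classes *
# per_class stays fixed, only the class/example trade-off changes.
import random
from collections import defaultdict

def subsample(dataset, num_classes, per_class, seed=0):
    """dataset: iterable of (image, label) pairs -> budget-matched subset."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append((image, label))
    subset = []
    for label in rng.sample(sorted(by_class), num_classes):
        subset += rng.sample(by_class[label], per_class)
    return subset

# Same total budget, two different splits (numbers are illustrative):
# more_classes  = subsample(imagenet, num_classes=500, per_class=200)
# more_examples = subsample(imagenet, num_classes=250, per_class=400)
```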
2.6 -- How important is it to pre-train on classes that are also present in the target task?
3. Does data augmentation from non-target classes always improve performance?
Splitting ImageNet..
Does adding arbitrary classes to pre-training data always improve transfer performance?