Source: iphome.hhi.de/samek/pdf/CVPR2018_2.pdf

Tutorial on Interpreting and Explaining Deep Models in Computer Vision

Wojciech Samek (Fraunhofer HHI)

Grégoire Montavon (TU Berlin)

Klaus-Robert Müller (TU Berlin)

08:30 - 09:15  Introduction (KRM)
09:15 - 10:00  Techniques for Interpretability (GM)
10:00 - 10:30  Coffee Break (ALL)
10:30 - 11:15  Applications of Interpretability (WS)
11:15 - 12:00  Further Applications and Wrap-Up (KRM)


2 / 33

Overview of Explanation Methods

Question: Which one to choose?


3 / 33

First Attempt: Distance to Ground Truth

[Figure: a DNN maps an input to evidence for "truck"; an error is measured between the produced explanation and a ground-truth explanation, which is unknown ("? ? ?").]


4 / 33

First Attempt: Distance to Ground Truth

[Figure: the same setup for the evidence for "car"; again both the ground-truth explanation and the error are unknown ("? ? ?").]


5 / 33

From Ground Truth Explanations to Axioms

Idea: Evaluate the explanation technique axiomatically, i.e. it must pass a number of predefined "unit tests" [Sun'11, Bach'15, Montavon'17, Samek'17, Sundararajan'17, Kindermans'17, Montavon'18].


6 / 33

Properties 1-2: Conservation and Positivity

[Montavon’17, see also Sun’11, Landecker’13, Bach’15]

Conservation: Total attribution on the input features should be proportional to the amount of (explainable) evidence at the output.

Positivity: If the neural network is certain about its prediction, input features are either relevant (positive) or irrelevant (zero).

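These two properties can be phrased directly as "unit tests" on an explanation. A minimal sketch (the relevance vector and the evidence value below are hypothetical, for illustration only):

```python
import numpy as np

def check_conservation(R, output_evidence, rtol=1e-3):
    # Total attribution on the input features should match the
    # (explainable) evidence at the output.
    return bool(np.isclose(R.sum(), output_evidence, rtol=rtol))

def check_positivity(R, atol=1e-9):
    # For a confident prediction, each feature is either relevant
    # (positive score) or irrelevant (zero score).
    return bool((R >= -atol).all())

# Hypothetical explanation of a prediction with output evidence 2.5:
R = np.array([1.0, 0.0, 1.5, 0.0])
print(check_conservation(R, 2.5))  # True
print(check_positivity(R))         # True
```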


7 / 33

Property 3: Continuity [Montavon’18]

If two inputs are almost the same, and the prediction is also almost the same, then the explanation should also be almost the same.

Example: [figure comparing explanations from Method 1 and Method 2]


8 / 33

Testing Continuity

[Figure: input, explanation scores R_i, and f(x), comparing sensitivity analysis with LRP-α1β0.]
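To make the continuity test concrete, here is a toy sketch (a hypothetical two-weight "network"): near a ReLU kink the prediction barely changes, while the squared-gradient explanation of sensitivity analysis jumps abruptly.

```python
import numpy as np

# Toy "network" f(x) = max(0, w.x) with a kink where w.x = 0.
w = np.array([1.0, -1.0])

def f(x):
    return max(0.0, float(w @ x))

def sensitivity(x):
    # Sensitivity analysis: relevance = squared partial derivatives.
    gate = 1.0 if float(w @ x) > 0 else 0.0   # ReLU gate
    return (gate * w) ** 2

x1 = np.array([1.0, 1.0 - 1e-6])   # just on the active side of the kink
x2 = np.array([1.0, 1.0 + 1e-6])   # just on the inactive side

print(abs(f(x1) - f(x2)))                # predictions almost identical
print(sensitivity(x1), sensitivity(x2))  # explanations differ maximally
```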


9 / 33

Property 4: Selectivity [Bach’15, Samek’17]

Model must agree with the explanation: If input features are attributed relevance, removing them should reduce evidence at the output.

Example: [figure comparing explanations from Method 1 and Method 2]


10 / 33

Testing Selectivity with Pixel-Flipping

[Bach’15, Samek’17]

[Figure: input, explanations, and f(x) curves under pixel-flipping, for sensitivity analysis and LRP-α1β0.]
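A minimal sketch of the pixel-flipping procedure (the model, relevance map, and flip value below are hypothetical): remove features in decreasing order of relevance and record how fast the model's output drops.

```python
import numpy as np

def pixel_flipping_curve(x, relevance, predict, flip_value=0.0):
    """Flip features most-relevant-first and record the model output.
    A steep drop indicates a selective explanation."""
    x = x.copy().ravel()
    order = np.argsort(-relevance.ravel())   # most relevant first
    scores = [predict(x)]
    for i in order:
        x[i] = flip_value                    # "flip" (remove) the feature
        scores.append(predict(x))
    return np.array(scores)

# Hypothetical model: evidence is the sum of the two largest inputs.
predict = lambda x: float(np.sort(x)[-2:].sum())
x = np.array([0.1, 0.9, 0.2, 0.8])
relevance = np.array([0.0, 0.9, 0.0, 0.8])   # assumed explanation
curve = pixel_flipping_curve(x, relevance, predict)
print(curve)   # output drops fastest when relevant features go first
```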


11 / 33

Properties:

1. Conservation
2. Positivity
3. Continuity
4. Selectivity

Explanation techniques: Uniform, (Gradient)², (Guided BP)², Gradient × Input, Guided BP × Input, LRP-α1β0, ...

[Checkmark table: each technique satisfies only a subset of the four properties; LRP-α1β0 is shown satisfying all of them.]


12 / 33

Question: Can we deduce some properties without experiments, directly from the equations?


13 / 33

Reminder

Backprop internals (for propagating gradient)

LRP-α1β0 internals (for propagating relevance)


14 / 33

Example: Deducing Conservation

Summing the propagated relevances over the input features gives the property.

LRP-α1β0 propagation rule vs. gradient × input: [equations in figure]

When the bias is negative, gradient × input will tend to inflate scores.
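The equations referenced on this slide are not legible in this transcript. A reconstruction under the z⁺ form of the rule (Montavon'17 notation: a_j are input activations, w⁺ the positive part of the weights) would read:

```latex
% LRP-$\alpha_1\beta_0$ (z$^+$) rule, and conservation by summation:
R_j = \sum_k \frac{a_j w_{jk}^{+}}{\sum_{j'} a_{j'} w_{j'k}^{+}}\, R_k
\qquad\Longrightarrow\qquad
\sum_j R_j = \sum_k \underbrace{\frac{\sum_j a_j w_{jk}^{+}}
{\sum_{j'} a_{j'} w_{j'k}^{+}}}_{=\,1}\, R_k = \sum_k R_k

% Gradient x input on an active ReLU neuron
% $a_k = \max(0, \sum_j a_j w_{jk} + b_k)$:
\sum_j a_j \frac{\partial a_k}{\partial a_j} = \sum_j a_j w_{jk} = a_k - b_k
% With $b_k < 0$, the redistributed score $a_k - b_k$ exceeds $a_k$:
% scores inflate.
```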


15 / 33

Example: Deducing Continuity

LRP-α1β0 propagation rule vs. gradient × input: [equations in figure]

(When the bias is negative, continuity follows because the denominator upper-bounds the numerator.)


16 / 33

Intermediate Conclusion

Ground-truth explanations are elusive. In practice, we are reduced to visual assessment or to testing the explanation against a number of axioms.

Some properties can be deduced from the structure of the explanation method. Others can be tested empirically.

LRP-α1β0 satisfies key properties of an explanation. Sensitivity analysis and gradient × input have crucial limitations.


17 / 33

From LRP to Deep Taylor Decomposition

The LRP-α1β0 rule can be seen as a deep Taylor decomposition (DTD), which then yields domain- and layer-specific rules.

[Montavon’17]


18 / 33

DTD: The Structure of Relevance

Proposition: Relevance at each layer is a product of the activation and an approximately constant term.


19 / 33

DTD: The Relevance as a Neuron


20 / 33

DTD: Taylor Expansion of the Relevance


21 / 33

DTD: Decomposing the Relevance

Taylor expansion at root point:

Relevance can now be backward-propagated


22 / 33

DTD: Choosing the Root Point

Choice of root point:

1. nearest root (Deep Taylor generic)
2. rescaled activation
3. rescaled excitations (LRP-α1β0)

[Figure: checkmarks mark the admissible choices.]


23 / 33

DTD: Verifying the Product Structure

1. assume it holds in the higher layer
2. apply the LRP-α1β0 rule
3. observe it also holds in the lower layer

But was it true?


24 / 33

From LRP to Deep Taylor Decomposition

The LRP-α1β0 rule can be seen as a deep Taylor decomposition (DTD), which then yields domain- and layer-specific rules.

[Montavon’17]


25 / 33

DTD: Application to Input Layers

1. Choose a root point that is nearby and satisfies domain constraints

2. Inject it in the generic DTD rule to get the specific rule

Pixels / Embeddings: [figure shows the resulting domain-specific rules]

(image source: TensorFlow tutorial)


26 / 33

DTD: Application to Pooling Layers

A sum-pooling layer over positive activations is equivalent to a ReLU layer with weights 1.

A p-norm pooling layer can be approximated as a sum-pooling layer multiplied by a ratio of norms that we treat as constant [Montavon’17].

→ Treat pooling layers as ReLU detection layers
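Both statements above can be checked numerically. A small sketch (the activation values are arbitrary):

```python
import numpy as np

a = np.array([0.3, 1.2, 0.0, 0.7])     # positive activations (post-ReLU)

# Sum-pooling over the region...
pooled = a.sum()

# ...equals a ReLU "detection" neuron whose weights are all ones:
w = np.ones_like(a)
relu_equivalent = max(0.0, float(w @ a))
print(pooled, relu_equivalent)          # identical

# p-norm pooling = sum-pooling times a ratio of norms, treated as constant:
p = 2
ratio = np.linalg.norm(a, p) / pooled
print(np.linalg.norm(a, p), ratio * pooled)   # identical by construction
```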


27 / 33

Basic Recommendation for CNNs

[Figure: forward pass through a CNN, then a backward relevance pass applying LRP-α1β0 at each layer* and the DTD rule for pixels at the input layer.]

* For top layers, other rules may improve selectivity


28 / 33

DTD for Kernel Models [Kauffmann’18]

1. Build a neural network equivalent of the One-Class SVM:

(Gaussian/Laplace kernel, Student kernel)

2. Compute its deep Taylor decomposition

[Figure: heatmap explanations of the outlier score]


29 / 33

Implementing the LRP-α1β0 rule

Propagation rule to implement: [equation in figure]

From a sequence of element-wise computations to a sequence of vector computations.


30 / 33

Implementing the LRP-α1β0 rule

Propagation rule to implement: [equation in figure]

Code that reuses forward and gradient computations:
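The slide's code is not reproduced in this transcript. Below is a minimal NumPy sketch of the four vector computations, assuming a dense ReLU layer and the z⁺ form of the rule (the function name and shapes are illustrative):

```python
import numpy as np

def relprop_dense(a, W, R, eps=1e-9):
    """LRP-alpha1beta0 (z+) relevance propagation through a dense ReLU
    layer, phrased as four vector operations that reuse a forward pass
    (through the positive part of the weights) and its gradient."""
    Wp = np.maximum(0.0, W)   # positive part of the weights
    z = a @ Wp + eps          # step 1: forward pass
    s = R / z                 # step 2: element-wise division
    c = s @ Wp.T              # step 3: backward (gradient) pass
    return a * c              # step 4: element-wise product

# Hypothetical layer, for illustration only:
rng = np.random.default_rng(0)
a = rng.uniform(0.0, 1.0, size=4)       # input activations (non-negative)
W = rng.normal(size=(4, 3))             # layer weights
R_out = np.maximum(0.0, a @ W)          # relevance at the layer output
R_in = relprop_dense(a, W, R_out)
print(R_in.sum(), R_out.sum())          # conservation: the sums match
```

In frameworks with automatic differentiation, step 3 is exactly a gradient computation through the modified forward pass, which is why the rule can reuse existing backprop machinery.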


31 / 33

How LRP Scales

LRP does not require much computing power: a GoogLeNet explanation for a single image can be computed on the CPU. Linear time scaling makes it possible to use LRP for real-time processing, or as part of training.


32 / 33

Conclusion

Ground-truth explanations are elusive. In practice, we are reduced to visual assessment or to testing the explanation against a number of axioms.

Some properties can be deduced from the structure of the explanation method. Others can be tested empirically.

LRP-α1β0 satisfies key properties of an explanation. Sensitivity analysis and gradient × input have crucial limitations.

The LRP-α1β0 propagation rule can be seen as performing a deep Taylor decomposition for deep ReLU nets.

Deep Taylor decomposition makes it possible to consistently extend the framework to new models and new types of data.


33 / 33

References

S Bach, A Binder, G Montavon, F Klauschen, KR Müller, W Samek: On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation. PLOS ONE 10(7): e0130140 (2015)

J Kauffmann, KR Müller, G Montavon: Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models. CoRR abs/1805.06230 (2018)

PJ Kindermans, S Hooker, J Adebayo, M Alber, K Schütt, S Dähne, D Erhan, B Kim: The (Un)reliability of saliency methods. CoRR abs/1711.00867 (2017)

W Landecker, M Thomure, L Bettencourt, M Mitchell, G Kenyon, S Brumby: Interpreting individual classifications of hierarchical networks. CIDM 2013: 32-38

G Montavon, S Lapuschkin, A Binder, W Samek, KR Müller: Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65: 211-222 (2017)

G Montavon, W Samek, KR Müller: Methods for interpreting and understanding deep neural networks. Digital Signal Processing 73: 1-15 (2018)

W Samek, A Binder, G Montavon, S Lapuschkin, KR Müller: Evaluating the Visualization of What a Deep Neural Network Has Learned. IEEE Trans. Neural Netw. Learning Syst. 28(11): 2660-2673 (2017)

Y Sun, M Sundararajan: Axiomatic attribution for multilinear functions. EC 2011: 177-178

M Sundararajan, A Taly, Q Yan: Axiomatic Attribution for Deep Networks. ICML 2017: 3319-3328

