Under review as a conference paper at ICLR 2020
TESTING ROBUSTNESS AGAINST UNFORESEEN ADVERSARIES
Anonymous authors
Paper under double-blind review
ABSTRACT
Most existing defenses against adversarial attacks only consider robustness to Lp-bounded distortions. In reality, the specific attack is rarely known in advance and adversaries are free to modify images in ways which lie outside any fixed distortion model; for example, adversarial rotations lie outside the set of Lp-bounded distortions. In this work, we advocate measuring robustness against a much broader range of unforeseen attacks, attacks whose precise form is unknown during defense design. We propose several new attacks and a methodology for evaluating a defense against a diverse range of unforeseen distortions. First, we construct novel adversarial JPEG, Fog, Gabor, and Snow distortions to simulate more diverse adversaries. We then introduce UAR, a summary metric that measures the robustness of a defense against a given distortion. Using UAR to assess robustness against existing and novel attacks, we perform an extensive study of adversarial robustness. We find that evaluation against existing Lp attacks yields redundant information which does not generalize to other attacks; we instead recommend evaluating against our significantly more diverse set of attacks. We further find that adversarial training against either one or multiple distortions fails to confer robustness to attacks with other distortion types. These results underscore the need to evaluate and study robustness against unforeseen distortions.
1 INTRODUCTION
Neural networks perform well on many benchmark tasks (He et al., 2016) yet can be fooled by adversarial examples (Goodfellow et al., 2014), slightly distorted inputs designed to subvert a given model. The adversary is frequently assumed to craft adversarial distortions under an L∞ constraint (Goodfellow et al., 2014; Madry et al., 2017; Xie et al., 2018), while other distortions such as adversarial geometric transformations, patches, and even 3D-printed objects have also been considered (Engstrom et al., 2017; Brown et al., 2017; Athalye et al., 2017). However, most work on adversarial robustness assumes the adversary is fixed and known. Defenses against adversarial attacks often leverage such knowledge when designing the defense, most commonly through adversarial training, which minimizes the adversarial loss against a fixed distortion type (Madry et al., 2017).
In practice, adversaries can modify their attacks and construct distortions whose precise form is not known to the defense designers. In this work, we propose a methodology for assessing robustness to such unforeseen attacks and use it to study how adversarial robustness transfers to them. To ensure sufficient diversity, we introduce four novel adversarial attacks (§2) with qualitatively different distortion types: adversarial JPEG, Fog, Gabor, and Snow (sample images in Figure 1).
Our methodology (§3) involves evaluating a defense against a diverse set of held-out distortions not involved in the design of the defense; we suggest L∞, L1, Elastic, Fog, and Snow as an initial set to consider. For a fixed, held-out distortion, we then evaluate the defense against the distortion for a calibrated range of distortion sizes whose strength is roughly comparable across distortions. For each fixed distortion, our evaluation yields the summary metric UAR, which measures robustness of a defense against that distortion relative to a model adversarially trained on that distortion. We provide code and calibrations to easily evaluate a defense against our suite of attacks and compute UAR for it at https://github.com/iclr-2020-submission/advex-uar.
[Figure 1 image grid. Rows: Existing Attacks (L∞, L2, L1, Elastic) and New Attacks (JPEG, Fog, Gabor, Snow).]

Figure 1: Attacked images (label "espresso maker") against adversarially trained models with large ε. Each of the adversarial images above is optimized to maximize the classification loss.
Applying our method to 87 adversarially trained models and 8 different distortion types (§4), we find weaknesses in existing defenses and evaluation practices. Our results show that existing defenses based on adversarial training do not generalize to unforeseen adversaries, even when restricted to the 8 distortions in Figure 1. This adds to the mounting evidence that achieving robustness against a single distortion type is insufficient to impart robustness to unforeseen attacks (Jacobsen et al., 2019; Jordan et al., 2019; Tramer & Boneh, 2019).
Turning to evaluation, our results demonstrate that accuracy against different Lp distortions is highly correlated relative to the other distortions we consider, suggesting that the common practice of evaluating only against Lp distortions can give a misleading account of a model's adversarial robustness. Our analysis using UAR demonstrates that our full suite of attacks adds significant diversity and reveals L∞, L1, Elastic, Fog, and Snow as a set with less correlated accuracy and UAR scores against held-out defenses. We suggest these attacks for use when evaluating against unforeseen adversaries.
A natural next approach is to defend against multiple distortion types simultaneously in the hope that seeing a larger space of distortions provides greater transfer to unforeseen distortions. Unfortunately, we find that defending against even two different distortion types via joint adversarial training is difficult (§5). Specifically, joint adversarial training leads to overfitting at moderate distortion sizes.
In summary, we make the following contributions:
1. We propose a method, UAR, to assess robustness of defenses against unforeseen adversaries.
2. We introduce 4 novel attacks and apply UAR to assess how robustness transfers to these attacks and 4 existing ones. Our results demonstrate that existing defense and evaluation methods do not generalize well to unforeseen attacks.
3. We suggest the use of our more diverse attacks for evaluating novel defenses, highlighting L∞, L1, Elastic, Fog, and Snow as a diverse starting point.
2 A SET OF DIVERSE AND NOVEL ADVERSARIAL ATTACKS
We consider distortions (attacks) applied to an image x ∈ R^{3×224×224}, represented as a vector of RGB values. Let f : R^{3×224×224} → R^{100} be a model mapping images to logits¹, and let ℓ(f(x), y) denote the cross-entropy loss. For an input x with true label y and a target class y′ ≠ y, our adversarial attacks attempt to find x′ such that
1. the attacked image x′ is obtained by applying a constrained distortion to x, and
2. the loss ℓ(f(x′), y′) is minimized (targeted attack).
¹We describe the attacks for ImageNet-100, but they can also be applied to CIFAR-10.
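As a concrete sketch of the targeted objective above, the cross-entropy against the adversarial target y′ and its gradient with respect to the logits can be written in a few lines of NumPy. The names `softmax`, `targeted_loss`, and `targeted_loss_grad` are our own illustrative choices, not code from the released repository:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def targeted_loss(logits, target):
    """Cross-entropy against the adversarial target y'; the attack minimizes this."""
    return -np.log(softmax(logits)[target])

def targeted_loss_grad(logits, target):
    """Gradient of the targeted loss w.r.t. the logits: softmax(z) - e_{y'}."""
    p = softmax(logits)
    p[target] -= 1.0
    return p
```

An attack then descends this loss with respect to the (constrained) input distortion rather than the logits, using the chain rule through the model f.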
[Figure 2 image grid. Existing Attacks: L∞ (norms 4.1m, 11.1k, 32), L2 (1.3m, 4.8k, 99), L1 (224k, 2.6k, 218), Elastic (3.1m, 15.2k, 253). New Attacks: JPEG (5.4m, 18.7k, 255), Fog (4.1m, 13.2k, 89), Gabor (3.7m, 13.3k, 50), Snow (11.3m, 32.0k, 255).]

Figure 2: Scaled pixel-level differences between original and attacked images for each attack (label "espresso maker"). The L1, L2, and L∞ norms of the difference are shown after the attack name. Our novel attacks display behavior which is qualitatively different from that of the Lp attacks. Attacked images are shown in Figure 1, and unscaled differences are shown in Figure 9, Appendix B.1.
Adversarial training (Goodfellow et al., 2014) is a strong defense baseline against a fixed attack (Madry et al., 2017; Xie et al., 2018) which updates using an attacked image x′ instead of the clean image x at each training iteration.
We consider 8 attacks: L∞ (Goodfellow et al., 2014), L2 (Szegedy et al., 2013; Carlini & Wagner, 2017), L1 (Chen et al., 2018), Elastic (Xiao et al., 2018), JPEG, Fog, Gabor, and Snow. We show sample attacked images in Figure 1 and the corresponding distortions in Figure 2. The JPEG, Fog, Gabor, and Snow attacks are new to this paper, and the L1 attack uses the Frank-Wolfe algorithm to improve on previous L1 attacks. We now describe the attacks, whose distortion sizes are controlled by a parameter ε. We clamp output pixel values to [0, 255].
Existing attacks. The Lp attacks with p ∈ {1, 2, ∞} modify an image x to an attacked image x′ = x + δ. We optimize δ under the constraint ‖δ‖p ≤ ε, where ‖·‖p is the Lp-norm on R^{3×224×224}.
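One standard way to enforce the ‖δ‖p ≤ ε constraint for p ∈ {2, ∞} is to project back onto the ball after each gradient step. A minimal NumPy sketch on a toy gradient oracle follows; `project`, `pgd`, and `grad_fn` are our own names, and real attacks operate on full image tensors through the model's gradients:

```python
import numpy as np

def project(delta, eps, p):
    """Project a distortion onto the Lp ball of radius eps (p in {2, inf})."""
    if p == np.inf:
        return np.clip(delta, -eps, eps)
    n = np.linalg.norm(delta)
    return delta if n <= eps else delta * (eps / n)

def pgd(grad_fn, x, eps, p, steps, lr):
    """Randomly-initialized PGD: descend the attack loss, then project onto the ball."""
    delta = project(np.random.uniform(-eps, eps, size=x.shape), eps, p)
    for _ in range(steps):
        delta = project(delta - lr * grad_fn(x + delta), eps, p)
    return x + delta
```

For example, minimizing a quadratic loss pulls the iterate to the boundary of the ball when the unconstrained optimum lies outside it.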
The Elastic attack warps the image by allowing distortions x′ = Flow(x, V ), where V : {1, . . . , 224}² → R² is a vector field on pixel space, and Flow sets the value of pixel (i, j) to the bilinearly interpolated original value at (i, j) + V (i, j). We construct V by smoothing a vector field W by a Gaussian kernel (size 25×25, std. dev. 3 for a 224×224 image) and optimize W under ‖W (i, j)‖∞ ≤ ε for all i, j. This differs in details from Xiao et al. (2018) but is similar in spirit.
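The smoothing-and-flow step can be sketched with SciPy for a single grayscale channel; this is a simplified stand-in for the attack's 3×224×224 setting and 25×25 kernel, and `elastic_flow` is our own name:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_flow(img, W_field, sigma=3.0):
    """Warp a grayscale image by a Gaussian-smoothed vector field.

    img: (H, W) array; W_field: (2, H, W) raw field whose entries the attack
    would constrain to |W(i, j)| <= eps before smoothing.
    """
    # V = Gaussian-smoothed field, one component per spatial axis
    V = np.stack([gaussian_filter(W_field[0], sigma),
                  gaussian_filter(W_field[1], sigma)])
    i, j = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                       indexing="ij")
    coords = np.stack([i + V[0], j + V[1]])        # sample at (i, j) + V(i, j)
    return map_coordinates(img, coords, order=1, mode="nearest")  # bilinear
```

With a zero field the warp is the identity; the attack optimizes the raw field so that the warped image maximizes the classification loss.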
Novel attacks. As discussed in Shin & Song (2017) for defense, JPEG compression applies a lossy linear transformation JPEG based on the discrete cosine transform to image space, followed by quantization. The JPEG attack imposes the L∞-constraint ‖JPEG(x) − JPEG(x′)‖∞ ≤ ε on the attacked image x′. We optimize z = JPEG(x′) and apply a right inverse of JPEG to obtain x′.
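The core idea of perturbing in a transform domain under an L∞ bound and mapping back to pixel space can be sketched with a global DCT. This is only an illustration: the real JPEG transform is blockwise with quantization, and `dct_space_perturb` is our own name:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_space_perturb(x, delta_z, eps):
    """Perturb an image in a DCT transform domain under an L-inf bound.

    With norm='ortho', idctn is the exact inverse of dctn, playing the role of
    the right inverse of the JPEG transform in the attack description.
    """
    z = dctn(x, norm="ortho")
    z_adv = z + np.clip(delta_z, -eps, eps)   # enforce ||T(x) - T(x')||_inf <= eps
    return idctn(z_adv, norm="ortho")
```

In the actual attack, `delta_z` would itself be optimized to maximize the classification loss of the decoded image.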
[Figure 3 panels: Original, Initialization, Optimized.]

Figure 3: Snow before and after optimization.
Our novel Fog, Gabor, and Snow attacks are adversarial versions of non-adversarial distortions proposed in the literature. Fog and Snow introduce adversarially chosen partial occlusions of the image resembling the effect of mist and snowflakes, respectively; stochastic versions of Fog and Snow appeared in Hendrycks & Dietterich (2019). Gabor superimposes adversarially chosen additive Gabor noise (Lagae et al., 2009) onto the image; a stochastic version appeared in Co et al. (2019). These attacks work by optimizing a set of parameters controlling the distortion
[Figure 4 plots: accuracy vs. distortion size; left panel "L2 attack vs. L2-adversarial training", right panel "Elastic attack vs. L2-adversarial training".]

Figure 4: Accuracies of L2 and Elastic attacks at different distortion sizes against a ResNet-50 model adversarially trained against L2 at ε = 9600 on ImageNet-100. At small distortion sizes, the model appears to defend well against Elastic, but large distortion sizes reveal a lack of transfer.
over an L∞-bounded set. Specifically, values for the diamond-square algorithm, sparse noise, and snowflake brightness (Figure 3) are chosen adversarially for Fog, Gabor, and Snow, respectively.
Optimization. To handle L∞ and L2 constraints, we use randomly-initialized projected gradient descent (PGD), which optimizes the distortion δ by gradient descent and projection to the L∞ and L2 balls (Madry et al., 2017). For L1 constraints, this projection is more difficult, and previous L1 attacks resort to heuristics (Chen et al., 2018; Tramer & Boneh, 2019). We use the randomly-initialized Frank-Wolfe algorithm (Frank & Wolfe, 1956), which replaces projection by a simpler optimization of a linear function at each step (pseudocode in Appendix B.2).
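The Frank-Wolfe step can be sketched in NumPy: the linear subproblem over the L1 ball has its optimum at a vertex, i.e. all ε mass on the single largest-magnitude gradient coordinate, so no explicit L1 projection is needed. The function name and step schedule below are a standard textbook choice, not necessarily the paper's exact variant:

```python
import numpy as np

def frank_wolfe_l1(grad_fn, x, eps, steps):
    """Frank-Wolfe over the L1 ball of radius eps, minimizing the attack loss."""
    delta = np.zeros_like(x)
    for t in range(steps):
        g = grad_fn(x + delta)
        # Linear subproblem: argmin_{||s||_1 <= eps} <g, s> is a vertex of the ball
        s = np.zeros_like(x)
        i = np.argmax(np.abs(g))
        s.flat[i] = -eps * np.sign(g.flat[i])
        gamma = 2.0 / (t + 2)                # standard Frank-Wolfe step size
        delta = (1 - gamma) * delta + gamma * s
    return x + delta
```

Because each iterate is a convex combination of L1-ball points, the constraint ‖δ‖1 ≤ ε holds automatically at every step.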
3 MOTIVATION AND DESCRIPTION OF OUR METHODOLOGY
We now propose a method to assess robustness against unforeseen distortions, which relies on evaluating a defense against a diverse set of attacks that were not used when designing the defense. Our method must address the following issues:
• The range of distortion sizes must be wide enough to avoid the misleading behavior in which robustness appears to transfer at low distortion sizes but not at high distortion sizes (Figure 4);
• The set of attacks considered must be sufficiently diverse.
We first provide a method to calibrate distortion sizes and then use it to define a summary metric that assesses the robustness of a defense against a specific unforeseen attack. Using this metric, we are able to assess diversity and recommend a set of attacks to evaluate against.
Calibrate distortion size using adversarial training. As shown in Figure 4, the correlation between adversarial robustness against different distortion types may look different for different ranges of distortion sizes. It is therefore critical to evaluate on a wide enough range of distortion size ε. We choose the minimum and maximum distortion sizes ε using the following principles; sample images at εmin and εmax are shown in Figure 5b.
1. The minimum distortion size εmin is the largest ε for which the adversarial validation accuracy against an adversarially trained model is comparable to that of a model trained and evaluated on unattacked data.
2. The maximum distortion size εmax is the smallest ε which either (a) yields images which confuse humans when applied against adversarially trained models or (b) reduces the accuracy of adversarially trained models to below 25%.
In practice, we select εmin and εmax according to these criteria from a sequence of ε values which increases geometrically with ratio 2. We choose to evaluate against adversarially trained models because attacking strong defenses is necessary to produce strong visual distortions (Figure 5a). We introduce the constraint that humans recognize attacked images at εmax because we find cases for L1, Fog, and Snow where adversarially trained models maintain non-zero accuracy at distortion sizes producing images incomprehensible to humans. An example for Snow is shown in Figure 5b.
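The accuracy-based half of this selection can be sketched as a small filter over the geometric ε grid. The human-recognizability check for εmax cannot be coded, so the rule below captures only criterion (b); `pick_eps_range` and its `tol`/`floor` parameters are our own hypothetical names:

```python
def calibration_grid(eps_lo, n, ratio=2.0):
    """Geometric grid of candidate distortion sizes with the paper's ratio of 2."""
    return [eps_lo * ratio ** k for k in range(n)]

def pick_eps_range(grid, adv_acc, clean_acc, floor=25.0, tol=2.0):
    """Hypothetical selection rule: eps_min is the largest size whose adversarially
    trained accuracy stays within `tol` of clean accuracy; eps_max is the smallest
    size driving that accuracy below `floor` percent (criterion (b) only)."""
    eps_min = max((e for e, a in zip(grid, adv_acc) if a >= clean_acc - tol),
                  default=grid[0])
    eps_max = min((e for e, a in zip(grid, adv_acc) if a < floor),
                  default=grid[-1])
    return eps_min, eps_max
```

Running this on the L∞ ATA profile from Table 1 recovers the paper's reported range of ε = 1 to ε = 32 for L∞.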
UAR: an adversarial robustness metric. We measure a model's robustness against a specific distortion type by comparing it to adversarially trained models, which represent an approximate
[Figure 5a panels: clean vs. clean, vs. ε = 2, vs. ε = 8, vs. ε = 16, vs. ε = 32.]

(a) The L∞ attack at ε = 32 applied to an undefended model and to models adversarially trained against L∞ at different distortion sizes. Attacking models trained against larger ε produces greater visual distortion.

[Figure 5b panels: JPEG, Gabor, and Snow at minimum and maximum distortion.]

(b) The JPEG, Gabor, and Snow attacks applied to adversarially trained models at εmin and εmax. Distortions are almost imperceptible at εmin, but make the image barely recognizable by humans at εmax.

Figure 5: Varying distortion size against adversarially trained models reveals full attack strength.
ceiling on performance with prior knowledge of the distortion type. For distortion type A and size ε, let the Adversarial Training Accuracy ATA(A, ε) be the best adversarial accuracy on the validation set that can be achieved by adversarially training a specific architecture (ResNet-50 for ImageNet-100, ResNet-56 for CIFAR-10) against A.² Even when evaluating a defense using an architecture other than ResNet-50 or ResNet-56, we recommend using the ATA values computed with these architectures to allow for uniform comparisons.
Given a set of distortion sizes {ε1, . . . , εn}, we propose the summary metric UAR (Unforeseen Attack Robustness), normalizing the accuracy of a model M by the adversarial training accuracy:
UAR(A, M) := 100 · ( (1/n) ∑_{k=1}^{n} Acc(A, ε_k, M) ) / ( (1/n) ∑_{k=1}^{n} ATA(A, ε_k) ).   (1)
Here Acc(A, ε, M) is the accuracy of M against distortions of type A and magnitude ε. We expect most UAR scores to be lower than 100 against held-out distortion types, as a UAR score greater than 100 means that a defense is outperforming an adversarially trained model on that distortion. The normalizing factor in (1) is required to keep UAR scores roughly comparable between distortions, as different distortions can have different strengths as measured by ATA at the chosen distortion sizes.
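Equation (1) reduces to a ratio of sums, since the 1/n factors cancel. A minimal sketch (the function name is ours), usable with the reference ATA values from Table 1:

```python
def uar(acc, ata):
    """UAR per Eq. (1): 100 * (mean model accuracy) / (mean adversarial-training
    accuracy) over the calibrated distortion sizes; the 1/n factors cancel."""
    assert len(acc) == len(ata)
    return 100.0 * sum(acc) / sum(ata)
```

For example, a defense matching the adversarially trained model at every calibrated size scores exactly 100, and one achieving half its average accuracy scores 50.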
Having too many or too few εk values in a certain range may cause an attack to appear artificially strong or weak because the functional relation between distortion size and attack strength (measured by ATA) varies between attacks. To make UAR roughly comparable between distortions, we evaluate at ε increasing geometrically from εmin to εmax by factors of 2 and take the subset of ε whose ATA values have minimum ℓ1-distance to the ATA values of the L∞ attack at geometrically increasing ε.
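The subset-selection step can be sketched as a brute-force search over size subsets; `pick_calibrated_sizes` is our own name, and the paper may use a different search procedure:

```python
from itertools import combinations

def pick_calibrated_sizes(candidates, ref_ata, n=6):
    """Choose the n candidate sizes whose ATA profile is closest in l1-distance
    to the reference L-inf ATA profile, so attack-strength curves line up.

    candidates: list of (eps, ata) pairs in increasing eps order;
    ref_ata: the n reference ATA values of the L-inf attack.
    """
    best = min(
        combinations(candidates, n),
        key=lambda sub: sum(abs(a - r) for (_, a), r in zip(sub, ref_ata)),
    )
    return [e for e, _ in best]
```

This is exponential in the number of candidates, which is fine here since the geometric grid from εmin to εmax contains only a handful of sizes.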
For our 8 distortion types, we provide reference values of ATA(A, ε) on this calibrated range of 6 distortion sizes on ImageNet-100 (Table 1, §4) and CIFAR-10 (Table 3, Appendix C.3.2). This allows UAR computation for a new defense using 6 adversarial evaluations and no adversarial training, reducing computational cost from 192+ to 6 NVIDIA V100 GPU-hours on ImageNet-100.
Evaluate against diverse distortion types. Since robustness against different distortion types may have low or no correlation (Figure 6b), measuring performance on different distortions is important to avoid overfitting to a specific type, especially when a defense is constructed with it in mind (as with adversarial training). Our results in §4 demonstrate that choosing appropriate distortion types to evaluate against requires some care, as distortions such as L1, L2, and L∞ that may seem different can actually have highly correlated scores against defenses (see Figure 6). We instead recommend evaluation against our more diverse attacks, taking the L∞, L1, Elastic, Fog, and Snow attacks as a starting point.
²As explained in Figure 13 (Appendix C.2), this usually requires training at distortion size ε′ > ε because the typical distortion seen during adversarial training is sub-maximal.
Table 1: Calibrated distortion sizes and ATA values for different distortion types on ImageNet-100. ATA values for CIFAR-10 are shown in Table 3 (Appendix C.3.2).
Attack   | ε1     ε2     ε3     ε4     ε5     ε6     | ATA1 ATA2 ATA3 ATA4 ATA5 ATA6
L∞       | 1      2      4      8      16     32     | 84.6 82.1 76.2 66.9 40.1 12.9
L2       | 150    300    600    1200   2400   4800   | 85.0 83.5 79.6 72.6 59.1 19.9
L1       | 9562.5 19125  76500  153000 306000 612000 | 84.4 82.7 76.3 68.9 56.4 36.1
Elastic  | 0.250  0.500  2      4      8      16     | 85.9 83.2 78.1 75.6 57.0 22.5
JPEG     | 0.062  0.125  0.250  0.500  1      2      | 85.0 83.2 79.3 72.8 34.8 1.1
Fog      | 128    256    512    2048   4096   8192   | 85.8 83.8 79.0 68.4 67.9 64.7
Snow     | 0.062  0.125  0.250  2      4      8      | 84.0 81.1 77.7 65.6 59.5 41.2
Gabor    | 6.250  12.500 25     400    800    1600   | 84.0 79.8 79.8 66.2 44.7 14.6
4 UAR REVEALS THE NEED TO EVALUATE AGAINST MORE DIVERSE ATTACKS
We apply our methodology to the 8 attacks in §2 using models adversarially trained against these attacks. Our results reveal that evaluating against the commonly used Lp-attacks gives highly correlated information which does not generalize to other unforeseen attacks. Instead, they suggest that evaluating on diverse attacks is necessary and identify a set of 5 attacks with low pairwise robustness transfer which we suggest as a starting point when assessing robustness to unforeseen adversaries.
Dataset and model. We use two datasets: CIFAR-10 and ImageNet-100, the 100-class subset of ImageNet-1K (Deng et al., 2009) containing every 10th class by WordNet ID order. We use ResNet-56 for CIFAR-10 and ResNet-50 as implemented in torchvision for ImageNet-100 (He et al., 2016). We give training hyperparameters in Appendix A.
Adversarial training and evaluation procedure. We construct hardened models using adversarial training (Madry et al., 2017). To train against attack A, for each mini-batch of training images, we select a uniform random (incorrect) target class for each image. For maximum distortion size ε, we apply the targeted attack A to the current model with distortion size ε′ ∼ Uniform(0, ε) and update the model with a step of stochastic gradient descent using only the resulting adversarial images (no clean images). The random size scaling improves performance, especially against smaller distortions. We use 10 optimization steps for all attacks during training except for Elastic, where we use 30 steps due to its more difficult optimization problem. When PGD is used, we use step size ε/√steps, the optimal scaling for non-smooth convex functions (Nemirovski & Yudin, 1978; 1983).
We adversarially train 87 models against the 8 attacks from §2 at the distortion sizes described in §3 and evaluate them on the ImageNet-100 and CIFAR-10 validation sets against 200-step targeted attacks with a uniform random (incorrect) target class. This uses more steps for evaluation than train-
Figure 6a data:

Defense \ Attack   | L∞  L2  L1  JPEG Elastic Fog Gabor Snow
Normal Training    |  7  17  22   0    31     16    5    10
L∞ ε = 32          | 88  42  15  14    49     20   55    37
L2 ε = 4800        | 80  88  79  67    48     18   53    38
L1 ε = 612000      | 62  71  89  56    43     18   47    31
JPEG ε = 2         | 65  70  54  92    40     19   52    31
Elastic ε = 16     | 23  25  11   1    91     25   41    40
Fog ε = 8192       |  1   3   8   0    28     91   54    43
Gabor ε = 3200     | 11  18  12   0    39     31   89    47
Snow ε = 8         | 13  15   9   1    39     37   60    93

(a) UAR scores for adv. trained defenses (rows) against attacks (columns) on ImageNet-100. See Figure 12 for more ε values and Appendix C.3.2 for CIFAR-10 results.
[Figure 6b heatmap: pairwise correlations (scale −1.0 to 1.0) of UAR scores between the 8 attacks.]

(b) Correlations between UAR scores in Figure 6a for each attack (rows and columns). Correlation was computed over adversarial defenses in Figure 6a trained without knowledge of the attacks (6 total per pair).
Figure 6: UAR scores demonstrate the need to evaluate against diverse attacks.
ing, per best practices (Carlini et al., 2019). We use UAR to analyze the results in the remainder of this section, directing the reader to Figures 10 and 11 (Appendix C.2) for exhaustive results and to Appendix D for checks of robustness to random seed and number of attack steps.
Existing defense and evaluation methods do not generalize to unforeseen attacks. The many low off-diagonal UAR scores in Figure 6a make clear that while adversarial training is a strong baseline against a fixed distortion, it only rarely confers robustness to unforeseen distortions. Notably, we were not able to achieve a high UAR against Fog except by directly adversarially training against it. Despite the general lack of transfer in Figure 6a, the fairly strong transfer between the Lp-attacks is consistent with recent progress in simultaneous robustness to them (Croce & Hein, 2019).
Figure 6b shows correlations between UAR scores of pairs of attacks A and A′ against defenses adversarially trained without knowledge³ of A or A′. The results demonstrate that defenses trained without knowledge of Lp-attacks have highly correlated UAR scores against the different Lp attacks, but this correlation does not extend to their evaluations against other attacks. This suggests that Lp-evaluations offer limited diversity and may not generalize to other unforeseen attacks.
The L∞, L1, Elastic, Fog, and Snow attacks offer greater diversity. Our results on Lp-evaluation suggest that more diverse attack evaluation is necessary for generalization to unforeseen attacks. As the unexpected correlation between UAR scores against the pairs (Fog, Gabor) and (JPEG, L1) in Figure 6b demonstrates, even attacks with very different distortions may have correlated behaviors. Considering all attacks in Figure 6 together results in significantly more diversity, which we suggest for evaluation against unforeseen attacks. We suggest the 5 attacks (L∞, L1, Elastic, Fog, and Snow) with low UAR against each other and low correlation between UAR scores as a good starting point.
5 JOINT ADVERSARIAL TRAINING: DEFENDING AGAINST TWO DISTORTIONS
A natural idea to improve robustness against unforeseen adversaries is to adversarially train the same model against two different types of distortions simultaneously, with the idea that this will cover a larger portion of the space of distortions. We refer to this as joint adversarial training (Jordan et al., 2019; Tramer & Boneh, 2019). For two attacks A and A′, at each training step, we compute the attacked image under both A and A′ and backpropagate with respect to gradients induced by the image with greater loss. This corresponds to the "max" loss described in Tramer & Boneh (2019). We jointly train models for (L∞, L2), (L∞, L1), and (L∞, Elastic) using the same setup as before.
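The "max" update rule above can be sketched abstractly; `max_loss_update` and the oracle signatures are our own illustrative names:

```python
import numpy as np

def max_loss_update(loss_and_grad_A, loss_and_grad_B, params):
    """'Max' joint adversarial training: attack under both distortion types,
    backpropagate only through the attack achieving the larger loss.

    Each oracle returns (loss, grad_wrt_params) for its attacked batch.
    """
    loss_a, grad_a = loss_and_grad_A(params)
    loss_b, grad_b = loss_and_grad_B(params)
    return grad_a if loss_a >= loss_b else grad_b
```

Per training step, the model thus takes an SGD step only on the currently worst-case distortion type, rather than on both or on their average.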
Figure 7 data (columns: L∞, L2, L1, JPEG, Elastic):

Joint (L∞, L2) training      | L∞  L2  L1  JPEG Elastic
Normal Training              |  7  17  22   0    31
L∞ ε = 1, L2 ε = 300         | 50  60  43  27    41
L∞ ε = 2, L2 ε = 600         | 63  73  53  41    43
L∞ ε = 4, L2 ε = 1200        | 73  81  64  53    45
L∞ ε = 8, L2 ε = 2400        | 80  87  74  62    48
L∞ ε = 16, L2 ε = 4800       | 82  88  79  67    48

Joint (L∞, L1) training      | L∞  L2  L1  JPEG Elastic
Normal Training              |  7  17  22   0    31
L∞ ε = 1, L1 ε = 38250       | 47  61  58  28    41
L∞ ε = 2, L1 ε = 76500       | 48  61  66  31    41
L∞ ε = 4, L1 ε = 153000      | 51  63  72  35    40
L∞ ε = 8, L1 ε = 306000      | 44  56  62  26    36
L∞ ε = 16, L1 ε = 612000     | 45  50  35  26    30

Joint (L∞, Elastic) training | L∞  L2  L1  JPEG Elastic
Normal Training              |  7  17  22   0    31
L∞ ε = 1, Elastic ε = 0.5    | 43  52  39  24    45
L∞ ε = 2, Elastic ε = 1      | 56  62  43  35    54
L∞ ε = 4, Elastic ε = 2      | 68  71  46  41    63
L∞ ε = 8, Elastic ε = 4      | 35  42  28  10    65
L∞ ε = 16, Elastic ε = 8     | 69  54  27  27    43

Figure 7: UAR scores for jointly adv. trained defenses (rows) against distortion types (columns).
Transfer for jointly trained models. Figure 7 reports UAR scores for jointly trained models using ResNet-50 on ImageNet-100; full evaluation accuracies are in Figure 19 (Appendix E). Comparing to Figure 6a and Figure 12 (Appendix E), we see that, relative to training against only L2, joint training against (L∞, L2) slightly improves robustness against L1 without harming robustness against other attacks. In contrast, training against (L∞, L1) is worse than either training against L1 or L∞ separately (except at small ε for L1). Training against (L∞, Elastic) also performs poorly.
Joint training and overfitting. Jointly trained models achieve high training accuracy but poor validation accuracy (Figure 8) that fluctuates substantially across random seeds (Table 4, Appendix E.2). Figure 8 shows the overfitting behavior for (L∞, Elastic): L∞ validation accuracy decreases significantly during training while training accuracy increases. This contrasts with standard adversarial training (Figure 8), where validation accuracy levels off as training accuracy increases.
³We exclude defenses adversarially trained against A and A′ to ensure that attacks are unforeseen.
[Figure 8 plots: adversarial accuracy vs. epoch. Left legend: Train and Val curves for Elastic ε = 4 and L∞ ε = 8 (joint training); right legend: Train and Val curves for L∞ ε = 8 (standard training).]

Figure 8: Left: train and validation curves for joint training against L∞, ε = 8 and Elastic, ε = 4. Right: train and validation curves for standard adversarial training for L∞, ε = 8. The joint validation accuracy of L∞ decreases as training progresses, indicating overfitting.
Overfitting primarily occurs when training against large distortions. We successfully trained against the (L∞, L1) and (L∞, Elastic) pairs for small distortion sizes, with accuracies comparable to but slightly lower than those observed in Figure 11 for training against each attack individually (Figure 18, Appendix E). This agrees with behavior reported by Tramer & Boneh (2019) on CIFAR-10. Our intuition is that harder training tasks (more diverse distortion types, larger ε) make overfitting more likely. We briefly investigate the relation between overfitting and model capacity in Appendix E.3; validation accuracy appears slightly increased for ResNet-101, but overfitting remains.
6 DISCUSSION AND RELATED WORK
We have seen that robustness to one attack provides limited information about robustness to other attacks, and moreover that adversarial training provides limited robustness to unforeseen attacks. These results suggest a need to modify or move beyond adversarial training. While joint adversarial training is one possible alternative, our results show it often leads to overfitting. Even ignoring this, it is not clear that joint training would confer robustness to attacks outside of those trained against.
Evaluating robustness has proven difficult, necessitating detailed study of best practices even for a single fixed attack (Papernot et al., 2017; Athalye et al., 2018). We build on these best practices by showing how to choose and calibrate a diverse set of unforeseen attacks. Our work is a supplement to existing practices, not a replacement; we strongly recommend following the guidelines of Papernot et al. (2017) and Athalye et al. (2018) in addition to our recommendations.
Some caution is necessary when interpreting specific numeric results in our paper. Many previous implementations of adversarial training fell prone to gradient masking (Papernot et al., 2017; Engstrom et al., 2018), with apparently successful training occurring only recently (Madry et al., 2017; Xie et al., 2018). While evaluating with moderately many PGD steps (200) helps guard against this, Qian & Wegman (2019) show that an L∞-trained model that appeared robust against L2 actually had substantially less robustness when evaluated with 10^6 PGD steps. If this effect is pervasive, then there may be even less transfer between attacks than our current results suggest.
For evaluating against a fixed attack, DeepFool (Moosavi-Dezfooli et al., 2015) and CLEVER (Weng et al., 2018) can be seen as existing alternatives to UAR. They work by estimating "empirical robustness", which is the expected minimum ε needed to successfully attack an image. However, these apply only to attacks which optimize over an Lp-ball of radius ε, and CLEVER can be susceptible to gradient masking (Goodfellow, 2018). In addition, empirical robustness is equivalent to linearly averaging accuracy over ε, which has a smaller dynamic range than the geometric average in UAR.
Our results add to a growing line of evidence that evaluating against a single known attack type provides a misleading picture of the robustness of a model (Sharma & Chen, 2017; Engstrom et al., 2017; Jordan et al., 2019; Tramer & Boneh, 2019; Jacobsen et al., 2019). Going one step further, we believe that robustness itself provides only a narrow window into model behavior; in addition to robustness, we should seek to build a diverse toolbox for understanding machine learning models, including visualization (Olah et al., 2018; Zhang & Zhu, 2019), disentanglement of relevant features (Geirhos et al., 2018), and measurement of extrapolation to different datasets (Torralba & Efros, 2011) or the long tail of natural but unusual inputs (Hendrycks et al., 2019). Together, these windows into model behavior can give us a clearer picture of how to make models reliable in the real world.
REFERENCES
Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. CoRR, abs/1707.07397, 2017. URL http://arxiv.org/abs/1707.07397.

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.

Tom B. Brown, Dandelion Mane, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch. CoRR, abs/1712.09665, 2017. URL http://arxiv.org/abs/1712.09665.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE, 2017.

Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian J. Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. CoRR, abs/1902.06705, 2019. URL http://arxiv.org/abs/1902.06705.

Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

Kenneth T. Co, Luis Munoz-Gonzalez, and Emil C. Lupu. Sensitivity of deep convolutional networks to Gabor noise. CoRR, abs/1906.03455, 2019. URL http://arxiv.org/abs/1906.03455.

Francesco Croce and Matthias Hein. Provable robustness against all adversarial lp-perturbations for p ≥ 1. CoRR, abs/1905.11213, 2019. URL http://arxiv.org/abs/1905.11213.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. A rotation and a translation suffice: Fooling CNNs with simple transformations. arXiv preprint arXiv:1712.02779, 2017.

Logan Engstrom, Andrew Ilyas, and Anish Athalye. Evaluating and understanding the robustness of adversarial logit pairing. arXiv preprint arXiv:1807.10272, 2018.

Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.

Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. CoRR, abs/1811.12231, 2018. URL http://arxiv.org/abs/1811.12231.

Ian Goodfellow. Gradient masking causes CLEVER to overestimate adversarial perturbation size. arXiv preprint arXiv:1804.07870, 2018.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Priya Goyal, Piotr Dollar, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630–645. Springer, 2016.

Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019.
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. arXiv preprint arXiv:1907.07174, 2019.

Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, and Nicolas Papernot. Exploiting excessive invariance caused by norm-bounded adversarial robustness, 2019.

Matt Jordan, Naren Manoj, Surbhi Goel, and Alexandros G. Dimakis. Quantifying perceptual distortion of adversarial examples. arXiv preprint arXiv:1902.08265, 2019.

Ares Lagae, Sylvain Lefebvre, George Drettakis, and Philip Dutré. Procedural noise using sparse Gabor convolution. ACM Trans. Graph., 28(3):54:1–54:10, July 2009. ISSN 0730-0301. doi: 10.1145/1531326.1531360. URL http://doi.acm.org/10.1145/1531326.1531360.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. arXiv preprint arXiv:1511.04599, 2015.

Arkadi Nemirovski and D. Yudin. On Cezari's convergence of the steepest descent method for approximating saddle point of convex-concave functions. In Soviet Math. Dokl., volume 19, pp. 258–269, 1978.

Arkadi Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Intersci. Ser. Discrete Math. Wiley, New York, 1983.

Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. The building blocks of interpretability. Distill, 2018. doi: 10.23915/distill.00010. URL https://distill.pub/2018/building-blocks.

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, pp. 506–519. ACM, 2017.

Haifeng Qian and Mark N. Wegman. L2-nonexpansive neural networks. In International Conference on Learning Representations (ICLR), 2019. URL https://openreview.net/forum?id=ByxGSsR9FQ.

Yash Sharma and Pin-Yu Chen. Attacking the Madry defense model with L1-based adversarial examples. arXiv preprint arXiv:1710.10733, 2017.

Richard Shin and Dawn Song. JPEG-resistant adversarial images. In NIPS 2017 Workshop on Machine Learning and Computer Security, 2017.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Antonio Torralba and Alexei A. Efros. Unbiased look at dataset bias. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.

Florian Tramèr and Dan Boneh. Adversarial training and robustness for multiple perturbations. arXiv preprint arXiv:1904.13000, 2019.

Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, and Luca Daniel. Evaluating the robustness of neural networks: An extreme value theory approach. arXiv preprint arXiv:1801.10578, 2018.

Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612, 2018.

Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411, 2018.

Tianyuan Zhang and Zhanxing Zhu. Interpreting adversarially trained convolutional neural networks. In International Conference on Machine Learning (ICML), 2019.
A TRAINING HYPERPARAMETERS
For ImageNet-100, we trained on machines with 8 NVIDIA V100 GPUs using standard data augmentation (He et al., 2016). Following best practices for multi-GPU training (Goyal et al., 2017), we ran synchronized SGD for 90 epochs with batch size 32×8 and a learning rate schedule with 5 "warm-up" epochs and a decay at epochs 30, 60, and 80 by a factor of 10. The initial learning rate after warm-up was 0.1, momentum was 0.9, and weight decay was 10−4. For CIFAR-10, we trained on a single NVIDIA V100 GPU for 200 epochs with batch size 32, initial learning rate 0.1, momentum 0.9, and weight decay 10−4. We decayed the learning rate at epochs 100 and 150.
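The warm-up-plus-step-decay schedule for ImageNet-100 can be sketched as follows. This is an illustrative reimplementation of the schedule described above, not the authors' training code; in particular, the linear interpolation used during warm-up is an assumption.

```python
def learning_rate(epoch, base_lr=0.1, warmup_epochs=5,
                  decay_epochs=(30, 60, 80), decay_factor=0.1):
    """Piecewise learning-rate schedule: linear warm-up, then step decay.

    Illustrative sketch of the ImageNet-100 schedule in Appendix A; the
    exact warm-up interpolation used by the authors may differ.
    """
    if epoch < warmup_epochs:
        # Linearly ramp from base_lr / warmup_epochs up to base_lr.
        return base_lr * (epoch + 1) / warmup_epochs
    lr = base_lr
    for boundary in decay_epochs:
        # Multiply by the decay factor once per boundary already passed.
        if epoch >= boundary:
            lr *= decay_factor
    return lr
```

For example, this gives a rate of 0.1 throughout epochs 5–29, 0.01 after epoch 30, and 10−4 after epoch 80.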
B FURTHER ATTACK DETAILS
B.1 FURTHER EXAMPLES OF ATTACKS
We show images corresponding to those in Figure 2, except that the differences are not scaled. The non-scaled images are shown in Figure 9.
B.2 L1 ATTACK
We chose to use the Frank-Wolfe algorithm for optimizing the L1 attack, as Projected Gradient Descent would require projecting onto a truncated L1 ball, which is a complicated operation. In contrast, Frank-Wolfe only requires optimizing linear functions g⊤x over a truncated L1 ball; this can be done by sorting coordinates by the magnitude of g and moving the top k coordinates to the boundary of their range (with k chosen by binary search). This is detailed in Algorithm 1.
C FULL EVALUATION RESULTS
C.1 L1-JPEG AND L2-JPEG ATTACKS
We present results with two additional versions of the JPEG attack, which impose L1 or L2 constraints on the attack in JPEG-space instead of the L∞ constraint discussed in Section 2. To avoid confusion, in this appendix we denote the original JPEG attack by L∞-JPEG and these variants by L1-JPEG and L2-JPEG, respectively. Comparing the L1-JPEG and L2-JPEG attacks in Figure 10,
Figure 9: Differences between the attacked images and the original image (label "espresso maker") for existing attacks L∞ (4.1m, 11.1k, 32), L2 (1.3m, 4.8k, 99), L1 (224k, 2.6k, 218), and Elastic (3.1m, 15.2k, 253), and for our new attacks JPEG (5.4m, 18.7k, 255), Fog (4.1m, 13.2k, 89), Gabor (3.7m, 13.3k, 50), and Snow (11.4m, 32.0k, 255). The L1, L2, and L∞ norms of each difference are shown in parentheses. As shown, our novel attacks display qualitatively different behavior and do not fall under the Lp threat model. These differences are not scaled and are normalized so that zero difference corresponds to white.
Algorithm 1 Pseudocode for the Frank-Wolfe algorithm for the L1 attack.
1:  Input: function f, initial input x ∈ [0, 1]^d, L1 radius ρ, number of steps T.
2:  Output: approximate maximizer x̂ of f over the truncated L1 ball B1(ρ; x) ∩ [0, 1]^d centered at x.
3:
4:  x̂(0) ← RandomInit(x)                          ▷ Random initialization
5:  for t = 1, . . . , T do
6:      g ← ∇f(x̂(t−1))                            ▷ Obtain gradient
7:      for k = 1, . . . , d do
8:          s_k ← index of the coordinate of g with kth largest magnitude
9:      end for
10:     S_k ← {s_1, . . . , s_k}
11:
12:     for i = 1, . . . , d do                    ▷ Compute move to boundary of [0, 1] for each coordinate
13:         if g_i > 0 then
14:             b_i ← 1 − x_i
15:         else
16:             b_i ← −x_i
17:         end if
18:     end for
19:     M_k ← Σ_{i ∈ S_k} |b_i|                    ▷ Compute L1-perturbation of moving k largest coordinates
20:     k* ← max{k | M_k ≤ ρ}                      ▷ Choose largest k satisfying L1 constraint
21:     for i = 1, . . . , d do                    ▷ Compute x̄ maximizing g⊤x over the L1 ball
22:         if i ∈ S_{k*} then
23:             x̄_i ← x_i + b_i
24:         else if i = s_{k*+1} then
25:             x̄_i ← x_i + (ρ − M_{k*}) sign(g_i)
26:         else
27:             x̄_i ← x_i
28:         end if
29:     end for
30:     x̂(t) ← (1 − 1/t) x̂(t−1) + (1/t) x̄         ▷ Average x̄ with previous iterates
31: end for
32: x̂ ← x̂(T)
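For concreteness, the linear-maximization step and the averaging loop of Algorithm 1 can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation; `l1_lmo` replaces the binary search over k with an equivalent cumulative-sum search.

```python
import numpy as np

def l1_lmo(g, x, rho):
    """Linear maximization oracle: argmax <g, v> over the truncated
    L1 ball {v : ||v - x||_1 <= rho, v in [0, 1]^d} centered at x.
    Moves the coordinates with the largest |g| to their box boundary."""
    d = x.size
    # Per-coordinate move to the boundary of [0, 1] in the ascent direction.
    b = np.where(g > 0, 1.0 - x, -x)
    order = np.argsort(-np.abs(g))          # coordinates by decreasing |g|
    cost = np.cumsum(np.abs(b[order]))      # L1 cost of moving the top-k coords
    k = int(np.searchsorted(cost, rho, side="right"))  # largest k with cost <= rho
    v = x.copy()
    v[order[:k]] += b[order[:k]]
    if k < d:
        # Spend the remaining budget on the next coordinate.
        spent = cost[k - 1] if k > 0 else 0.0
        i = order[k]
        v[i] += (rho - spent) * np.sign(g[i])
    return v

def frank_wolfe_l1(grad_f, x0, rho, steps=20):
    """Frank-Wolfe ascent over the truncated L1 ball (Algorithm 1 sketch)."""
    x = x0.copy()
    for t in range(1, steps + 1):
        v = l1_lmo(grad_f(x), x0, rho)      # ball stays centered at the clean input
        x = (1 - 1 / t) * x + (1 / t) * v   # average the vertex with past iterates
    return x
```

Because each vertex returned by the oracle lies in the truncated ball and the iterate is a convex combination of vertices, the iterate satisfies both the box and L1 constraints at every step without any projection.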
Table 2: ATA values for L1-JPEG and L2-JPEG on ImageNet-100.
Attack      ε1    ε2    ε3    ε4     ε5     ε6      | ATA1  ATA2  ATA3  ATA4  ATA5  ATA6
L2-JPEG     8     16    32    64     128    256     | 84.8  82.5  78.9  72.3  47.5   3.4
L1-JPEG     256   1024  4096  16384  65536  131072  | 84.8  81.8  76.2  67.1  46.4  41.8
we find that they give extremely similar results, so we omit L1-JPEG from the full analysis for brevity and visibility. Calibration values for these attacks are shown in Table 2.
C.2 FULL EVALUATION RESULTS AND ANALYSIS FOR IMAGENET-100
We show the full results of all adversarial attacks against all adversarial defenses for ImageNet-100 in Figure 11. As described, the Lp attacks and defenses give highly correlated information on held-out defenses and attacks, respectively. Thus, we recommend evaluating on a wide range of distortion types. Full UAR scores for ImageNet-100 are also provided in Figure 12.

We further show selected results in Figure 13. As shown, a wide range of ε is required to see the full behavior.
[Figure 10 is a heatmap (color scale 0.0–1.0, adversarial accuracy) whose columns are L2-JPEG attacks (ε = 2 to 256) and L1-JPEG attacks (ε = 128 to 131072) and whose rows are adversarially trained models (normal training and adversarial training against L∞, L2, L1, L∞-JPEG, L2-JPEG, L1-JPEG, and Elastic at a range of ε); individual cell values are not reproduced here.]

Figure 10: A comparison of L1-JPEG and L2-JPEG attacks.
[Figure 11 is a heatmap (color scale 0.0–1.0, adversarial accuracy) whose columns are all evaluation attacks (L∞, L2, L1, L∞-JPEG, L2-JPEG, L1-JPEG, Elastic, Fog, Gabor, and Snow at the full range of ε) and whose rows are all adversarially trained models; individual cell values are not reproduced here.]

Figure 11: Accuracy of adversarial attack (column) against adversarially trained model (row) on ImageNet-100.
[Figure 12 heatmap omitted: rows are adversarially trained defenses (Normal Training and L∞, L∞-JPEG, Fog, L2, L2-JPEG, Gabor, L1, Elastic, Snow defenses, each at several ε); columns are the attack distortion types L∞, L2, L1, L∞-JPEG, L2-JPEG, Elastic, Fog, Gabor, Snow.]
Figure 12: UAR scores (multiplied by 100) for adversarially trained defenses (rows) against distortion types (columns) for ImageNet-100.
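As a rough illustration of how a single entry of this figure can be computed, the sketch below assumes the definition from the main text in which UAR is a defense's summed adversarial accuracy over the calibrated distortion sizes, normalized by the summed ATA values and scaled by 100; the helper is illustrative only, not the released evaluation code.

```python
def uar(adv_accuracies, ata_values):
    """UAR of a defense against one distortion type: summed adversarial
    accuracy over the calibrated distortion sizes, normalized by the summed
    ATA values and scaled by 100 (illustrative sketch, not evaluation code)."""
    assert len(adv_accuracies) == len(ata_values)
    return 100.0 * sum(adv_accuracies) / sum(ata_values)

# L-inf ATA values for CIFAR-10 from Table 3; a defense that exactly matched
# ATA at every calibrated size would score UAR = 100.
ata_linf = [91.0, 87.8, 81.6, 71.3, 46.5, 23.1]
print(uar(ata_linf, ata_linf))  # 100.0
```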
[Figure 13 plots omitted: adversarial accuracy vs. adversarial-training distortion size, for L∞-JPEG attacks against L∞-JPEG-trained models (no attack and ε ∈ {0.125, 0.25, 0.5, 1}) and for L∞ attacks against L∞-trained models (no attack and ε ∈ {2, 4, 8, 16}).]
Figure 13: Adversarial accuracies of attacks on adversarially trained models for different distortion sizes on ImageNet-100. For a given attack ε, the best ε′ to train against satisfies ε′ > ε because the random scaling of ε′ during adversarial training ensures that a typical distortion during adversarial training has size smaller than ε′.
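The random scaling mentioned in the caption can be sketched as follows: a minimal illustration of drawing a per-batch distortion size uniformly from [0, ε], as the Figure 13 and Table 4 captions state. The function name is ours, not from any released code.

```python
import random

def sample_train_eps(eps_train, rng):
    """Draw the per-batch training distortion size uniformly from
    [0, eps_train], so the typical (expected) size is eps_train / 2."""
    return rng.uniform(0.0, eps_train)

rng = random.Random(0)
sizes = [sample_train_eps(8.0, rng) for _ in range(10000)]
mean_size = sum(sizes) / len(sizes)
# mean_size is close to 4.0, i.e. eps_train / 2, which is why training
# against eps' > eps is needed to defend well against attacks of size eps.
```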
C.3 FULL EVALUATION RESULTS AND ANALYSIS FOR CIFAR-10
C.3.1 FULL RESULTS FOR CIFAR-10
We show the results of adversarial attacks and defenses for CIFAR-10 in Figure 14. We experienced difficulty training the L2 and L1 attacks at distortion sizes greater than those shown and have omitted those runs, which we believe may be related to the small size of CIFAR-10 images.
C.3.2 ATA AND UAR FOR CIFAR-10
The ε calibration procedure for CIFAR-10 was similar to that used for ImageNet-100. We started with the perceptually small εmin values in Table 3 and increased ε geometrically with ratio 2 until the adversarial accuracy of an adversarially trained model dropped below 40. Note that this threshold
Table 3: Calibrated distortion sizes and ATA values for ResNet-56 on CIFAR-10
Attack     ε1       ε2      ε3     ε4    ε5    ε6      ATA1  ATA2  ATA3  ATA4  ATA5  ATA6
L∞         1        2       4      8     16    32      91.0  87.8  81.6  71.3  46.5  23.1
L2         40       80      160    320   640   2560    90.1  86.4  79.6  67.3  49.9  17.3
L1         195      390     780    1560  6240  24960   92.2  90.0  83.2  73.8  47.4  35.3
L∞-JPEG    0.03125  0.0625  0.125  0.25  0.5   1       89.7  87.0  83.1  78.6  69.7  35.4
L1-JPEG    2        8       64     256   512   1024    91.4  88.1  80.2  68.9  56.3  37.7
Elastic    0.125    0.25    0.5    1     2     8       87.4  81.3  72.1  58.2  45.4  27.8
is higher for CIFAR-10 because there are fewer classes. The resulting ATA and UAR values for CIFAR-10 are shown in Table 3 and Figure 15. We omitted calibration for the L2-JPEG attack because we chose too small a range of ε for our initial training experiments, and we plan to address this issue in the future.
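The calibration procedure above can be sketched as a simple loop. This is illustrative only: `adv_accuracy` stands in for the expensive step of adversarially training a model at ε and evaluating it, and the decaying accuracy curve in the example is hypothetical.

```python
def calibrate_eps(eps_min, adv_accuracy, threshold=40.0, ratio=2.0, max_sizes=12):
    """Grow eps geometrically from a perceptually small eps_min until the
    adversarially trained model's adversarial accuracy drops below the
    threshold (40 for CIFAR-10, since it has fewer classes)."""
    sizes = [eps_min]
    while adv_accuracy(sizes[-1]) >= threshold and len(sizes) < max_sizes:
        sizes.append(sizes[-1] * ratio)
    return sizes

# Hypothetical accuracy curve that decays with distortion size:
sizes = calibrate_eps(1.0, lambda eps: 90.0 - 10.0 * eps)
print(sizes)  # [1.0, 2.0, 4.0, 8.0]
```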
D ROBUSTNESS OF OUR RESULTS
D.1 REPLICATION
We replicated our results for the first three rows of Figure 11 with different random seeds to see the variation in our results. As shown in Figure 16, deviations in results are minor.
D.2 CONVERGENCE
We replicated the results in Figure 11 with 50 instead of 200 steps to see how the results changed based on the number of steps in the attack. As shown in Figure 17, the deviations are minor.
E FURTHER RESULTS FOR JOINT TRAINING
E.1 FULL EXPERIMENTAL RESULTS
We show the evaluation accuracies of jointly trained models in Figure 18.
We show all the attacks against the jointly adversarially trained defenses in Figure 19.
E.2 DEPENDENCE ON RANDOM SEED
In Table 4, we study the dependence of joint adversarial training on the random seed. We find that at large distortion sizes, joint training for certain pairs of distortions does not produce consistent results over different random initializations.
Table 4: Train and val accuracies for joint adversarial training at large distortion are dependent on seed. For train and val, ε′ is chosen uniformly at random between 0 and ε, and we used 10 steps for L∞ and L1 and 30 steps for elastic. Single adversarial training baselines are also shown.
Training parameters (ResNet-50)     L∞ train  other train  L∞ val  other val
L∞ ε = 8, Elastic ε = 4, Seed 1     90        89           35      74
L∞ ε = 8, Elastic ε = 4, Seed 2     89        90           47      44
L∞ ε = 8, Elastic ε = 4, Seed 3     90        89           29      63
L∞ ε = 16, L1 ε = 612000, Seed 1    86        87           22      16
L∞ ε = 16, L1 ε = 612000, Seed 2    88        87           16      24
L∞ ε = 8                            81        –            74      –
L∞ ε = 16                           68        –            63      –
Elastic ε = 4                       –         88           –       76
L1 ε = 612000                       –         75           –       59
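One simple way to organize joint adversarial training against two distortion types is to alternate which attack generates the adversarial batch. The sketch below shows the bookkeeping only and is hypothetical: the actual attacks, step counts, and the exact mixing strategy behind Table 4 are not reproduced here.

```python
def pick_attack(batch_idx, attack_names):
    """Alternate between attack types across batches so each distortion
    contributes adversarial examples equally during joint training."""
    return attack_names[batch_idx % len(attack_names)]

# A four-batch schedule alternating the two distortion types from Table 4:
schedule = [pick_attack(i, ["linf", "elastic"]) for i in range(4)]
print(schedule)  # ['linf', 'elastic', 'linf', 'elastic']
```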
Table 5: Training and validation numbers for ResNet-101 and ResNet-50 for joint training against L∞, ε = 8 and elastic, ε = 4.
Training parameters                          L∞ train  other train  L∞ val  other val
L∞ ε = 8, Elastic ε = 4, ResNet-50 Seed 1    90        89           35      74
L∞ ε = 8, Elastic ε = 4, ResNet-50 Seed 2    89        90           47      44
L∞ ε = 8, Elastic ε = 4, ResNet-101          90        91           49      46
E.3 OVERFITTING AND MODEL CAPACITY
As a first test to understand the relationship between model capacity and overfitting, we trained ResNet-101 models using the same procedure as in Section 5. Briefly, overfitting still occurs, but ResNet-101 achieves accuracies a few percentage points higher than ResNet-50.
We show the training curves in Figure 20 and the training and validation numbers in Table 5.
[Figure 14 heatmap omitted: adversarial accuracies of each attack (no attack and L∞, L2, L1, L∞-JPEG, L2-JPEG, L1-JPEG, Elastic at several ε) against each adversarially trained CIFAR-10 model.]
Figure 14: Accuracy of adversarial attack (column) against adversarially trained model (row) on CIFAR-10.
[Figure 15 heatmaps omitted: UAR scores for CIFAR-10 defenses (Normal Training and L∞, L∞-JPEG, L2, L1-JPEG, L1, Elastic defenses at several ε) against the distortion types L∞, L2, L1, L∞-JPEG, L1-JPEG, Elastic.]
Figure 15: UAR scores on CIFAR-10. Displayed UAR scores are multiplied by 100 for clarity.
[Figure 16 heatmap omitted: adversarial accuracies of attacks (columns) against adversarially trained L∞, L2, and L1 models (rows), replicated with different random seeds.]
Figure 16: Replica of the first three block rows of Figure 11 with different random seeds. Deviations in results are minor.
[Figure 17 heatmap omitted (truncated in source): adversarial accuracies of attacks (no attack and L∞, L2, L1, L∞-JPEG, L2-JPEG, L1-JPEG, Elastic, Fog, Gabor, Snow at several ε) against adversarially trained models, computed with 50 attack steps as described in Section D.2.]
0 0
059
18 1
0 0
0 0
039
17 4
1 0
0 0
0 0
0 0
7971
44 8
0 0
072
5418
1 0
0 0
0 0
079
7776
7676
7873
4511
176
7056
32 9
3 1
0 0
181
38 5
0 0
0 0
6017
1 0
0 0
4923
6 1
0 0
019
1 0
0 0
0 0
6018
1 0
0 0
0 0
4220
5 1
0 0
0 0
0 0
078
7145
11 0
0 0
7355
21 2
0 0
0 0
0 0
7976
7473
7577
7762
24 3
7571
5830
10 3
1 0
0 0
7940
6 0
0 0
060
18 1
0 0
049
24 6
0 0
0 0
24 1
0 0
0 0
063
23 2
0 0
0 0
041
18 5
1 0
0 0
0 0
0 0
7670
4713
0 0
072
5525
5 1
0 0
0 0
077
7471
7072
7679
7349
1275
7162
3715
5 2
0 0
077
36 6
0 0
0 0
5717
1 0
0 0
4419
4 1
0 0
023
1 0
0 0
0 0
6021
2 0
0 0
0 0
4019
5 1
0 0
0 0
0 0
074
6848
16 1
0 0
7158
3110
2 1
1 1
0 0
7572
7069
6975
7875
6536
7473
6952
2610
4 1
0 0
8247
6 0
0 0
069
26 1
0 0
062
4114
3 0
0 0
29 2
0 0
0 0
068
29 2
0 0
0 0
048
24 5
1 0
0 0
0 0
0 0
7454
17 2
0 0
078
6532
5 0
0 0
0 0
037
11 3
1 0
0 0
0 0
085
7537
6 1
0 0
0 0
081
48 8
0 0
0 0
6829
2 0
0 0
6339
14 2
0 0
028
1 0
0 0
0 0
6527
2 0
0 0
0 0
5228
7 1
0 0
0 0
0 0
075
5520
3 0
0 0
7969
39 8
0 0
0 0
0 0
5620
5 1
0 0
0 0
0 0
8682
6518
2 0
0 0
0 0
7944
8 0
0 0
063
25 2
0 0
058
3613
2 0
0 0
22 2
0 0
0 0
056
18 1
0 0
0 0
041
17 4
1 0
0 0
0 0
0 0
7458
28 4
0 0
078
6842
12 2
0 0
0 0
067
3810
3 1
0 0
0 0
085
8480
53 9
1 0
0 0
078
33 5
0 0
0 0
5316
1 0
0 0
5129
10 2
0 0
015
1 0
0 0
0 0
4110
1 0
0 0
0 0
3012
3 0
0 0
0 0
0 0
074
6131
5 0
0 0
7869
4518
4 1
0 0
0 0
7560
25 7
1 0
1 0
0 0
8584
8376
44 8
1 0
0 0
7627
5 0
0 0
044
12 1
0 0
043
23 7
2 0
0 0
16 2
0 0
0 0
038
9 1
0 0
0 0
028
12 3
1 0
0 0
0 0
0 0
7363
36 6
0 0
076
6948
22 6
1 1
1 0
074
6844
14 3
1 1
1 0
082
8281
7971
4410
1 0
074
23 4
0 0
0 0
35 8
1 0
0 0
3414
4 1
0 0
0 8
1 0
0 0
0 0
27 5
0 0
0 0
0 0
2812
3 0
0 0
0 0
0 0
069
6033
6 0
0 0
7466
4518
4 1
0 0
0 0
7061
3914
4 2
2 1
1 0
8080
7769
5130
14 2
0 0
7127
5 0
0 0
030
6 1
0 0
018
6 2
0 0
0 0
9 1
0 0
0 0
025
5 1
0 0
0 0
025
11 3
1 0
0 0
0 0
0 0
6860
4011
1 0
072
6445
21 6
1 1
1 0
070
6859
35 9
3 3
2 2
178
7878
7564
5039
19 3
167
26 6
1 0
0 0
27 6
1 0
0 0
17 6
2 0
0 0
0 9
1 0
0 0
0 0
21 5
1 0
0 0
0 0
2511
3 1
0 0
0 0
0 0
064
5842
16 2
0 0
6760
4119
7 2
1 0
1 1
6562
5741
18 8
8 5
3 2
7575
7472
6860
5139
14 2
6636
11 2
0 0
041
16 3
0 0
032
17 6
1 0
0 0
16 3
0 0
0 0
030
9 2
0 0
0 0
029
15 6
2 0
0 0
0 0
0 0
6461
5429
4 1
066
6043
22 9
3 1
1 1
165
6563
5842
2523
2016
1172
7272
7170
6967
5934
961
3514
3 1
0 0
4320
5 1
0 0
3622
10 4
1 0
029
9 2
0 0
0 0
4118
4 1
0 0
0 0
3017
7 2
1 0
0 0
0 0
060
5851
32 7
1 0
6256
3922
9 4
1 1
1 1
6161
5955
4633
3128
2417
6565
6464
6362
5951
34 8
0.0
0.2
0.4
0.6
0.8
1.0
Adversarial accuracy
Figu
re17
:Rep
lica
ofFi
gure
11w
ith50
step
sin
stea
dof
200
atev
alua
tion
time.
Dev
iatio
nsin
resu
ltsar
em
inor
.
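The step-count comparison above (50 vs. 200 evaluation steps) can be illustrated with a minimal sketch of L∞-bounded projected gradient descent. This is an illustrative toy, not the paper's implementation: `pgd_linf`, `grad_fn`, and the linear loss are assumptions made for the demo, and the `2.5·ε/steps` step size is just a common heuristic.

```python
import numpy as np

def pgd_linf(x, grad_fn, eps, n_steps, step_size=None):
    """Projected gradient ascent within an L-infinity ball of radius eps.

    grad_fn(x) returns the gradient of the loss w.r.t. x; each step moves
    along the sign of the gradient, then the perturbation is clipped back
    into [-eps, eps] coordinate-wise.
    """
    if step_size is None:
        step_size = 2.5 * eps / n_steps  # common heuristic, not from the paper
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(n_steps):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        x_adv = x0 + np.clip(x_adv - x0, -eps, eps)
    return x_adv

# Toy linear loss(x) = w . x: the optimum inside the ball is x0 + eps * sign(w),
# so both step budgets should land on the same corner of the ball.
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda x: w
x0 = np.zeros(3)
adv_50 = pgd_linf(x0, grad_fn, eps=8 / 255, n_steps=50)
adv_200 = pgd_linf(x0, grad_fn, eps=8 / 255, n_steps=200)
assert np.allclose(adv_50, adv_200)
```

On a non-concave loss the two step budgets need not agree exactly, which is why the figure reports the deviation empirically rather than assuming it away.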
[Figure 18: six accuracy heatmaps. Panels: L∞ attacking jointly trained (L∞, L2); L∞ attacking jointly trained (L∞, L1); L∞ attacking jointly trained (L∞, Elastic); L2 attacking jointly trained (L∞, L2); L1 attacking jointly trained (L∞, L1); Elastic attacking jointly trained (L∞, Elastic). Axes range over L∞ ε ∈ {1, 2, 4, 8, 16}, L2 ε ∈ {300, 600, 1200, 2400, 4800}, L1 ε ∈ {38250.1, 76500, 153000, 306000, 612000}, and Elastic ε ∈ {0.5, 1, 2, 4, 8}; the individual cell values are not reliably recoverable from the extraction.]

Figure 18: Evaluation accuracies of jointly trained models. Attack and training ε values are equal.
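Joint adversarial training, as evaluated above, trains one model against multiple distortion types at once; a common variant crafts one adversarial example per distortion model and trains on whichever achieves the higher loss. A minimal sketch under toy assumptions (the quadratic loss, radii, step sizes, and function names are illustrative, not the paper's code):

```python
import numpy as np

def project_linf(delta, eps):
    """Clip the perturbation into the L-infinity ball of radius eps."""
    return np.clip(delta, -eps, eps)

def project_l2(delta, eps):
    """Rescale the perturbation into the L2 ball of radius eps."""
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

def worst_case_example(x, loss_fn, grad_fn, attacks, n_steps=10):
    """Run projected gradient ascent under each distortion model and
    return the adversarial example with the highest loss."""
    candidates = []
    for project, eps, step in attacks:
        delta = np.zeros_like(x)
        for _ in range(n_steps):
            delta = project(delta + step * grad_fn(x + delta), eps)
        candidates.append(x + delta)
    return max(candidates, key=loss_fn)

# Toy quadratic loss with an analytic gradient (an assumption for the demo).
loss_fn = lambda x: float(np.sum(x ** 2))
grad_fn = lambda x: 2 * x
x = np.array([0.5, -0.5])
attacks = [
    (project_linf, 0.25, 0.1),  # L-infinity ball, radius 0.25
    (project_l2, 0.5, 0.1),     # L2 ball, radius 0.5
]
x_adv = worst_case_example(x, loss_fn, grad_fn, attacks)
assert loss_fn(x_adv) >= loss_fn(x)
```

Training on the per-example max, rather than alternating attacks between batches, is one of several joint-training schedules; either way the resulting model is only exposed to the distortion types chosen at training time.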
Figure 19: All attacks (columns) vs. jointly adversarially trained defense (rows). Columns, left to right: No attack; L∞ ε = 1, 2, 4, 8, 16, 32; L2 ε = 150, 300, 600, 1200, 2400, 4800; L1 ε = 0.063526, 0.127053, 0.254106, 0.508211, 1.01642, 2.03284, 4.06569; L∞-JPEG ε = 0.03125, 0.0625, 0.125, 0.25, 0.5, 1, 2; L2-JPEG ε = 2, 4, 8, 16, 32, 64, 128, 256; L1-JPEG ε = 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072; Elastic ε = 0.25, 0.5, 1, 2, 4, 8, 16. Rows are defenses jointly trained at the listed ε pairs:

(L∞ ε = 1, L2 ε = 300): 86 84 75 24 0 0 0 85 83 65 8 0 0 82 75 55 17 1 0 0 84 75 21 0 0 0 0 85 84 73 17 0 0 0 0 79 66 35 6 0 0 0 0 0 0 0 84 77 42 4 0 0 0
(L∞ ε = 2, L2 ε = 600): 85 84 80 59 4 0 0 85 83 78 43 1 0 83 81 70 44 8 0 0 84 82 62 4 0 0 0 85 84 81 53 3 0 0 0 81 76 60 27 4 0 0 0 0 0 0 83 79 57 10 0 0 0
(L∞ ε = 4, L2 ε = 1200): 82 81 80 73 31 1 0 81 81 79 68 16 0 81 79 75 64 33 4 0 81 80 75 34 0 0 0 81 81 80 72 25 0 0 0 80 77 72 55 24 4 0 0 0 0 0 80 78 66 22 1 0 0
(L∞ ε = 8, L2 ε = 2400): 77 76 76 73 58 7 0 76 76 75 72 48 2 76 76 74 69 55 21 1 76 76 74 62 9 0 0 76 76 76 73 55 6 0 0 76 75 74 67 51 22 5 1 0 0 0 75 74 68 39 4 0 0
(L∞ ε = 16, L2 ε = 4800): 69 68 68 67 62 30 1 69 69 69 67 59 19 69 68 68 66 60 42 12 69 68 68 64 38 1 0 69 69 69 68 62 33 1 0 69 68 68 66 61 49 29 12 5 2 3 68 66 63 48 11 1 0
(L∞ ε = 1, L1 ε = 0.254106): 85 82 70 19 0 0 0 84 82 65 12 0 0 84 82 76 55 12 0 0 83 72 26 1 0 0 0 84 83 74 30 1 0 0 0 82 75 55 20 2 0 0 0 0 0 0 83 76 44 4 0 0 0
(L∞ ε = 2, L1 ε = 0.508211): 84 81 69 25 0 0 0 83 80 65 17 0 0 83 83 80 69 32 2 0 81 72 36 2 0 0 0 83 82 74 40 2 0 0 0 82 79 69 44 12 1 0 0 0 0 0 82 76 48 5 0 0 0
(L∞ ε = 4, L1 ε = 1.01642): 82 79 70 33 1 0 0 80 78 67 26 1 0 81 80 79 72 49 9 0 78 71 48 8 0 0 0 81 79 73 50 8 0 0 0 80 79 75 63 33 7 0 0 0 0 0 79 73 52 8 0 0 0
(L∞ ε = 8, L1 ε = 2.03284): 77 73 62 22 1 0 0 75 73 58 16 0 0 76 74 72 61 33 4 0 73 62 28 2 0 0 0 76 73 64 33 3 0 0 0 74 71 62 43 18 3 0 0 0 0 0 73 67 43 5 0 0 0
(L∞ ε = 16, L1 ε = 4.06569): 67 64 54 20 1 0 0 66 63 51 13 0 0 65 63 58 41 12 1 0 64 54 20 1 0 0 0 66 64 56 24 1 0 0 0 63 57 45 22 5 1 0 0 0 0 0 64 57 34 4 0 0 0
(L∞ ε = 1, Elastic ε = 0.5): 86 83 65 9 0 0 0 84 79 44 2 0 0 79 68 41 9 1 0 0 84 72 14 0 0 0 0 85 82 59 8 0 0 0 0 71 52 23 4 0 0 0 0 0 0 0 85 82 67 15 0 0 0
(L∞ ε = 2, Elastic ε = 1): 85 84 78 40 1 0 0 84 82 68 14 0 0 80 72 55 22 2 0 0 84 80 45 1 0 0 0 84 83 72 27 1 0 0 0 74 62 38 12 2 0 0 0 0 0 0 84 83 78 47 2 0 0
(L∞ ε = 4, Elastic ε = 2): 83 83 81 69 14 0 0 83 81 75 41 1 0 78 71 56 30 7 0 0 82 80 62 5 0 0 0 83 82 75 40 3 0 0 0 74 64 44 19 4 0 0 0 0 0 0 83 82 80 70 19 0 0
(L∞ ε = 8, Elastic ε = 4): 76 70 49 9 0 0 0 72 63 29 2 0 0 61 46 22 5 1 0 0 64 33 3 0 0 0 0 73 63 30 2 0 0 0 0 56 38 16 4 1 0 0 0 0 0 0 74 73 70 65 45 3 0
(L∞ ε = 16, Elastic ε = 8): 69 68 67 62 45 7 0 68 66 55 26 3 0 48 42 27 12 3 0 0 66 60 33 4 0 0 0 68 64 50 21 2 0 0 0 59 51 37 21 8 2 0 0 0 0 0 67 64 58 33 6 0 0
[Figure 20: adversarial accuracy vs. epoch. Curves: Train, elastic ε = 4; Train, L∞ ε = 8; Val, elastic ε = 4; Val, L∞ ε = 8.]

Figure 20: Train and validation curves for joint training against L∞, ε = 4 and elastic, ε = 8 using ResNet-101. As shown, the validation accuracies decrease as training progresses, indicating overfitting.
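The overfitting pattern in Figure 20 suggests selecting checkpoints on validation adversarial accuracy rather than training accuracy. A minimal sketch (the function name and the accuracy curve are illustrative assumptions, not values from the figure):

```python
def best_checkpoint(val_adv_accuracy):
    """Return the epoch whose validation adversarial accuracy is highest.

    When training adversarial accuracy keeps rising while validation
    adversarial accuracy decays, the model is overfitting to the
    training-time attack and an earlier checkpoint is preferable.
    """
    return max(range(len(val_adv_accuracy)), key=val_adv_accuracy.__getitem__)

# Hypothetical curve with the same shape as Figure 20: rises, then decays.
val_adv_accuracy = [22, 38, 51, 56, 53, 49, 44]
assert best_checkpoint(val_adv_accuracy) == 3
```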