Boundary thickness and robustness in learning models
Abstract
Robustness of machine learning models to various adversarial and non-adversarial corruptions continues to be of interest. In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness. Thick decision boundaries lead to improved performance, while thin decision boundaries lead to overfitting (e.g., measured by the robust generalization gap between training and testing) and lower robustness. We show that a thicker boundary helps improve robustness against adversarial examples (e.g., improving the robust test accuracy of adversarial training) as well as so-called out-of-distribution (OOD) transforms, and we show that many commonly used regularization and data augmentation procedures can increase boundary thickness. On the theoretical side, we establish that maximizing boundary thickness during training is akin to so-called mixup training. Using these observations, we show that noise augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms. We also show that the performance improvement in several lines of recent work happens in conjunction with a thicker boundary.
1 Introduction
Recent work has re-highlighted the importance of various forms of robustness of machine learning models. For example, it is by now well known that by modifying natural images with barely visible perturbations, one can get neural networks to misclassify images [goodfellow2014explaining, nguyen2015deep, moosavi2016deepfool, carlini2017towards]. Researchers have come to call these slightly-but-adversarially perturbed images adversarial examples. As another example, it has become well known that, even aside from such worst-case adversarial examples, neural networks are also vulnerable to so-called out-of-distribution (OOD) transforms [hendrycks2019benchmarking], i.e., those which contain common corruptions and perturbations that are frequently encountered in natural images. These topics have received interest because they provide visually compelling examples that expose an inherent lack of stability/robustness in these already hard-to-interpret models [madry2017towards, zhang2019theoretically, cohen2019certified, hendrycks2019augmix, papernot2016distillation, athalye2018obfuscated, tramer2017ensemble], but of course similar concerns arise in other less visually compelling situations.
In this paper, we study neural network robustness through the lens of what we will call boundary thickness, a new and intuitive concept that we introduce. Boundary thickness can be considered a generalization of the standard margin, used in max-margin type learning [elsayed2018large, bartlett2017spectrally, sokolic2017robust]. Intuitively speaking, the boundary thickness of a classifier measures the expected distance to travel along line segments between different classes across a decision boundary. We show that thick decision boundaries have a regularization effect that improves robustness, while thin decision boundaries lead to overfitting and reduced robustness. We also illustrate that the performance improvement in several lines of recent work happens in conjunction with a thicker boundary, suggesting the utility of this notion more generally.
More specifically, for adversarial robustness, we show that five commonly used ways to improve robustness can increase boundary thickness and reduce the robust generalization gap (which is the difference between robust training accuracy and robust test accuracy) during adversarial training. We also show that trained networks with thick decision boundaries tend to be more robust against OOD transforms. We focus on mixup training [zhang2017mixup], a recently described regularization technique that involves training on data that have been augmented with pseudo-data points that are convex combinations of the true data points. We show that mixup improves robustness to OOD transforms, while at the same time achieving a thicker decision boundary. In fact, boundary thickness can be understood as a dual concept to the mixup training objective, in the sense that the former is maximized as a result of minimizing the mixup loss. In contrast to measures like margin, boundary thickness is easy to measure, and (as we observe through counterexamples) boundary thickness can differentiate neural networks with different robust generalization gaps, while margin cannot.
For those interested primarily in training, our observations also lead to novel training procedures. Specifically, we design and study a novel noise-augmented extension of mixup, referred to as noisy mixup, which augments the data through a mixup with random noise, to improve robustness to image imperfections. We show that noisy mixup thickens the boundary, and thus it significantly improves robustness, against both black-box and white-box adversarial attacks, as well as OOD transforms.
In more detail, here is a summary of our main contributions.

We introduce the concept of boundary thickness (Section 2), and we illustrate its connection to various existing concepts, including showing that as a special case it reduces to margin.

We demonstrate empirically that a thin decision boundary leads to poor adversarial robustness as well as poor OOD robustness (Section 3), and we evaluate the effect of model adjustments that affect boundary thickness. In particular, we show that five commonly used regularization and data augmentation schemes ($\ell_1$ regularization, $\ell_2$ regularization, large learning rate [li2019towards], early stopping, and cutout [devries2017improved]) all increase boundary thickness and reduce overfitting of adversarially trained models (measured by the robust accuracy gap between training and testing). We also show that boundary thickness outperforms margin as a metric for measuring the robust generalization gap.

We show that our new insights on boundary thickness pave the way for the design of new robust training schemes (Section 4). In particular, we design a noise-augmentation training scheme that we call noisy mixup to increase boundary thickness and improve the robust test accuracy of mixup for both adversarial examples and OOD transforms. We also show that mixup achieves the minimax decision boundary thickness, providing a theoretical justification for both mixup and noisy mixup.
Overall, our main conclusion is the following.
Boundary thickness is a reliable and easy-to-measure metric that is associated with model robustness, and training a neural network while ensuring a thick boundary can improve robustness in various ways that have received attention recently.
Related work.
Both adversarial robustness [goodfellow2014explaining, nguyen2015deep, moosavi2016deepfool, carlini2017towards, madry2017towards, zhang2019theoretically, cohen2019certified, athalye2018obfuscated] and OOD robustness [hendrycks2019augmix, hendrycks2019benchmarking, yin2019fourier, liang2017enhancing, snoek2019can] have been well studied in the literature. From a geometric perspective, one expects robustness of a machine learning model to relate to its decision boundary. In [goodfellow2014explaining], the authors claim that adversarial examples arise from the linear nature of neural networks, hinting at the relationship between decision boundary and robustness. In [tanay2016boundary], the authors provide the different explanation that the decision boundary is not necessarily linear, but it tends to lie close to the "data submanifold." This explanation is supported by the idea that cross-entropy loss leads to poor margins [nar2018cross]. Some other works also study the connection between geometric properties of a decision boundary and the robustness of the model, e.g., on the boundary curvature [moosavi2017universal, fawzi2017robustness]. Another related line of recent work points out that the inductive bias of neural networks towards "simple functions" may have a negative effect on network robustness [nakkiran2019adversarial], though being useful to explain generalization [de2019random, valle2018deep]. These papers support our observation that natural training tends to generate simple, thin, but easy-to-attack decision boundaries, while avoiding thicker, more robust boundaries.
Mixup is a regularization technique introduced by [zhang2017mixup]. In mixup, each training sample $\tilde{x}$ is obtained from two samples $x_i$ and $x_j$ using a linear combination $\tilde{x} = \lambda x_i + (1 - \lambda) x_j$, with some random $\lambda \in [0, 1]$. The label is similarly obtained by a linear combination. Mixup has been successfully combined with other data augmentations to improve accuracy. For instance, [lamb2019interpolated] uses mixup to interpolate adversarial examples to improve adversarial robustness. The authors in [hendrycks2019augmix] mix images augmented with various forms of transformations with a smoothness training objective to improve OOD robustness. Compared to these prior works, the extended mixup with simple noise-image augmentation studied in our paper is motivated from the perspective of decision boundaries, and it provides a concrete explanation for the performance improvement, as regularization is introduced by a thicker boundary. Another recent paper [rice2020overfitting] also shows the importance of reducing overfitting in adversarial training, e.g., using early stopping, which we also demonstrate as one way to increase boundary thickness.
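As a concrete reference, the following is a minimal NumPy sketch of the mixup augmentation described above. The Beta($\alpha$, $\alpha$) sampling of the mixing coefficient follows the original mixup recipe; pairing each sample with a random permutation of the batch is one common implementation choice, and the function name is ours.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=1.0, rng=None):
    """Form a mixup batch: convex combinations of random sample pairs.

    x: (n, d) inputs; y_onehot: (n, c) one-hot labels.
    Each sample is mixed with a random partner from the same batch,
    with coefficient lam ~ Beta(alpha, alpha) drawn per pair.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    perm = rng.permutation(n)                   # random partner for each sample
    lam = rng.beta(alpha, alpha, size=(n, 1))   # per-pair mixing coefficients
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

Note that the mixed labels remain valid probability vectors, so the usual cross-entropy loss applies unchanged.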
2 Boundary Thickness
In this section, we introduce boundary thickness and discuss its connection with related notions.
2.1 Boundary thickness
Consider a classification problem with $C$ classes on the domain space $\mathcal{X} \subseteq \mathbb{R}^d$ of data $x$. Let $f : \mathbb{R}^d \to \mathbb{R}^C$ be the prediction function, so that for class $i$, $f_i(x)$ represents the posterior probability $\Pr[y = i \mid x]$, where $(x, y)$ represents a feature vector and response pair. Clearly, $\sum_{i=1}^{C} f_i(x) = 1$. For neural networks, the function $f$ is the output of the softmax layer. In the following definition, we quantify the thickness of a decision boundary by measuring the posterior probability difference on line segments connecting pairs of points $(x_r, x_s)$ (where $x_r$ and $x_s$ are not restricted to the training set).
Definition 1 (Boundary Thickness).
For $\alpha < \beta$ and a distribution $p$ over pairs of points $(x_r, x_s)$, let the predicted labels of $x_r$ and $x_s$ be $i$ and $j$ respectively. Then, the boundary thickness of a prediction function $f$ is

(1) $\Theta(f, \alpha, \beta, p) = \mathbb{E}_{(x_r, x_s) \sim p} \left[ \|x_r - x_s\| \int_0^1 \mathbb{1}\{\alpha < g_{ij}(x(t)) < \beta\}\, dt \right],$

where $g_{ij}(x) = f_i(x) - f_j(x)$, $\mathbb{1}\{\cdot\}$ is the indicator function, and $x(t) = t x_r + (1 - t) x_s$.
Intuitively, boundary thickness captures the distance between the two level sets $g_{ij}(x) = \alpha$ and $g_{ij}(x) = \beta$ by measuring the expected gap on random line segments in $\mathcal{X}$. See Figure 1. Note that in addition to the two constants, $\alpha$ and $\beta$, Definition 1 of boundary thickness requires one to specify a distribution $p$ to choose pairs of points $(x_r, x_s)$. We show that specific instances of $p$ recover margin (Section 2.2) and mixup regularization (Section A). For the rest of the paper, we set $p$ as follows. Choose $x_r$ uniformly at random from the training set, and denote its predicted label by $i$. Then, choose $x_s$ to be an adversarial example generated by attacking $x_r$ towards a random target class $j \neq i$. We first look at a simple example on linear classifiers to illustrate the concept.
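To make Definition 1 concrete, the following is a small sketch (ours, not the authors' released code) that estimates the single-segment thickness by discretizing the integral over $t$. Here `f` is any function returning softmax probabilities for a batch of points; in practice $x_s$ would be produced by an adversarial attack on $x_r$.

```python
import numpy as np

def boundary_thickness(f, x_r, x_s, alpha=0.0, beta=0.75, n_points=1001):
    """Estimate boundary thickness along the segment x(t) = t*x_r + (1-t)*x_s.

    f maps an (n, d) batch to (n, C) softmax probabilities.  The integral
    over t in Definition 1 is approximated by an average over a grid of t.
    """
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    segment = t * x_r[None, :] + (1.0 - t) * x_s[None, :]
    i = int(np.argmax(f(x_r[None, :])))     # predicted label of x_r
    j = int(np.argmax(f(x_s[None, :])))     # predicted label of x_s
    probs = f(segment)
    g_ij = probs[:, i] - probs[:, j]        # g_{ij} along the segment
    frac = np.mean((alpha < g_ij) & (g_ij < beta))
    return np.linalg.norm(x_r - x_s) * frac
```

Averaging this quantity over many sampled pairs gives the expectation in (1).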
Example 1 (Binary Linear Classifier).
Consider a binary linear classifier, with weights $w$ and bias $b$. The prediction score vector is $f(x) = [\sigma(w^\top x + b),\ 1 - \sigma(w^\top x + b)]$, where $\sigma(\cdot)$ is the sigmoid function. In this case, measuring thickness in the adversarial direction means that we choose $x_r$ and $x_s$ such that $x_r - x_s$ is parallel to $w$. In the following proposition, we quantify the boundary thickness for a binary linear classifier. (See Section B.1 for the proof.)
Proposition 2.1 (Boundary Thickness of Binary Linear Classifier).
Let $g_{01}(x) = f_0(x) - f_1(x) = 2\sigma(w^\top x + b) - 1$. If $[\alpha, \beta] \subseteq [g_{01}(x_s), g_{01}(x_r)]$, then the thickness of the binary linear classifier is given by:

(2) $\Theta(f, \alpha, \beta) = \frac{\sigma^{-1}\big(\tfrac{1+\beta}{2}\big) - \sigma^{-1}\big(\tfrac{1+\alpha}{2}\big)}{\|w\|}.$
Note that $\alpha$ and $\beta$ should be chosen such that the condition in Proposition 2.1 is satisfied. Otherwise, for linear classifiers, the segment between $x_r$ and $x_s$ is not long enough to span the gap between the two level sets $g_{01}(x) = \alpha$ and $g_{01}(x) = \beta$, and so it cannot simultaneously intersect the two level sets.
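For the linear case, the closed form can be evaluated directly. The helper below implements the expression we use for Proposition 2.1, namely thickness $= \big(\sigma^{-1}(\tfrac{1+\beta}{2}) - \sigma^{-1}(\tfrac{1+\alpha}{2})\big)/\|w\|$; the function name is ours, and the formula is valid only under the spanning condition above.

```python
import numpy as np

def linear_thickness(w, alpha, beta):
    """Boundary thickness of the binary linear classifier f(x) = sigma(w.x + b),
    assuming [alpha, beta] lies inside [g01(x_s), g01(x_r)] for the chosen
    segment endpoints (the condition in Proposition 2.1)."""
    logit = lambda p: np.log(p / (1.0 - p))  # inverse of the sigmoid
    return (logit((1.0 + beta) / 2.0) - logit((1.0 + alpha) / 2.0)) / np.linalg.norm(w)
```

For instance, with $\alpha = 0$ and $\beta = 0.75$ the numerator equals $\log 7$, and doubling $\|w\|$ halves the thickness, matching the intuition that small-norm (well-regularized) linear classifiers have thicker boundaries.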
For a pictorial illustration of why boundary thickness should be related to robustness and why a thicker boundary should be desirable, see Figure 1. The three curves in each panel represent the three level sets $g_{ij}(x) = \alpha$, $g_{ij}(x) = 0$, and $g_{ij}(x) = \beta$. The thinner boundary easily fits the narrow space between two different classes, but it is easier to attack. The thicker boundary, however, is harder to fit to the data with a small loss, but it is also more robust and harder to attack. To further justify this intuition, we provide an additional example in Section C. Note that the intuition discussed here is reminiscent of max-margin optimization, but it is in fact more general. In Section 2.2, we highlight differences between the two concepts (and later, in Section 3.3, we also show that margin is not a particularly good indicator of robust performance).
2.2 Boundary thickness generalizes margin
We first show that boundary thickness reduces to margin in the special case of binary linear SVM. We then extend this result to general classifiers.
Example 2 (Support Vector Machines).
As an application of Proposition 2.1, we can compute the boundary thickness of a binary SVM, which we show is equal to the margin. Suppose we choose $\alpha$ and $\beta$ to be the values of $g_{01}$ evaluated at two support vectors $x_s$ and $x_r$, i.e., at points with $w^\top x_s + b = -1$ and $w^\top x_r + b = 1$. Then, $\alpha = 2\sigma(-1) - 1$ and $\beta = 2\sigma(1) - 1$. Thus, from (2), we obtain $\Theta(f, \alpha, \beta) = (1 - (-1))/\|w\| = 2/\|w\|$, which is the (input-space) margin of an SVM.
We can also show that the reduction to margin applies to more general classifiers. Let $\partial_{ij}$ denote the decision boundary between classes $i$ and $j$. The (input-space) margin [elsayed2018large] of $f$ on a dataset $\mathcal{D}$ is defined as

(3) $\mathrm{margin}(f, \mathcal{D}) = \min_{x \in \mathcal{D},\, j \neq i} \|x - \mathrm{proj}_{ij}(x)\|,$

where $i$ is the predicted label of $x$, and $\mathrm{proj}_{ij}(x)$ is the projection of $x$ onto the decision boundary $\partial_{ij}$. See Figure 1.
Boundary thickness, for the case when $\alpha = 0$, $\beta = 1$, and when $x_s$ is so chosen that it is the projection $\mathrm{proj}_{ij}(x_r)$ for the worst-case class $j$, reduces to margin. See Figure 1 for an illustration of this relationship for a two-class problem. Note that the left-hand side of (4) is a "worst-case" version of the boundary thickness in (1). This can be formalized in the following proposition. (See Section B.2 for the proof.)
Proposition 2.2 (Margin is a Special Case of Boundary Thickness).
Choose $x_r$ as an arbitrary point in the dataset $\mathcal{D}$, with predicted label $i$. For another class $j \neq i$, choose $x_s = \mathrm{proj}_{ij}(x_r)$. Then,

(4) $\min_{x_r \in \mathcal{D},\, j \neq i} \Theta\big(f, 0, 1, \{(x_r, x_s)\}\big) = \mathrm{margin}(f, \mathcal{D}).$
Remark 1 (Margin versus Thickness as a Metric).
It is often impractical to compute the margin for general nonlinear functions. On the other hand, as we illustrate below, measuring boundary thickness is straightforward. As noted by [zhang2017mixup], using mixup tends to make a decision boundary more “linear,” which helps to reduce unnecessary oscillations in the boundary. As we show in Section 4.1, mixup effectively makes the boundary thicker. This effect is not directly achievable by increasing margin.
2.3 A thick boundary mitigates boundary tilting
Boundary tilting was introduced by [tanay2016boundary] to capture the idea that for many neural networks the decision boundary "tilts" away from the max-margin solution and instead leans towards a "data submanifold," which then makes the model less robust. Define the cosine similarity between two vectors $u$ and $v$ as:

(5) $\cos(u, v) = \frac{u^\top v}{\|u\| \|v\|}.$

For a dataset $\mathcal{D}$, in which the two classes are linearly separable, boundary tilting can be defined as the worst-case cosine similarity between a classifier $w$ and the hard-SVM solution $w_{\mathrm{svm}}$:

(6) $T(c) = \min_{w}\ \cos(w, w_{\mathrm{svm}}) \quad \text{s.t. } y_n\, w^\top x_n \ge 1\ \ \forall n,\ \ \|w\| \le c,$

which is a function of the constant $c$ in the norm constraint $\|w\| \le c$. In the following proposition, we formalize the idea that boundary thickness tends to mitigate tilting. (See Section B.3 for the proof; and see also Figure 1.)
which is a function of the in the constraint . In the following proposition, we formalize the idea that boundary thickness tends to mitigate tilting. (See Section B.3 for the proof; and see also Figure 1.)
Proposition 2.3 (A Thick Boundary Mitigates Boundary Tilting).
The worst-case boundary tilting $T(c)$ is a non-increasing function of $c$.
A smaller cosine similarity between the classifier $w$ and the SVM solution corresponds to more tilting. From Proposition 2.1, for a linear classifier, we know that $\|w\|$ is inversely proportional to thickness. Thus, $T$ is a non-decreasing function of thickness. That is, a thicker boundary leads to a larger $T(c)$, which means the worst-case boundary tilting is mitigated. We demonstrate Proposition 2.3 on more general nonlinear classifiers in Section D.
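To illustrate Proposition 2.3 numerically, the brute-force sketch below (our construction, on a hypothetical 2-D toy dataset) discretizes definition (6): among direction vectors with functional margin at least 1 and norm at most $c$, it returns the smallest cosine similarity to the SVM direction. As $c$ grows, the feasible set grows, so $T(c)$ cannot increase.

```python
import numpy as np

def worst_case_tilting(X, y, w_svm, c, n_angles=720):
    """Brute-force estimate of T(c) on 2-D data: among weight vectors w with
    ||w|| <= c that separate the data with functional margin >= 1
    (y_n * w.x_n >= 1), return the smallest cosine similarity to w_svm."""
    best = None
    for ang in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        d = np.array([np.cos(ang), np.sin(ang)])    # unit direction
        m = np.min(y * (X @ d))                     # functional margin of d
        if m <= 0:
            continue                                # d does not separate the data
        w = d / m                                   # smallest-norm w with margin 1
        if np.linalg.norm(w) > c:
            continue
        cos = float(w @ w_svm) / (np.linalg.norm(w) * np.linalg.norm(w_svm))
        best = cos if best is None else min(best, cos)
    return best
```

On four points separable along the first coordinate, with $w_{\mathrm{svm}} = (1, 0)$, one finds $T(1) = 1$ (only the max-margin direction fits the norm budget), while $T(3) < T(2) < 1$: a looser norm bound, i.e., a thinner boundary, permits more tilting.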
3 Boundary Thickness and Robustness
In this section, we measure the change in boundary thickness by slightly altering the training algorithm in various ways, and we illustrate the corresponding change in robust accuracy. We show that across many different training schemes, boundary thickness corresponds strongly with model robustness. We observe this correspondence for both nonadversarial as well as adversarial training. We also present a use case illustrating why using boundary thickness rather than margin as a metric for robustness is useful. More specifically, we show that a thicker boundary reduces overfitting in adversarial training, while margin is unable to differentiate different levels of overfitting.
3.1 Nonadversarial training
Here, we compare the boundary thicknesses and robustness of models trained with three different schemes on CIFAR-10, namely training without weight decay, training with standard weight decay, and mixup training [zhang2017mixup]. Note that these three training schemes impose increasingly stronger regularization. We use different neural networks, including ResNets, VGGs, and DenseNet. (Footnote 1: The models in Figure 2 are from https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py.) All models are trained with the same initial learning rate of 0.1. At both epoch 100 and epoch 150, we reduce the current learning rate by a factor of 10. The thickness of the decision boundary is measured as described in Section 2 with $\alpha = 0$ and $\beta = 0.75$. When measuring thickness along the adversarial direction, we use a PGD-20 attack with size 1.0 and step size 0.2. The results are shown in Figure 2.
From Figure 2, we see that the thickness of mixup is larger than that of training with standard weight decay, which is in turn larger than that of training without weight decay. From the drop in thickness at epochs 100 and 150, we conclude that learning rate decay reduces boundary thickness. We then compare the OOD robustness of the three training procedures on the same set of trained networks. For OOD transforms, we follow the setup in [hendrycks2019using], and we evaluate the trained neural networks on CIFAR-10-C, which contains 15 different types of corruptions, including noise, blur, weather, and digital corruptions. From Figure 2, we see that OOD robustness corresponds to boundary thickness across different training schemes for all the tested networks.
See Section E.1 for more details of the experiment. See Section E.2 for a discussion of why the adversarial direction is preferred when measuring thickness. See Section E.3 for a thorough ablation study of the hyperparameters, such as $\alpha$ and $\beta$, and for results on two other datasets, namely CIFAR-100 and SVHN. See Section E.4 for a visualization of the decision boundaries of normal versus mixup training, which shows that mixup indeed achieves a thicker boundary.
3.2 Adversarial training
Here, we compare the boundary thickness of adversarially trained neural networks in different training settings. More specifically, we study the effect of five regularization and data augmentation schemes: large initial learning rate, $\ell_2$ regularization (weight decay), $\ell_1$ regularization, early stopping, and cutout. We choose a variety of hyperparameters and plot the robust test accuracy versus thickness. We only choose hyperparameters such that the natural training accuracy is larger than 90%. We also plot the robust generalization gap versus thickness. See Figure 3. We again observe a similar correspondence: the robust generalization gap shrinks with increasing thickness.
Experimental details. In our experiments, we train a ResNet-18 on CIFAR-10. In each set of experiments, we only change one parameter. In the standard setting, we follow convention and train with learning rate 0.1, weight decay 5e-4, attack range 8 pixels, 10 iterations for each attack, and 2 pixels for the step size. Then, for each set of experiments, we change one parameter relative to the standard setting. For $\ell_1$ regularization, $\ell_2$ regularization, and cutout, we only use one of them at a time to separate their effects. See Section F.1 for the details of these hyperparameters. Specifically, see Figure 14, which shows that all five regularization and augmentation schemes increase boundary thickness. We train each model for long enough (400 epochs) to let both the accuracy curves and the boundary thickness stabilize, and to filter out the effect of early stopping. In Section F.2, we re-run the whole procedure with early stopping at 120 epochs and learning rate decay at epoch 100. We show that the positive correspondence between robustness and boundary thickness remains the same (see Figure 15). In Section F.3, we provide an ablation study on the hyperparameters used in measuring thickness and again show the same correspondence for the other settings (see Figure 16).
3.3 Boundary thickness versus margin
Here, we compare margin versus boundary thickness at differentiating robustness levels. See Figure 3, where we sort the models from the preceding experiments by robustness, and we plot their thickness measurements using gradually darker colors. We see that while boundary thickness correlates well with robustness, and hence can differentiate different robustness levels, margin cannot.
From (3), we see that computing the margin requires computing the projection $\mathrm{proj}_{ij}(x)$, which is intractable for general nonlinear functions. Thus, we approximate the margin along the direction of an adversarial attack (which is the projection direction for linear classifiers). Another important point here is that we compute the average margin over all samples, in addition to the minimum (worst-case) margin in (3). The minimum margin is almost zero in all cases, due to the existence of certain samples that are extremely close to the boundary. That is, the standard (widely used) definition of margin performs even worse.
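The approximation described above can be sketched as follows. This is our simplified version: rather than running a full attack, we bisect along a supplied direction for the point where the predicted label flips, which coincides with the projection distance in the linear case.

```python
import numpy as np

def directional_margin(f, x, direction, t_max=10.0, n_steps=40):
    """Approximate distance from x to the decision boundary along a given
    direction (e.g. an adversarial-attack direction), by bisecting on the
    prediction flip.  f returns softmax probabilities for an (n, d) batch."""
    label = int(np.argmax(f(x[None, :])))
    d = direction / np.linalg.norm(direction)
    lo, hi = 0.0, t_max
    if int(np.argmax(f((x + hi * d)[None, :]))) == label:
        return t_max                      # boundary not reached within t_max
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        if int(np.argmax(f((x + mid * d)[None, :]))) == label:
            lo = mid                      # still on the original side
        else:
            hi = mid                      # crossed the boundary
    return 0.5 * (lo + hi)
```

Averaging this distance over many samples gives the average-margin variant discussed above.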
4 Applications of Boundary Thickness
Our insights allow us to devise new training schemes, as well as to explain robustness phenomena observed in some contemporary work, when viewed through the connection between thickness and robustness.
4.1 Noisy mixup
Dataset   | Method      | OOD      | Black-box | PGD-20 (8-pixel) | PGD-20 (6-pixel) | PGD-20 (4-pixel)
----------|-------------|----------|-----------|------------------|------------------|-----------------
CIFAR-10  | Mixup       | 78.5±0.4 | 46.3±1.4  | 2.0±0.1          | 3.2±0.1          | 6.3±0.1
CIFAR-10  | Noisy mixup | 83.6±0.3 | 78.0±1.0  | 11.7±3.3         | 16.2±4.2         | 25.7±5.0
CIFAR-100 | Mixup       | 51.3±0.4 | 37.3±1.1  | 0.0±0.0          | 0.0±0.0          | 0.1±0.0
CIFAR-100 | Noisy mixup | 52.5±0.7 | 60.1±0.3  | 1.5±0.2          | 2.6±0.1          | 6.7±0.9
Motivated by the success of mixup [zhang2017mixup] and our insights into boundary thickness, we introduce and evaluate a training scheme called noisy mixup.
Theoretical justification. Before presenting noisy mixup, we strengthen the connection between mixup and boundary thickness by stating that the model which minimizes the mixup loss also achieves optimal boundary thickness in a minimax sense. Specifically, we can prove the following: for a fixed arbitrary integer $k$, the model $f^*$ obtained by mixup training achieves the minimax boundary thickness, i.e., $f^* \in \arg\max_f \min_{(\alpha, \beta)} \Theta(f, \alpha, \beta)$, where the minimum is taken over all admissible pairs of $(\alpha, \beta)$ with $\alpha < \beta$, and the max is taken over all prediction functions $f$ whose outputs form a probability distribution. See Section A for the formal theorem statement and proof.
Ordinary mixup thickens the decision boundary by mixing different training samples. The idea of noisy mixup, on the other hand, is to thicken the decision boundary between clean samples and arbitrary transformations. This improves the robust performance on OOD images, for example on images that have been transformed using a noise filter or a rotation. Interestingly, pure noise turns out to be good enough to represent such arbitrary transformations. So, while ordinary mixup training obtains one mixed sample by linearly combining two data samples $x_i$ and $x_j$, in noisy mixup, one of the two samples, with some probability $p$, is replaced by an image that consists of random noise. The label of the noisy image is "NONE." Specifically, on the CIFAR-10 dataset, we let the "NONE" class be the 11th class. Note that this method is different from common noise augmentation, because we define a new class of pure noise and mix it with ordinary samples.
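A minimal sketch of the batch construction for noisy mixup follows. This is our illustration: the uniform noise distribution, the replacement probability `p_noise`, and the one-hot handling of the extra "NONE" class are implementation guesses, not specifics from the paper.

```python
import numpy as np

def noisy_mixup_batch(x, y_onehot, p_noise=0.5, alpha=1.0, rng=None):
    """Noisy-mixup sketch: with probability p_noise, a sample's mixup partner
    is replaced by a pure-noise image labeled with an extra "NONE" class.

    y_onehot: (n, c) one-hot labels; output labels have c+1 classes, with the
    last class reserved for noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, c = y_onehot.shape
    perm = rng.permutation(n)
    x2 = x[perm].copy()
    y1 = np.hstack([y_onehot, np.zeros((n, 1))])        # originals, c+1 classes
    y2 = np.hstack([y_onehot[perm], np.zeros((n, 1))])  # partner labels
    noise_mask = rng.random(n) < p_noise
    x2[noise_mask] = rng.uniform(x.min(), x.max(), size=x2[noise_mask].shape)
    y2[noise_mask] = np.eye(c + 1)[c]                   # "NONE" one-hot
    lam = rng.beta(alpha, alpha, size=(n, 1))
    return lam * x + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```

The rest of the training loop is unchanged from ordinary mixup, with the classifier head widened by one output for the "NONE" class.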
The comparison between noisy mixup and ordinary mixup training is shown in Table 1. For OOD accuracy, we follow [hendrycks2019augmix] and use both CIFAR-10-C and CIFAR-100-C. For the PGD attack, we use an attack with 20 steps and with step size equal to 1/10 of the attack range. We report results for three different attack ranges, namely 8-pixel, 6-pixel, and 4-pixel. For the black-box attack, we use a ResNet-110 to generate the transfer attack. The other parameters are the same as for the 8-pixel white-box attack. For each method and dataset, we run the training procedure with three learning rates (0.01, 0.03, 0.1), each three times, and we report the mean and standard deviation of the best-performing learning rate. See Section G for more details of the experiment.
From Table 1, we see that noisy mixup significantly improves the robust accuracy of different types of corruptions. In Figure 4, we show that noisy mixup indeed achieves a thicker boundary than ordinary mixup. We use pure noise to represent OOD, but this simple choice already shows a significant improvement in both OOD and adversarial robustness. This opens the door to devising new mechanisms with the goal of increasing boundary thickness to increase robustness against other forms of image imperfections and/or attacks.
4.2 Explaining robustness phenomena using boundary thickness
Robustness to image saturation. We study the connection between boundary thickness and the saturation-based perturbation of [zhang2019interpreting]. In [zhang2019interpreting], the authors show that adversarial training can bias the neural network towards "shape-oriented" features and reduce the reliance on "texture-based" features. One result in [zhang2019interpreting] shows that adversarial training outperforms normal training when the saturation of the images is high. In Figure 5, we show that the boundary thickness measured on saturated images is indeed higher for adversarial training than for normal training. (Footnote 2: We use the online implementation at https://github.com/PKUAI26/AT-CNN.)
A thick boundary reduces non-robust features. We illustrate the connection to non-robust features, proposed by [ilyas2019adversarial] to explain the existence of adversarial examples. The authors show, perhaps surprisingly, that a neural network trained on data that is completely mislabeled through adversarial attacks can achieve nontrivial generalization accuracy on the clean test data (see Section H for the experimental protocols of [ilyas2019adversarial] which we use). They attribute this behavior to the existence of non-robust features, which are essential for generalization but at the same time are responsible for adversarial vulnerability.
We show that the generalization accuracy defined in this sense decreases if the classifier used to generate adversarial examples has a thicker decision boundary. In other words, a thicker boundary removes more non-robust features. We consider four settings on CIFAR-10: (1) training without weight decay; (2) training with the standard weight decay 5e-4; (3) training with the standard weight decay but with a small learning rate; and (4) training with mixup. See Figure 5 for a summary of the results. Looking at these results together, we see that an increase in boundary thickness through different training schemes reduces the generalization accuracy, as defined above, and hence the amount of non-robust features retained. Note that the natural test accuracy of the four source networks cannot explain the difference in Figure 5; these accuracies are 0.943 ("normal"), 0.918 ("no decay"), 0.899 ("small lr"), and 0.938 ("mixup"). For instance, training with no weight decay has the highest generalization accuracy defined in the sense above, but its natural accuracy is only 0.918.
5 Conclusions
We introduce boundary thickness, a more robust notion of the size of the decision boundary of a machine learning model, and we provide a range of theoretical and empirical results illustrating its utility. This includes showing that a thicker decision boundary reduces overfitting in adversarial training, and that it can improve both adversarial robustness and OOD robustness. Thickening the boundary can also reduce boundary tilting and the reliance on "non-robust features." We apply the idea of thick-boundary optimization to propose noisy mixup, and we empirically show that using noisy mixup improves robustness. We also show that boundary thickness reduces to margin in a special case, but in general it can be more useful than margin. Finally, we show that the concept of boundary thickness is theoretically justified, by proving that boundary thickness reduces the worst-case boundary tilting and that mixup training achieves the minimax thickness. Having established a strong connection between boundary thickness and robustness, we expect that further studies can be conducted with thickness and decision boundaries as their focus.
Acknowledgments
We would like to thank Zhewei Yao, Tianjun Zhang and Dan Hendrycks for their valuable feedback. MWM would like to acknowledge the UC Berkeley CLTC, ARO, IARPA, NSF, and ONR for providing partial support of this work. Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be inferred.
References
Appendix
Appendix A Mixup Increases Thickness
In this section, we show that mixup, as well as the noisy mixup scheme studied in Section 4.1, increases boundary thickness.
Recall that $x_r$ and $x_s$ in (1) are not necessarily from the training data. For example, $x_r$ and/or $x_s$ can be the noisy samples used in noisy mixup (Section 4.1). We make the analysis more general here because in different extensions of mixup [zhang2017mixup, lamb2019interpolated, hendrycks2019augmix], the mixed samples can either come from the training set, from adversarial examples constructed from the training set, or from carefully augmented samples using various forms of image transforms.
We consider binary classification and study the unnormalized empirical version of (1), defined as follows:

(7) $\widetilde{\Theta}(f, \alpha, \beta) = \sum_{(x_r, x_s)} \|x_r - x_s\| \int_0^1 \mathbb{1}\{\alpha < g_{ij}(x(t)) < \beta\}\, dt,$

where the expectation in (1) is replaced by its empirical counterpart, a sum over the pairs $(x_r, x_s)$. We now show that the function $f^*$ which achieves the minimum mixup loss is also the one that achieves minimax thickness for binary classification.
Proposition A.1 (Mixup Increases Boundary Thickness).
For binary classification, suppose there exists a function $f^*$ that achieves exactly zero mixup loss, i.e., $f^*(x(t))$ matches the mixed label $t\, y_r + (1 - t)\, y_s$ on all possible pairs of points $(x_r, x_s)$, for all $t \in [0, 1]$. Then, for an arbitrary fixed integer $k$, $f^*$ is also a solution to the following minimax problem:

(8) $f^* \in \arg\max_{f} \min_{(\alpha, \beta)} \widetilde{\Theta}(f, \alpha, \beta),$

where the boundary thickness $\widetilde{\Theta}$ is defined in Eqn. (7), the maximization is taken over all the 2D functions $f = (f_0, f_1)$ such that $f_0(x) + f_1(x) = 1$ for all $x$, and the minimization is taken over all admissible pairs of $(\alpha, \beta)$ such that $\alpha < \beta$.
Proof.
See Section B.4 for the proof. ∎
Remark 2 (Why Mixup is Preferred among Different Thick-Boundary Solutions).
Here, we only prove that mixup provides one solution, instead of the only solution. For example, between two samples $x_r$ and $x_s$ that have different labels, a piecewise-linear 2D mapping that oscillates between the two labels more than once can achieve the same thickness as that of a linear mapping. However, a function that exhibits unnecessary oscillations becomes less robust and more sensitive to small input perturbations. Thus, the linear mapping achieved by mixup is preferred. According to [zhang2017mixup], mixup can also help reduce unnecessary oscillations.
Remark 3 (Zero Loss in Proposition A.1).
Note that the function $f^*$ in the proposition is the one that perfectly fits the mixup-augmented dataset. In other words, $f^*$ in the theorem above needs to have "infinite capacity," in some sense, to match perfectly the response on line segments that connect pairs of points $(x_r, x_s)$. If such an $f^*$ does not exist, it is unclear if an approximate solution achieves minimax thickness, and it is also unclear if minimizing the cross-entropy-based mixup loss is exactly equivalent to solving the minimax boundary-thickness problem for the same loss value. Nonetheless, our experiments show that mixup consistently achieves thicker decision boundaries than ordinary training (see Figure 2).
Appendix B Proofs
B.1 Proof of Proposition 2.1
Choose so that . The thickness of defined in (1) becomes
(9) 
Define a substitute variable as:
(10) 
Then,
(11) 
Further,
(12) 
Thus,
(13) 
where holds because , is from substituting and , and is from switching the upper and lower limit of the integral to get rid of the negative sign. Recall that is a monotonically increasing function in . Thus,
(14) 
Further, if is contained in , we have , and similarly, , and thus
(15) 
B.2 Proof of Proposition 2.2
The conclusion holds if equals the distance from to its projection for and . Note that when , , because . From the definition of projection, i.e., , we have that for all points on the segment from to , is the only point with ; otherwise, would not be the projection. Therefore, all points on the segment satisfy . Since is the output after the softmax layer, . Thus, the indicator function on the left-hand side of (4) always takes the value 1, and the integration reduces to calculating the distance from to .
B.3 Proof of Proposition 2.3
We rewrite the definition of as
(16) 
where
(17) 
To prove is a nonincreasing function in , we consider arbitrary so that , and we prove .
First, consider . Denote by the linear classifier that achieves the minimum value in the RHS of (16) when . By definition, . Now, if we increase the norm of to obtain a new classifier , it still satisfies the constraint because
(18) 
Thus, satisfies the constraints in (16) for , and, being a linear scaling of , it has the same cosine similarity score in (17), which means the worst-case tilting is smaller than or equal to the tilting of .
B.4 Proof of Proposition A.1
We can rewrite (7) using
(19) 
where denotes the thickness measured on a single segment, i.e.,
(20) 
where recall that and .
Since the proposition is stated as a sum over all pairs of data, we can focus on an arbitrary pair of data points such that .
Consider any 2D decision function such that (i.e., is a probability mass function). In the following, we consider the restriction of on a segment , which we denote as . Then, the proof relies on the following lemma, which states that the linear interpolation scheme in mixup training does maximize the boundary thickness on the segment.
Lemma B.1.
For any arbitrary fixed integer , the linear function
(21) 
defined for a given segment (, ) optimizes in (20) in the following minimax sense,
(22) 
where the maximization is over all the 2D functions such that the domain is restricted to the segment (, ) and such that and for all on the segment, and the minimization is taken over all pairs of such that .
Proof.
See Section B.5 for the proof.∎
B.5 Proof of Lemma B.1
In this proof, we simplify the notation and use to denote which represents restricted to the segment . This simple notation does not cause any confusion because we restrict to the segment in this proof.
We can simplify the proof by viewing the optimization over functions on the fixed segment as optimizing over the functions on , where .
Thus, we only need to find the function , when viewed as a onedimensional function , that solves the minimax problem (22) for the thickness defined as:
(23) 
where is the inverse function of . Note that . To prove the result, we only need to prove that the linear function , which is obtained from for defined in (21), solves the minimax problem
(24) 
where the maximization is taken over all , and the minimization is taken over all pairs of such that , for a fixed integer .
Now we prove a stronger statement.
Stronger statement:
(25) 
when the minimization is taken over all such that , and for any measurable function .
If we can prove this statement, then the lemma follows, since the linear function always achieves this bound.
We prove the stronger statement above by contradiction. Suppose that the statement is not true, i.e., for any and such that , we always have
(26) 
Then, the preimage of satisfies
(27) 
where the last inequality holds because of the inequality (26). This is clearly a contradiction, which means that the stronger statement is true.
Appendix C A Chessboard Toy Example
[Figure 6: (first row) 2D chessboard data and a 2D classifier; (second row) 3D robust and nonrobust classifiers, each shown in front and top-down views; (third row) interpolation between the robust and nonrobust classifiers.]
First row: The 2D chessboard dataset with two classes and a 2D classifier that learns the correct pattern.
Second row: 3D visualization of the decision boundaries of two different classifiers. (Left) A classifier that uses the robust and directions to classify, which preserves the complex chessboard pattern (see the top-down view, which contains a chessboard pattern). (Right) A classifier that uses the nonrobust direction to classify. When the separable space along the nonrobust direction is large enough, the thin boundary squeezes in and generates a simple but nonrobust function.
Third row: Visualization of the decision boundary as we interpolate between the robust and nonrobust classifiers. There is a sharp transition from the fourth to the fifth figure.
In this section, we use a chessboard example in low dimensions to show that nonrobustness arises from thin boundaries. Being a toy setting, it is limited in generality, but it visually demonstrates our main message.
In Figure 6, the two subfigures shown on the first row represent the chessboard dataset and a robust 2D function that correctly classifies the data. Then, we project the 2D chessboard data to a 100dimensional space by padding noise to each 2D point. In this case, the neural network can still learn the chessboard pattern and preserve the robust decision boundary (see the 3D topdown view on the left part of the second row which contains the chessboard pattern).
However, if we randomly perturb each square of samples in the chessboard in the 3rd dimension (the axis) to change the space between these squares, such that the boundary has enough room on the axis to partition the two opposite classes, the boundary changes to a nonrobust one instead (see the right part of the second row of Figure 6). The shift value on the axis is , much smaller than the 0.6 distance between two adjacent squares. The data are not linearly separable on the axis because each square on the chessboard is randomly shifted up or down independently of the other squares.
A more interesting result can be obtained by varying the shift on the axis from 0.01 to 0.08; see the third row of Figure 6. The network undergoes a sharp transition from using robust decision boundaries to using nonrobust ones. This is consistent with the main message of Figure 1, i.e., that neural networks tend to generate thin and nonrobust boundaries to fit into the narrow space between opposite classes along the nonrobust direction, while a thick boundary mitigates this effect. On the third row, from left to right, the expanse of the data on the axis increases, allowing the network to use only that axis to classify.
Details of 3D figure generation: For the visualization in Figure 6, each 3D figure is generated by plotting 17 consecutive level sets of neural network prediction values (after the softmax layer) from 0 to 1. The prediction on each level set is the same, and each level set is represented by a colored continuous surface in the 3D figure. The yellow end of the color bar represents a function value of 1, and the purple end represents 0. The visualization is displayed in a 3D orthogonal subspace of the whole input space. The three axes are the first three vectors in the natural basis. They represent the and directions that contain the chessboard pattern, and the axis that contains the direction of shift values.
Details of the chessboard toy example: The chessboard data contains 2 classes of 2D points arranged in squares. Each square contains 100 randomly generated 2D points uniformly distributed in the square. The length of each square is 0.4, and the separation between two adjacent squares is 0.6. The shift direction (up or down) and value on the axis are the same for all 2D points in a single square, and the shift value is much smaller than 1 (which is the distance between the centers of two squares). See the third row of Figure 6 for different shift values ranging from 0.01 to 0.08. The shift value is, however, independent across different squares, i.e., these squares cannot be easily separated by a linear classifier using information on the axis only. The classifier is a neural network with 9 fully-connected layers and a residual link on each layer. The training has 100 epochs, an initial learning rate of 0.003, batch size 128, weight decay 5e-4, and momentum 0.9.
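The data-generation steps above can be sketched as follows. This is a minimal numpy sketch under the stated geometry (0.4 squares, 0.6 gaps, a per-square z-shift); the function name, the 4x4 grid size, and the 3D (rather than 100D noise-padded) embedding are our simplifying assumptions.

```python
import numpy as np

def make_chessboard(n_side=4, pts_per_square=100, square=0.4, gap=0.6,
                    shift=0.04, rng=None):
    """Generate the 2D chessboard data, lifted to 3D by a per-square shift.

    Each square holds pts_per_square uniform 2D points; its class is the
    usual chessboard parity. All points in a square share one z-shift of
    +shift or -shift, drawn independently per square, so the z axis alone
    does not linearly separate the two classes.
    """
    rng = np.random.default_rng(rng)
    pitch = square + gap  # distance between centers of adjacent squares (1.0)
    xs, ys = [], []
    for i in range(n_side):
        for j in range(n_side):
            xy = rng.uniform(0.0, square, size=(pts_per_square, 2))
            xy[:, 0] += i * pitch
            xy[:, 1] += j * pitch
            z = np.full((pts_per_square, 1), rng.choice([-shift, shift]))
            xs.append(np.hstack([xy, z]))
            ys.append(np.full(pts_per_square, (i + j) % 2))
    return np.vstack(xs), np.concatenate(ys)
```

Varying `shift` from 0.01 to 0.08 reproduces the sweep shown on the third row of Figure 6.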
Appendix D A Thick Boundary Mitigates Boundary Tilting
In this section, we generalize the observation of Proposition 2.3 to nonlinear classifiers. Recall that in Proposition 2.3, we use Cosine Similarity between the classifier and the maxmargin classifier to measure boundary tilting. To measure boundary tilting in the nonlinear case, we use of random sample pairs (, ) from the training set to replace the normal direction of the maxmargin solution , and use to replace the normal direction of a linear classifier , where are the predicted labels of and , respectively, and is a random point on the line segment . Then, the cosine similarity generalizes to
(28) 
In Figure 7, we show the intuition underlying the use of (28). The smaller the cosine similarity is, the more severe the impact of boundary tilting becomes.
We also measure boundary tilting in various settings of adversarial training, and we choose the same set of hyperparameters that are used to generate Figure 3. See the results in Figure 7. When measuring cosine similarity, we average the results over 6400 training sample pairs. The results in Figure 7 show that a thicker boundary mitigates boundary tilting by increasing the cosine similarity.
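The generalized cosine similarity in (28) can be computed as sketched below. This is our reading of the definition, not the paper's code: `grad_fn` is a hypothetical helper returning the gradient of the margin g_i - g_j (in practice obtained by backpropagation), and the segment direction plays the role of the max-margin normal.

```python
import numpy as np

def tilting_cosine(grad_fn, x_r, x_s, t=0.5):
    """Cosine similarity between the segment direction x_s - x_r and the
    gradient of the margin g_i - g_j at a point on the segment (cf. (28)).

    grad_fn(x) returns the gradient of g_i(x) - g_j(x), where i, j are the
    predicted labels of x_r and x_s. Smaller values indicate more severe
    boundary tilting.
    """
    x_t = x_r + t * (x_s - x_r)       # a point on the segment
    d, g = x_s - x_r, grad_fn(x_t)
    return float(d @ g / (np.linalg.norm(d) * np.linalg.norm(g) + 1e-12))
```

In the experiments, this quantity is averaged over many random training sample pairs (6400 in Figure 7).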
Appendix E Additional Experiments on Nonadversarial Training
In this section, we provide more details and additional experiments extending the results of Section 3.1 on nonadversarially trained neural networks. We demonstrate that a thick boundary improves OOD robustness when the thickness is measured using different choices of hyperparameters. We also show that the same conclusion holds on two other datasets, namely CIFAR100 and SVHN, in addition to CIFAR10 used in the main paper.
E.1 Details of measuring boundary thickness
Boundary thickness is calculated by integrating along the segments that connect a sample with its corresponding adversarial example. We find the adversarial example by using an PGD attack of size 1.0, step size 0.2, and 20 attack steps. We measure both thickness and margin on the normalized images in CIFAR10, which introduces a multiplicative factor of approximately 5 when using the standard deviations , respectively, for the RGB channels, compared to measuring thickness on unnormalized images.
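The PGD attack used here (attack size 1.0, step size 0.2, 20 steps) can be sketched as follows. This is a generic l2 PGD sketch: whether the normalized-gradient step below matches the exact implementation is an assumption, and `grad_loss` is a hypothetical callable returning the gradient of the attack loss at a point.

```python
import numpy as np

def l2_pgd(x, grad_loss, eps=1.0, step=0.2, n_steps=20):
    """l2 PGD sketch: ascend the attack loss along the normalized gradient
    and project back onto the l2 ball of radius eps around the clean input.
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        g = grad_loss(x_adv)
        g_norm = np.linalg.norm(g) + 1e-12
        x_adv = x_adv + step * g / g_norm     # normalized gradient step
        delta = x_adv - x
        d_norm = np.linalg.norm(delta)
        if d_norm > eps:                      # project onto the eps-ball
            x_adv = x + delta * (eps / d_norm)
    return x_adv
```

The endpoint `x_adv` then serves as the far end of the segment on which thickness is integrated.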
To compute the integral in (1), we take the segment from to and evaluate the neural network response at 128 evenly spaced points on it. Then, we compute the cumulative length of the parts of this segment on which the prediction value lies between , which measures the distance between the two level sets and on this segment (see equation (1)). Finally, we report the average thickness over 320 segments, i.e., 320 random samples paired with their adversarial examples.
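The numerical procedure above can be sketched as follows. This is a numpy sketch with names of our choosing; `predict_margin` is a hypothetical helper returning the difference g_i - g_j of the two relevant post-softmax outputs for a batch of points.

```python
import numpy as np

def segment_thickness(predict_margin, x_s, x_t, alpha=0.0, beta=0.75,
                      n_points=128):
    """Estimate boundary thickness on the segment from x_s to x_t.

    The thickness is the length of the part of the segment on which the
    margin g_i - g_j falls strictly between alpha and beta (cf. Eqn. (1)),
    approximated on n_points evenly spaced samples.
    """
    ts = np.linspace(0.0, 1.0, n_points)
    points = x_s[None, :] + ts[:, None] * (x_t - x_s)[None, :]
    margins = predict_margin(points)
    inside = (margins > alpha) & (margins < beta)
    seg_len = np.linalg.norm(x_t - x_s)
    # fraction of sampled points inside the level-set band, times the length
    return seg_len * inside.mean()
```

Averaging this quantity over 320 (sample, adversarial example) segments gives the reported thickness.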
E.2 Comparing different measuring methods: tradeoff between complexity and accuracy
In this part, we discuss the choice of distribution when selecting segments to measure thickness. Recall that in the main paper, we choose as an adversarial example of . Another way, which is computationally cheaper, is to measure thickness on the segment directly between pairs of samples in the training dataset, i.e., sample randomly from the training data, and sample as a random data point with a different label.
Although computationally cheaper, this way of measuring boundary thickness is more prone to the “boundary tilting” effect, because the segment between a pair of samples is not guaranteed to be orthogonal to the decision boundary. Thus, boundary tilting can inflate the measured value of boundary thickness. This effect only arises when we measure thickness on pairs of samples rather than along the adversarial direction, which we have shown mitigates boundary tilting when the boundary is thick (see Section D).
In Figure 8, we show how this method affects the measurement of thickness. The thickness is measured for the same set of models and training procedures as those shown in Figure 2, but on random segments that connect pairs of samples. We use and to match Figure 2. In Figure 8, although the trend remains the same (i.e., mixup > normal training > training without weight decay), all the measured values of boundary thickness become much larger than those in Figure 2, indicating boundary tilting in all the measured networks.
Remark 4 (An Oscillating 1D Example Motivates the Adversarial Direction).
Obviously, the distribution in Definition 1 is vital in dictating robustness. As in Remark 2, one can consider a 2D piecewise linear mapping on a segment (, ) that oscillates between the responses and . If one measures the thickness on this particular segment, the measured value stays the same as the number of oscillations in the piecewise linear mapping increases, but the robustness decreases with more oscillations. Thus, the example motivates measuring along the direction of an adversarial attack: an adversarial attack tends to find the closest “peak” or “valley” and can thus faithfully recover the correct value of boundary thickness, unaffected by the oscillation.
E.3 Ablation study
In this section, we provide an extensive ablation study on the different choices of hyperparameters used in the experiments. We show that our main conclusion about the positive correlation between robustness and thickness remains the same for a wide range of hyperparameters, obviating the need to fine-tune them. We study the adversarial attack direction used to measure thickness and the parameters and , and we reproduce the results on two other datasets, namely CIFAR100 and SVHN, in addition to CIFAR10.
E.3.1 Different choices of adversarial attack in measuring boundary thickness
To measure boundary thickness along the adversarial direction, we have to specify how the adversarial attack is implemented. To generate Figure 2, we used an attack with attack range 1.0, step size 0.2, and 20 attack steps. We show that the results, and more importantly our conclusions, do not change when these parameters are perturbed slightly. See Figure 9 and compare it with the corresponding results in Figure 2. We see that changing the size of the adversarial attack does not alter the trend. However, the measured thickness value does shrink if the attack size becomes too small, which is expected.
E.3.2 Different choices of and in measuring boundary thickness
In this subsection, we present an ablation study on the choice of the hyperparameters and in (1). We show that the conclusions in Section 3.1 remain unchanged for a wide range of choices of and . See Figure 10. From the results, we can see that the trend remains the same, i.e., mixup > normal training > training without weight decay. However, when and become close to each other, the magnitude of boundary thickness also decreases, which is expected.
Remark 5 (Choosing the Best Hyperparameters).
From Proposition 2.2, we know that the margin corresponds to particular values of the hyperparameters and . Allowing these hyperparameters to take other values gives us the flexibility to capture robustness better than the margin does. The best choices may differ across neural networks, and ideally one could run small validation-based studies to tune them, but our ablation study in this section shows that, for a large regime of values, an exact search for the best choices is not required. We noticed, for example, that setting and works well in practice, and much better than the standard definition of margin that has been equated with robustness in past studies.
Remark 6 (Choosing Asymmetric and ).
We use asymmetric parameters and mainly because, by symmetry, the measured thickness when is, in expectation, half of that when .
In Section E.2, we discussed an alternative to adversarial attacks for selecting segments, namely measuring boundary thickness on sample pairs. For completeness, we also perform an ablation study on the choice of the hyperparameters and for this case. The results are shown in Figure 11, and this study reinforces the same conclusion: the particular choice of and matters less than the fact that they are not set to and , respectively.
E.3.3 Additional datasets
We repeat the experiments in Section 3.1 on two more datasets, namely CIFAR100 and SVHN. See Figure 12. In this figure, we use the same experimental setting as in Section 3.1, except that we train with a different initial learning rate of 0.01 on SVHN, following convention. We reach the same conclusion as in Section 3.1, i.e., that mixup increases boundary thickness, while training without weight decay reduces it.
E.4 Visualizing neural network boundaries
In this section, we show a qualitative comparison between a neural network trained using mixup and another one trained in a standard way without mixup. See Figure 13. In the left figure, we can see that different level sets are spaced apart, while the level sets in the right figure are hardly distinguishable. Thus, the mixup model has a larger boundary thickness than the naturally trained model for this setting.
For the visualization shown in Figure 13, we use 17 different colors to represent the 17 level sets. The origin represents a randomly picked CIFAR10 image. The axis represents a direction of adversarial perturbation found using the projected gradient descent method [madry2017towards]. The axis and the axis represent two random directions that are orthogonal to the perturbation direction. Each CIFAR10 input image has been normalized using standard routines during training, e.g., using the standard deviations , respectively, for the RGB channels, so the scale of the figure may not represent the true scale in the original space of CIFAR10 input images.
Appendix F Additional Experiments on Adversarial Training
In this section, we provide additional details and analyses for the experiments in Section 3.2 on adversarially trained neural networks. We demonstrate that a thick boundary improves adversarial robustness for a wide range of hyperparameters, including those used during adversarial training and those used to measure boundary thickness.
F.1 Details of experiments in Section 3.2
We use ResNet18 on CIFAR10 for all the experiments in Section 3.2. We first choose a standard setting that trains with learning rate 0.1, no learning rate decay, weight decay 5e-4, attack range pixels, 10 iterations per attack, and a step size of 2 pixels. Then, for each set of experiments, we change one parameter relative to the standard setting. We tune the parameters to achieve a natural training accuracy larger than 90%. For the experiment on early stopping, we use a learning rate of 0.01 instead of 0.1 to achieve 90% training accuracy. We train the neural network for 400 epochs without learning rate decay to filter out the effect of early stopping. The results with learning rate decay and early stopping are reported in Section F.2 and show the same trend.
When measuring boundary thickness, we select segments on the adversarial direction, and we find the adversarial direction by using an PGD attack of size 2.0, step size 0.2, and number of attack steps 20.
Table 2: Hyperparameters used in the adversarial training experiments.

Changed parameter | Learning rate | Weight decay | L1   | Cutout | Early stopping
Learning rate     | 3e-3          | 5e-4         | 0    | 0      | None
                  | 1e-2          | 5e-4         | 0    | 0      | None
                  | 3e-2          | 5e-4         | 0    | 0      | None
Weight decay      | 1e-1          | 0            | 0    | 0      | None
                  | 1e-1          | 1e-4         | 0    | 0      | None
L1                | 1e-1          | 0            | 5e-7 | 0      | None
                  | 1e-1          | 0            | 2e-6 | 0      | None
                  | 1e-1          | 0            | 5e-6 | 0      | None
Cutout            | 1e-1          | 0            | 0    | 4      | None
                  | 1e-1          | 0            | 0    | 8      | None
                  | 1e-1          | 0            | 0    | 12     | None
                  | 1e-1          | 0            | 0    | 16     | None
Early stopping    | 1e-2          | 5e-4         | 0    | 0      | 50
                  | 1e-2          | 5e-4         | 0    | 0      | 100
                  | 1e-2          | 5e-4         | 0    | 0      | 200
                  | 1e-2          | 5e-4         | 0    | 0      | 400
Note that boundary thickness indeed increases with heavier regularization or data augmentation; see Figure 14 for the thickness of the models trained with the parameters reported in Table 2.
F.2 Adversarial training with learning rate decay
In this section, we reimplement the experiments shown in Figure 3, but with learning rate decay and early stopping, which [rice2020overfitting] report to improve the robust accuracy of adversarial training. We still use ResNet18 on CIFAR10. However, instead of training for 400 epochs, we train for only 120 epochs, with a learning rate decay of 0.1 at epoch 100. The adversarial training still uses an 8-pixel PGD attack with 10 steps and a step size of 2 pixels.
The set of training hyperparameters that we use are shown in Table 3. Similar to Figure 3, we tune hyperparameters such that the training accuracy on natural data reaches 90%. The results are reported in Figure 15. We do not separately test early stopping because all experiments follow the same early stopping procedure.
Table 3: Hyperparameters used in the adversarial training experiments with learning rate decay.

Changed parameter | Learning rate | Weight decay | L1   | Cutout | Early stopping
Learning rate     | 1e-2          | 5e-4         | 0    | 0      | 120
                  | 3e-2          | 5e-4         | 0    | 0      | 120
                  | 1e-1          | 5e-4         | 0    | 0      | 120
Weight decay      | 1e-1          | 0            | 0    | 0      | 120
                  | 1e-1          | 1e-4         | 0    | 0      | 120
                  | 1e-1          | 5e-4         | 0    | 0      | 120
L1                | 1e-1          | 0            | 5e-7 | 0      | 120
                  | 1e-1          | 0            | 2e-6 | 0      | 120
                  | 1e-1          | 0            | 5e-6 | 0      | 120
Cutout            | 1e-1          | 0            | 0    | 4      | 120
                  | 1e-1          | 0            | 0    | 8      | 120
                  | 1e-1          | 0            | 0    | 12     | 120
F.3 Different choices of the hyperparameters in measuring adversarially trained networks
Here, we study the connection between boundary thickness and robustness under different choices of the hyperparameters used to measure thickness. Specifically, we use three different sets of hyperparameters to reimplement the experiments behind Figure 3. These parameters are provided in Table 4. The first row gives the base parameters used in Figure 3. The second row changes . The third and fourth rows change the attack size and step size. These changes in the hyperparameters preserve our conclusion regarding the relationship between thickness and robustness. See Figure 16.
Appendix G More Details of Noisymixup
In this section, we provide more details for the experiments on noisy mixup. In the experiments, we use ResNet18 on both CIFAR10 and CIFAR100. For OOD robustness, we use the datasets from [hendrycks2019using] and evaluate on 15 different types of corruptions in CIFAR10-C and CIFAR100-C, including noise, blur, weather, and digital corruptions.
The probability of replacing a clean image with a noise image is 0.5. The model is trained for 200 epochs, with the learning rate decayed by a factor of 10 at epochs 100 and 150. For both mixup and noisy mixup, we train using three learning rates and report the best results. The weight decay is set to 1e-4, following the recommendation in [zhang2017mixup]. For noisy mixup, each pixel in a noise image is sampled independently from a standard Gaussian distribution with mean 0 and variance 1, and processed using the same standard training image transforms applied to the ordinary image inputs.
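The batch construction described above can be sketched as follows. This is a minimal numpy sketch, not the paper's implementation: the label handling for noise images is our assumption (we keep the soft label of the replaced image), and the Beta mixing coefficient follows standard mixup [zhang2017mixup].

```python
import numpy as np

def noisy_mixup_batch(x, y_soft, noise_prob=0.5, mixup_alpha=1.0, rng=None):
    """One noisy-mixup batch (sketch): with probability noise_prob, replace
    an image by unit Gaussian noise, then apply standard mixup to the batch.

    x has shape [n, ...]; y_soft holds one-hot (or soft) labels of shape
    [n, n_classes]. Label handling for noise images is an assumption here.
    """
    rng = np.random.default_rng(rng)
    x = x.copy()
    replace = rng.random(len(x)) < noise_prob
    x[replace] = rng.standard_normal(x[replace].shape)  # mean 0, variance 1
    lam = rng.beta(mixup_alpha, mixup_alpha)            # mixup coefficient
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_soft + (1.0 - lam) * y_soft[perm]
    return x_mix, y_mix
```

In training, the noise images would additionally pass through the same standard image transforms as clean inputs, which this sketch omits.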
When testing the robustness of the ordinary mixup and noisy mixup models, we used both black-box and white-box attacks as well as OOD samples. For the white-box attack, we use an PGD attack with 20 steps. The attack size takes the values 8 pixels (0.031), 6 pixels (0.024), and 4 pixels (0.0157). The step size is 1/10 of the attack size. For black-box attacks, we use ResNet110 to generate the transfer attack. The other parameters are the same as those of the 8-pixel white-box attack.
Appendix H More Details of the Experiment on Nonrobust Features in Section 4.2
In this part, we provide more details of the experiment on nonrobust features. First, we discuss the background on the discovery in [ilyas2019adversarial]. Using a dataset , and a class neural network classifier , a new dataset is generated as follows:

(attack-and-relabel) Generate an adversarial example from each training sample such that the prediction of the neural network is not equal to . Then, label the new sample as . The target class can either be a fixed value for each class , or a random class different from . In this paper, we use random target classes.

(test-on-clean-data) Train a new classifier on the new dataset , evaluate it on the original clean test set , and obtain a test accuracy ACC.
The observation in [ilyas2019adversarial] is that by training on the completely mislabeled dataset , the new classifier still achieves a high ACC on . The explanation in [ilyas2019adversarial] is that each adversarial example contains “nonrobust features” of the target label , which are useful for generalization, and ACC measures the reliance on these nonrobust features. The test accuracy ACC obtained in this way is the generalization accuracy reported in Figure 5.
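The attack-and-relabel step above can be sketched as follows. This is a structural sketch with hypothetical names: `targeted_attack(x, t)` stands in for a targeted adversarial attack against the source network, which we do not implement here.

```python
import numpy as np

def build_nonrobust_dataset(images, labels, n_classes, targeted_attack,
                            rng=None):
    """Attack-and-relabel (cf. [ilyas2019adversarial]): perturb each sample
    toward a random target class t != y and label the result t.

    targeted_attack(x, t) is a hypothetical callable returning an adversarial
    example classified as t by the source network. A new classifier trained
    on the returned dataset is then evaluated on the original clean test set.
    """
    rng = np.random.default_rng(rng)
    new_x, new_y = [], []
    for x, y in zip(images, labels):
        t = int((y + rng.integers(1, n_classes)) % n_classes)  # random t != y
        new_x.append(targeted_attack(x, t))
        new_y.append(t)
    return np.stack(new_x), np.array(new_y)
```

The offset construction `(y + r) % n_classes` with `r` in `[1, n_classes)` guarantees the target class always differs from the true label.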
In Figure 5, the axis indicates the epoch of training of the source model. Each error bar represents the variance of nonrobust feature scores measured over 8 repeated runs. Thus, each point in this figure represents 8 runs of the same nonrobust feature experiment for a given source network, and each curve in Figure 5 contains multiple experiments using different source networks, instead of a single training-testing round. It is interesting that source networks trained for different numbers of epochs achieve different nonrobust feature scores, which suggests that when the decision boundary changes between epochs, the properties of the nonrobust features also change.
In the experiments that generate Figure 5, we use a ResNet56 model as the source network and a ResNet20 model as the target network. These two ResNet models are standard for classification tasks on CIFAR10. The source network is trained for 500 epochs, with an initial learning rate of 0.1, weight decay 5e-4, and learning rate decay of 0.1 at epochs 150, 300, and 450, respectively. When training with a small learning rate, the initial learning rate is set to 0.003. When training with mixup, the weight decay is 1e-4, following the initial setup in [zhang2017mixup]. The adversarial attack uses PGD with 100 iterations, an attack range of 2.0, and an attack step size of 0.4.
Remark 7 (Why Thick Boundaries Reduce Nonrobust Features).
Our explanation of why a thick boundary reduces the nonrobust feature score is that a thicker boundary is potentially more “complex.” (Note that, although various complexity measures are associated with generalization in classical theory, and the inductive bias towards simplicity may explain the generalization of neural networks [de2019random, valle2018deep], it has been pointed out that simplicity may be at odds with robustness [nakkiran2019adversarial].) Then, in the attack-and-relabel step, the adversarial perturbations are generated in a relatively “random” way, independent of the true data distribution, making the “nonrobust features” preserved by adversarial examples disappear. Studying the inner mechanism of the generation of nonrobust features and its connection to boundary thickness is meaningful future work.