| authors | citekey | alias | publish_date | last_import |
|---|---|---|---|---|
|  | tjengEvaluatingRobustnessNeural2019 | tjengEvaluatingRobustnessNeural2019 | 2019-02-18 | 2025-07-30 |
# Evaluating Robustness of Neural Networks with Mixed Integer Programming
## Indexing Information

Published: 2019-02
DOI: 10.48550/arXiv.1711.07356
Tags: #Computer-Science---Cryptography-and-Security, #Computer-Science---Machine-Learning, #Computer-Science---Computer-Vision-and-Pattern-Recognition
#InSecondPass
> [!Abstract]
> Neural networks have demonstrated considerable success on a wide variety of real-world problems. However, networks trained only to optimize for training accuracy can often be fooled by adversarial examples - slightly perturbed inputs that are misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art. We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup allows us to verify properties on convolutional networks with an order of magnitude more ReLUs than networks previously verified by any complete verifier. In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded $\ell_\infty$ norm $\epsilon = 0.1$: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness (to perturbations with bounded norm) for the remainder. Across all robust training procedures and network architectures considered, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.

> [!note] Markdown Notes
> Comment: Accepted as a conference paper at ICLR 2019

> [!seealso] Related Papers
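For my own reference, a minimal sketch (not the authors' code) of what the abstract's "tight formulations for non-linearities" can look like for a single ReLU with known pre-activation bounds. PuLP, the helper name `add_relu`, and the toy numbers are my own assumptions; only the constraint pattern is meant to reflect the standard big-M style encoding the paper builds on.

```python
# Minimal sketch (not the authors' code) of encoding a single ReLU
# y = max(x, 0) inside an MILP, given pre-activation bounds l <= x <= u.
# PuLP is used only as a convenient modelling library here.
import pulp

def add_relu(prob, x, name, l, u):
    """Return a variable/expression equal to ReLU(x) under bounds l <= x <= u."""
    if l >= 0:                       # stably active: the ReLU is the identity
        return x
    if u <= 0:                       # stably inactive: the output is constantly 0
        return 0.0
    # Unstable phase: one binary indicator a (a = 1 means x >= 0).
    y = pulp.LpVariable(f"y_{name}", lowBound=0)
    a = pulp.LpVariable(f"a_{name}", cat=pulp.LpBinary)
    prob += y >= x                   # y is at least x
    prob += y <= x - l * (1 - a)     # if a = 1, forces y <= x
    prob += y <= u * a               # if a = 0, forces y <= 0
    return y

# Toy usage: minimise the ReLU output of one affine unit over a small box.
prob = pulp.LpProblem("toy_relu", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=-1.0, upBound=2.0)
z = 0.5 * x + 0.25                   # pre-activation, valid bounds [-0.25, 1.25]
y = add_relu(prob, z, "unit0", l=-0.25, u=1.25)
prob += y                            # objective: minimise the ReLU output
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y))                 # expect 0.0 (the ReLU can be switched off)
```

Note how the stable branches add no binary variable, which connects to the bounds-tightening highlights further down.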
## Annotations

### Notes

### Highlights From Zotero
> [!tip] Brilliant
> In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded $\ell_\infty$ norm $\epsilon = 0.1$: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness to norm-bounded perturbations for the remainder. Across all robust training procedures and network architectures considered, and for both the MNIST and CIFAR-10 datasets, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.
> 2025-07-09 7:56 am
> [!tip] Brilliant
> Second, since the predicted label is determined by the unit in the final layer with the maximum activation, proving that a unit never has the maximum activation over all bounded perturbations eliminates it from consideration. We exploit both phenomena, reducing the overall number of non-linearities considered.
> 2025-07-09 9:09 am
Can this be used to say that a safety controller has no false positives for a region?
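To make the elimination idea concrete (illustrative names and toy numbers, not the paper's API): given per-logit lower/upper bounds that hold for every allowed perturbation, any label whose best case is dominated by some other label's worst case can never be the argmax and needs no further MILP work.

```python
# Rough sketch (illustrative names, not the paper's API): given per-logit
# lower/upper bounds valid for every allowed perturbation, discard any label
# whose best case is dominated by some other label's worst case; it can
# never be the argmax, so it needs no further MILP work.
import numpy as np

def candidate_labels(lb, ub):
    """Indices of labels that might still attain the maximum logit."""
    candidates = []
    for mu in range(len(lb)):
        best_rival_floor = np.delete(lb, mu).max()
        if ub[mu] >= best_rival_floor:   # not dominated over the whole region
            candidates.append(mu)
    return candidates

# Toy bounds: classes 0 and 2 can never be the argmax, so only classes 1
# (say, the predicted label) and 3 need a full verification call.
lb = np.array([-1.0, 2.0, -3.0, 1.5])
ub = np.array([ 1.0, 4.0, -0.5, 2.5])
print(candidate_labels(lb, ub))          # -> [1, 3]
```

Since the bounds are valid over the entire perturbation region, discarding a label this way is a certificate for the whole region rather than a per-sample test, which seems to be the sense relevant to the question above.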
> [!highlight] Highlight
> Verification as solving an MILP. The general problem of verification is to determine whether some property P on the output of a neural network holds for all input in a bounded input domain $C \subseteq \mathbb{R}^m$. For the verification problem to be expressible as solving an MILP, P must be expressible as the conjunction or disjunction of linear properties $P_{i,j}$ over some set of polyhedra $C_i$, where $C = \cup_i C_i$.
> 2025-07-09 9:16 am
> [!highlight] Highlight
> Let G(x) denote the region in the input domain corresponding to all allowable perturbations of a particular input x.
> 2025-07-14 9:01 pm
> [!highlight] Highlight
> As in Madry et al. (2018), we say that a neural network is robust to perturbations on x if the predicted probability of the true label λ(x) exceeds that of every other label for all perturbations:
> 2025-07-09 9:19 am
> [!highlight] Highlight
> As long as G(x) ∩ X_valid can be expressed as the union of a set of polyhedra, the feasibility problem can be expressed as an MILP.
> 2025-07-14 9:01 pm
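As a concrete instance (my own example, not a quote): for an $\ell_\infty$ ball of radius $\epsilon$ around $x$ with pixel values constrained to $[0,1]$, the intersection is a single box, so the condition above holds trivially, and the robustness condition from the earlier highlight (the formula after its trailing colon is reconstructed here, not quoted) reads:

$$
G(x) \cap X_{valid} = \{\, x' : \max(0,\ x_j - \epsilon) \le x'_j \le \min(1,\ x_j + \epsilon)\ \ \forall j \,\},
$$
$$
\forall x' \in G(x) \cap X_{valid},\ \forall \mu \ne \lambda(x):\quad f_{\lambda(x)}(x') > f_{\mu}(x').
$$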
> [!highlight] Highlight
> Let d(·, ·) denote a distance metric that measures the perceptual similarity between two input images.
> 2025-07-14 9:01 pm
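Putting the last two highlights together, the minimum adversarial distortion task mentioned in the abstract can be read (my reconstruction of the notation, not a quote) roughly as:

$$
\min_{x'}\ d(x, x') \quad \text{s.t.}\quad x' \in X_{valid},\ \ \exists\, \mu \ne \lambda(x):\ f_{\mu}(x') \ge f_{\lambda(x)}(x'),
$$

which is MILP-representable when $d$ is, for example, the $\ell_\infty$ distance and the network is piecewise-linear.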
> [!tip] Brilliant
> Determining tight bounds is critical for problem tractability: tight bounds strengthen the problem formulation and thus improve solve times (Vielma, 2015). For instance, if we can prove that the phase of a ReLU is stable, we can avoid introducing a binary variable. More generally, loose bounds on input to some unit will propagate downstream, leading to units in later layers having looser bounds.
> 2025-07-14 9:23 pm
> [!tip] Brilliant
> The key observation is that, for piecewise-linear non-linearities, there are thresholds beyond which further refining a bound will not improve the problem formulation. With this in mind, we adopt a progressive bounds tightening approach: we begin by determining coarse bounds using fast procedures and only spend time refining bounds using procedures with higher computational complexity if doing so could provide additional information to improve the problem formulation.
> 2025-07-14 9:28 pm
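A hedged sketch of how I read these two highlights together: compute coarse pre-activation bounds cheaply with interval arithmetic, treat units whose phase is already decided as linear, and only invoke a more expensive refinement (LP- or MILP-based in the paper; stubbed out here via a `refine` hook) for the undecided ones. Function names, the hook, and the toy data are my own; the sketch also skips the threshold observation about when further refinement stops helping.

```python
# Hedged sketch of "progressive bounds tightening" as I read it: coarse
# pre-activation bounds via interval arithmetic first, with an optional,
# more expensive refinement (LP/MILP-based in the paper; a stub hook here)
# only for units whose ReLU phase the coarse bounds leave undecided.
import numpy as np

def interval_affine(lb, ub, W, b):
    """Coarse bounds on W @ x + b given elementwise bounds lb <= x <= ub."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo = W_pos @ lb + W_neg @ ub + b
    hi = W_pos @ ub + W_neg @ lb + b
    return lo, hi

def tighten_layer(lb_in, ub_in, W, b, refine=None):
    lo, hi = interval_affine(lb_in, ub_in, W, b)
    for j in range(len(lo)):
        if lo[j] >= 0 or hi[j] <= 0:
            continue                  # phase already stable: coarse bounds suffice
        if refine is not None:        # unstable unit: pay for a tighter procedure
            lo[j], hi[j] = refine(j, lo[j], hi[j])
    # Post-activation (ReLU) bounds to propagate to the next layer.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy usage: one layer, l_inf ball of radius 0.1 around a random input in [0, 1]^3.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
x0 = rng.uniform(size=3)
lb0, ub0 = np.clip(x0 - 0.1, 0.0, 1.0), np.clip(x0 + 0.1, 0.0, 1.0)
print(tighten_layer(lb0, ub0, W, b))
```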