vault backup: 2025-05-12 11:36:59

This commit is contained in:
Dane Sabo 2025-05-12 11:36:59 -04:00
parent 6351232ed1
commit 394653f6ca
10 changed files with 392 additions and 7 deletions

View File

@@ -254,7 +254,24 @@
"model": 242,
"Modelling": 243,
"Nonlinear-control": 244,
"Wavelets": 245
"Wavelets": 245,
"Airborne-Collision-Avoidance-System": 246,
"Deep-Neural-Networks-DNNs": 247,
"Rectified-Linear-Unit-ReLU": 248,
"ReLU-Function": 249,
"Satisfiability-Modulo-Theories-SMT": 250,
"Abstract-Domain": 251,
"Generalization-Error": 252,
"Hide-Layer": 253,
"Input-Vector": 254,
"Root-Mean-Square-Error": 255,
"Computer-Science---Computer-Vision-and-Pattern-Recognition": 256,
"Abstract-Interpretation": 257,
"avionics-software": 258,
"development-process": 259,
"safety": 260,
"static-analysis": 261,
"Computer-Science---Cryptography-and-Security": 262
},
"_version": 3
}

View File

@@ -0,0 +1,56 @@
---
authors:
- "Bunel, Rudy R"
- "Turkaslan, Ilker"
- "Torr, Philip"
- "Kohli, Pushmeet"
- "Mudigonda, Pawan K"
citekey: "bunelUnifiedViewPiecewise2018"
publish_date: 2018-01-01
volume: 31
publisher: "Curran Associates, Inc."
last_import: 2025-05-12
---
# Indexing Information
Published: 2018-01
#ToRead
>[!Abstract]
>The success of Deep Learning and its potential use in many safety-critical
>applications has motivated research on formal verification of Neural Network
>(NN) models. Despite the reputation of learned NN models to behave as black
>boxes and the theoretical hardness of proving their properties, researchers
>have been successful in verifying some classes of models by exploiting their
>piecewise linear structure and taking insights from formal methods such as
>Satisfiability Modulo Theory. These methods are however still far from
>scaling to realistic neural networks. To facilitate progress on this crucial
>area, we make two key contributions. First, we present a unified framework
>that encompasses previous methods. This analysis results in the identification
>of new methods that combine the strengths of multiple existing approaches,
>accomplishing a speedup of two orders of magnitude compared to the previous
>state of the art. Second, we propose a new data set of benchmarks which
>includes a collection of previously released testcases. We use the benchmark
>to provide the first experimental comparison of existing algorithms and
>identify the factors impacting the hardness of verification problems.

>[!seealso] Related Papers
>
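
As a quick intuition for the bounding step that the paper's unified branch-and-bound view is built around, here is a minimal interval bound propagation sketch. This is my own toy code, not any specific method from the paper; the 2-4-1 network shape and weights are made up.

```python
# A minimal sketch (not the paper's method): interval bound propagation
# through a small ReLU network -- the cheap bounding primitive that
# branch-and-bound verifiers refine by splitting the input domain.
import numpy as np

def interval_bounds(weights, biases, lower, upper):
    """Propagate the input box [lower, upper] through affine/ReLU layers."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        center, radius = W @ center + b, np.abs(W) @ radius
        lower, upper = center - radius, center + radius
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Hypothetical 2-4-1 network; if upper < 0 on the whole box, the property
# "output is negative" is verified without any branching at all.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 2)), rng.normal(size=(1, 4))]
bs = [rng.normal(size=4), rng.normal(size=1)]
lo, hi = interval_bounds(Ws, bs, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(lo, hi)
```

When the bounds are too loose to decide, the verifiers the paper unifies split the input box (or a ReLU's phase) and bound each half again.
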
# Annotations
## Notes
![[Paper Notes/A Unified View of Piecewise Linear Neural Network Verification.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -0,0 +1,46 @@
---
authors:
- "Pulina, Luca"
- "Tacchella, Armando"
- "Touili, Tayssir"
- "Cook, Byron"
- "Jackson, Paul"
citekey: "pulinaAbstractionRefinementApproachVerification2010"
publish_date: 2010-01-01
publisher: "Springer"
location: "Berlin, Heidelberg"
pages: 243-257
last_import: 2025-05-12
---
# Indexing Information
Published: 2010-01
**DOI**
[10.1007/978-3-642-14295-6_24](https://doi.org/10.1007/978-3-642-14295-6_24)
**ISBN**
[978-3-642-14295-6](https://www.isbnsearch.org/isbn/978-3-642-14295-6)
#Abstract-Domain, #Generalization-Error, #Hide-Layer, #Input-Vector, #Root-Mean-Square-Error
#ToRead
>[!Abstract]
>A key problem in the adoption of artificial neural networks in safety-related applications is that misbehaviors can be hardly ruled out with traditional analytical or probabilistic techniques. In this paper we focus on specific networks known as Multi-Layer Perceptrons (MLPs), and we propose a solution to verify their safety using abstractions to Boolean combinations of linear arithmetic constraints. We show that our abstractions are consistent, i.e., whenever the abstract MLP is declared to be safe, the same holds for the concrete one. Spurious counterexamples, on the other hand, trigger refinements and can be leveraged to automate the correction of misbehaviors. We describe an implementation of our approach based on the HySAT solver, detailing the abstraction-refinement process and the automated correction strategy. Finally, we present experimental results confirming the feasibility of our approach on a realistic case study.

>[!seealso] Related Papers
>
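
For intuition, a minimal sketch of the abstract-then-refine loop described above, using plain interval arithmetic rather than the paper's HySAT encoding, and refining by bisecting the input box rather than by analysing spurious counterexamples. The network shapes, the bisection heuristic, and the depth budget are my assumptions.

```python
# A minimal sketch of abstraction-refinement (not the paper's encoding):
# over-approximate a sigmoid MLP with interval arithmetic; if the abstract
# output range is safe, so is the concrete network; otherwise refine.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def abstract_output(W1, b1, w2, b2, lo, hi):
    """Interval over-approximation of w2 . sigmoid(W1 x + b1) + b2 on a box."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    h_lo = W1 @ c + b1 - np.abs(W1) @ r
    h_hi = W1 @ c + b1 + np.abs(W1) @ r
    h_lo, h_hi = sigmoid(h_lo), sigmoid(h_hi)  # sigmoid is monotone
    c, r = (h_lo + h_hi) / 2.0, (h_hi - h_lo) / 2.0
    return w2 @ c + b2 - np.abs(w2) @ r, w2 @ c + b2 + np.abs(w2) @ r

def verify(W1, b1, w2, b2, lo, hi, safe_lo, safe_hi, depth=0):
    out_lo, out_hi = abstract_output(W1, b1, w2, b2, lo, hi)
    if safe_lo <= out_lo and out_hi <= safe_hi:
        return True   # consistency: abstract safe implies concrete safe
    if depth >= 12:
        return False  # refinement budget exhausted: possibly unsafe
    i = int(np.argmax(hi - lo))  # refine: bisect the widest input dimension
    mid = (lo[i] + hi[i]) / 2.0
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[i], lo_right[i] = mid, mid
    return (verify(W1, b1, w2, b2, lo, hi_left, safe_lo, safe_hi, depth + 1)
            and verify(W1, b1, w2, b2, lo_right, hi, safe_lo, safe_hi, depth + 1))
```

The paper additionally checks whether an abstract counterexample is real on the concrete MLP, and uses spurious ones to drive both refinement and automated repair.
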
# Annotations
## Notes
![[Paper Notes/An Abstraction-Refinement Approach to Verification of Artificial Neural Networks.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -0,0 +1,38 @@
---
authors:
- "Tjeng, Vincent"
- "Xiao, Kai"
- "Tedrake, Russ"
citekey: "tjengEvaluatingRobustnessNeural2019"
publish_date: 2019-02-18
last_import: 2025-05-12
---
# Indexing Information
Published: 2019-02
**DOI**
[10.48550/arXiv.1711.07356](https://doi.org/10.48550/arXiv.1711.07356)
#Computer-Science---Cryptography-and-Security, #Computer-Science---Machine-Learning, #Computer-Science---Computer-Vision-and-Pattern-Recognition
#ToRead
>[!Abstract]
>Neural networks have demonstrated considerable success on a wide variety of real-world problems. However, networks trained only to optimize for training accuracy can often be fooled by adversarial examples - slightly perturbed inputs that are misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art. We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup allows us to verify properties on convolutional networks with an order of magnitude more ReLUs than networks previously verified by any complete verifier. In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded $l_\infty$ norm $\epsilon=0.1$: for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness (to perturbations with bounded norm) for the remainder. Across all robust training procedures and network architectures considered, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.

>[!note] Markdown Notes
>Comment: Accepted as a conference paper at ICLR 2019

>[!seealso] Related Papers
>
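
The formulation rests on the standard mixed-integer encoding of a ReLU $y = \max(0, x)$ with known pre-activation bounds $l \le x \le u$, $l < 0 < u$ (notation mine, not quoted from the paper; the paper's contribution includes the presolve that tightens $l$ and $u$):

$$
y \ge x, \qquad y \ge 0, \qquad y \le x - l\,(1 - a), \qquad y \le u\,a, \qquad a \in \{0, 1\}
$$

With $a = 1$ the constraints force $y = x$ (active phase); with $a = 0$ they force $y = 0$. Tighter bounds $l, u$ shrink the LP relaxation obtained by dropping integrality, which is where the presolve pays off.
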
# Annotations
## Notes
![[Paper Notes/Evaluating Robustness of Neural Networks with Mixed Integer Programming.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -0,0 +1,48 @@
---
authors:
- "Souyris, Jean"
- "Wiels, Virginie"
- "Delmas, David"
- "Delseny, Hervé"
- "Cavalcanti, Ana"
- "Dams, Dennis R."
citekey: "souyrisFormalVerificationAvionics2009"
publish_date: 2009-01-01
publisher: "Springer"
location: "Berlin, Heidelberg"
pages: 532-546
last_import: 2025-05-12
---
# Indexing Information
Published: 2009-01
**DOI**
[10.1007/978-3-642-05089-3_34](https://doi.org/10.1007/978-3-642-05089-3_34)
**ISBN**
[978-3-642-05089-3](https://www.isbnsearch.org/isbn/978-3-642-05089-3)
#formal-verification, #verification, #Abstract-Interpretation, #avionics-software, #development-process, #safety, #static-analysis
#ToRead
>[!Abstract]
>This paper relates an industrial experience in the field of formal verification of avionics software products. Ten years ago we presented our very first technological research results in [18]. What was just an idea plus some experimental results at that time is now an industrial reality. Indeed, since 2001, Airbus has been integrating several tool supported formal verification techniques into the development process of avionics software products. Just like all aspects of such processes, the use of formal verification techniques must comply with DO-178B [9] objectives and Airbus has been a pioneer in this domain.

>[!seealso] Related Papers
>
# Annotations
## Notes
![[Paper Notes/Formal Verification of Avionics Software Products.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -0,0 +1,42 @@
---
authors:
- "Ehlers, Rüdiger"
- "D'Souza, Deepak"
- "Narayan Kumar, K."
citekey: "ehlersFormalVerificationPieceWise2017"
publish_date: 2017-01-01
publisher: "Springer International Publishing"
location: "Cham"
pages: 269-286
last_import: 2025-05-12
---
# Indexing Information
Published: 2017-01
**DOI**
[10.1007/978-3-319-68167-2_19](https://doi.org/10.1007/978-3-319-68167-2_19)
**ISBN**
[978-3-319-68167-2](https://www.isbnsearch.org/isbn/978-3-319-68167-2)
#ToRead
>[!Abstract]
>We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers.

>[!seealso] Related Papers
>
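
A useful reference point here is the linear over-approximation of an unstable ReLU $y = \max(0, x)$ with bounds $l < 0 < u$, often called the triangle relaxation and associated with this line of work (notation mine):

$$
y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - l)}{u - l}
$$

A solver can reason over this relaxation cheaply and fall back to exact SMT/ILP case splits only where the relaxation is inconclusive.
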
# Annotations
## Notes
![[Paper Notes/Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -0,0 +1,50 @@
---
authors:
- "Dutta, Souradeep"
- "Jha, Susmit"
- "Sankaranarayanan, Sriram"
- "Tiwari, Ashish"
- "Dutle, Aaron"
- "Muñoz, César"
- "Narkawicz, Anthony"
citekey: "duttaOutputRangeAnalysis2018"
publish_date: 2018-01-01
publisher: "Springer International Publishing"
location: "Cham"
pages: 121-138
last_import: 2025-05-12
---
# Indexing Information
Published: 2018-01
**DOI**
[10.1007/978-3-319-77935-5_9](https://doi.org/10.1007/978-3-319-77935-5_9)
**ISBN**
[978-3-319-77935-5](https://www.isbnsearch.org/isbn/978-3-319-77935-5)
#ToRead
>[!Abstract]
>Given a neural network (NN) and a set of possible inputs to the network described by polyhedral constraints, we aim to compute a safe over-approximation of the set of possible output values. This operation is a fundamental primitive enabling the formal analysis of neural networks that are extensively used in a variety of machine learning tasks such as perception and control of autonomous systems. Increasingly, they are deployed in high-assurance applications, leading to a compelling use case for formal verification approaches. In this paper, we present an efficient range estimation algorithm that iterates between an expensive global combinatorial search using mixed-integer linear programming problems, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate applications of our approach to computing flowpipes for neural network-based feedback controllers. We show that the use of local search in conjunction with mixed-integer linear programming solvers effectively reduces the combinatorial search over possible combinations of active neurons in the network by pruning away suboptimal nodes.

>[!seealso] Related Papers
>
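
A rough sketch of the alternation the abstract describes, with the expensive global MILP query left as a hypothetical oracle `exceeds(threshold)` that returns a violating input or `None`. The gradient interface, step sizes, and tolerance are my assumptions, not the paper's implementation.

```python
# Sketch: local search cheaply improves a candidate output maximum, so the
# (expensive) global MILP query only has to certify it or beat it.
import numpy as np

def local_ascent(f, grad_f, x, lo, hi, steps=200, lr=1e-2):
    """Projected gradient ascent to a local maximum of f over the box."""
    for _ in range(steps):
        x = np.clip(x + lr * grad_f(x), lo, hi)
    return x

def output_upper_bound(f, grad_f, exceeds, lo, hi, eps=1e-3):
    x = (lo + hi) / 2.0
    best = f(x)
    while True:
        x = local_ascent(f, grad_f, x, lo, hi)  # cheap local improvement
        best = max(best, f(x))
        witness = exceeds(best + eps)           # hypothetical global MILP query
        if witness is None:
            return best + eps                   # certified upper bound on the output
        x = witness                             # warm-start the next local ascent
```
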
# Annotations
## Notes
![[Paper Notes/Output Range Analysis for Deep Feedforward Neural Networks.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -0,0 +1,38 @@
---
authors:
- "Ruan, Wenjie"
- "Huang, Xiaowei"
- "Kwiatkowska, Marta"
citekey: "ruanReachabilityAnalysisDeep2018"
publish_date: 2018-05-06
last_import: 2025-05-12
---
# Indexing Information
Published: 2018-05
**DOI**
[10.48550/arXiv.1805.02242](https://doi.org/10.48550/arXiv.1805.02242)
#Computer-Science---Machine-Learning, #Statistics---Machine-Learning, #Computer-Science---Computer-Vision-and-Pattern-Recognition
#ToRead
>[!Abstract]
>Verifying correctness of deep neural networks (DNNs) is challenging. We study a generic reachability problem for feed-forward DNNs which, for a given set of inputs to the network and a Lipschitz-continuous function over its outputs, computes the lower and upper bound on the function values. Because the network and the function are Lipschitz continuous, all values in the interval between the lower and upper bound are reachable. We show how to obtain the safety verification problem, the output range analysis problem and a robustness measure by instantiating the reachability problem. We present a novel algorithm based on adaptive nested optimisation to solve the reachability problem. The technique has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability and ability to handle a broader class of networks than state-of-the-art verification approaches.>[!note] Markdown Notes

>[!note] Markdown Notes
>Comment: This is the long version of the conference paper accepted in IJCAI-2018. Github: https://github.com/TrustAI/DeepGO

>[!seealso] Related Papers
>
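
The core primitive is global optimisation under a Lipschitz bound. A minimal 1-D sketch of that primitive (a Piyavskii-style sawtooth envelope, my own toy code; the paper's algorithm nests this adaptively over input dimensions):

```python
# With Lipschitz constant L, each pair of samples caps f between them, so a
# growing sample set yields certified lower AND upper bounds on max f.
import numpy as np

def lipschitz_max(f, a, b, L, tol=1e-3):
    """Certified maximum of an L-Lipschitz f on [a, b], to within tol."""
    xs, ys = [a, b], [f(a), f(b)]
    while True:
        best = max(ys)  # certified reachable value (lower bound on the max)
        order = np.argsort(xs)
        x_s, y_s = np.array(xs)[order], np.array(ys)[order]
        # Peak of the sawtooth upper envelope on each segment.
        peaks = (y_s[:-1] + y_s[1:]) / 2 + L * (x_s[1:] - x_s[:-1]) / 2
        i = int(np.argmax(peaks))
        if peaks[i] - best <= tol:
            return best, peaks[i]  # (lower bound, upper bound) on max f
        # Sample at the peak location to tighten the envelope there.
        x_new = (x_s[i] + x_s[i + 1]) / 2 + (y_s[i] - y_s[i + 1]) / (2 * L)
        xs.append(float(x_new)); ys.append(f(x_new))
```
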
# Annotations
## Notes
![[Paper Notes/Reachability Analysis of Deep Neural Networks with Provable Guarantees.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -0,0 +1,50 @@
---
authors:
- "Katz, Guy"
- "Barrett, Clark"
- "Dill, David L."
- "Julian, Kyle"
- "Kochenderfer, Mykel J."
- "Majumdar, Rupak"
- "Kunčak, Viktor"
citekey: "katzReluplexEfficientSMT2017"
publish_date: 2017-01-01
publisher: "Springer International Publishing"
location: "Cham"
pages: 97-117
last_import: 2025-05-12
---
# Indexing Information
Published: 2017-01
**DOI**
[10.1007/978-3-319-63387-9_5](https://doi.org/10.1007/978-3-319-63387-9_5)
**ISBN**
[978-3-319-63387-9](https://www.isbnsearch.org/isbn/978-3-319-63387-9)
#Airborne-Collision-Avoidance-System, #Deep-Neural-Networks-DNNs, #Rectified-Linear-Unit-ReLU, #ReLU-Function, #Satisfiability-Modulo-Theories-SMT
#ToRead
>[!Abstract]
>Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

>[!seealso] Related Papers
>
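
To see what handling ReLU inside simplex buys, here is the brute-force alternative: fix every ReLU's phase and solve a plain LP per case. Reluplex splits on ReLU phases lazily instead of enumerating all $2^h$ cases up front, as this sketch does. The 2-3-1 network and weights are made up.

```python
# Brute-force case splitting (the baseline Reluplex improves on): for each
# activation pattern, the network is affine, so bounding the output is an LP.
from itertools import product
import numpy as np
from scipy.optimize import linprog

W1 = np.array([[1.0, -1.0], [0.5, 2.0], [-1.0, 1.0]])
b1 = np.array([0.1, -0.2, 0.0])
v = np.array([1.0, -1.0, 0.5])   # output weights
b2 = 0.3
box = [(-1.0, 1.0), (-1.0, 1.0)]

def max_output():
    best = -np.inf
    for phase in product([0, 1], repeat=3):
        act = np.array(phase, dtype=float)
        # Phase constraints: W1 x + b1 >= 0 where active, <= 0 where inactive.
        A_ub = np.vstack([-W1[act == 1], W1[act == 0]])
        b_ub = np.concatenate([b1[act == 1], -b1[act == 0]])
        c = -(act * v) @ W1  # maximize v . (act * (W1 x + b1)) + b2
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=box)
        if res.success:  # infeasible phases are skipped
            best = max(best, -res.fun + float((act * v) @ b1) + b2)
    return best

print("max output over box:", max_output())  # "output <= c" holds iff this <= c
```
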
# Annotations
## Notes
![[Paper Notes/Reluplex- An Efficient SMT Solver for Verifying Deep Neural Networks.md]]
## Highlights From Zotero
## Follow-Ups

View File

@@ -1,6 +1,6 @@
**Waiting for first pass:**
```dataview
-table file.name as "Paper", authors as "Authors", publish_date as "Date Published", last_import as "Date Imported"
+table authors as "Authors", publish_date as "Date Published", last_import as "Date Imported"
from #ToRead
sort publish_date desc
where file.name != "Literature Note"
@@ -8,7 +8,7 @@ where file.name != "Literature Note"
**Waiting for second pass:**
```dataview
-table file.name as "Paper", authors as "Authors", publish_date as "Date Published", last_import as "Date Read"
+table authors as "Authors", publish_date as "Date Published", last_import as "Date Read"
from #InFirstPass
sort publish_date desc
where file.name != "Literature Note"
@@ -16,7 +16,7 @@ where file.name != "Literature Note"
**Waiting for third pass:**
```dataview
-table file.name as "Paper", authors as "Authors", publish_date as "Date Published", last_import as "Date Read"
+table authors as "Authors", publish_date as "Date Published", last_import as "Date Read"
from #InSecondPass
sort publish_date desc
where file.name != "Literature Note"
@@ -24,8 +24,8 @@ where file.name != "Literature Note"
**Recently read:**
```dataview
-table file.name as "Paper", authors as "Authors", publish_date as "Date Published", last_import as "Date Read"
-from "Literature Notes"
+table authors as "Authors", publish_date as "Date Published", last_import as "Date Read"
+from "Literature Notes" and -#ToRead
sort publish_date desc
-where last_import >= date(today) - dur(7day)
+where last_import >= date(today) - dur(7day)
```