| authors | citekey | publish_date | publisher | location | pages | last_import |
|---|---|---|---|---|---|---|
|  | sunFormalVerificationNeural2019 | 2019-04-16 | Association for Computing Machinery | New York, NY, USA | 147–156 | 2025-05-12 |
Indexing Information
Published: 2019-04
DOI: 10.1145/3302504.3311802
ISBN: 978-1-4503-6282-5
#InFirstPass
[!Abstract] In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace that is characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial states such that a robot trajectory starting from these initial states is guaranteed to avoid the obstacles. Our approach is to construct a finite state abstraction of the system and use standard reachability analysis over the finite state abstraction to compute the set of safe initial states. To mathematically model the imaging function, that maps the robot position to the LiDAR image, we introduce the notion of imaging-adapted partitions of the workspace in which the imaging function is guaranteed to be affine. Given this workspace partitioning, a discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) non-linearity, we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible assignments of different ReLUs. To accelerate this process, we develop a pre-processing algorithm that could rapidly prune the space of feasible ReLU assignments. Finally, we demonstrate the efficiency of the proposed algorithms using numerical simulations with the increasing complexity of the neural network controller.

[!seealso] Related Papers
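The reachability step described in the abstract reduces to plain graph reachability once the finite-state abstraction is built. A minimal sketch of that final step, assuming the abstraction is already given as a successor map; `states`, `transitions`, and `unsafe` are illustrative names, not the paper's implementation:

```python
from collections import deque

def safe_initial_states(states, transitions, unsafe):
    """Abstract states from which no trajectory can reach `unsafe`.

    states:      iterable of abstract state ids
    transitions: dict mapping state -> set of successor states
    unsafe:      abstract states whose workspace regions hit an obstacle
    """
    # Invert the successor map so we can walk backwards from unsafe states.
    predecessors = {s: set() for s in states}
    for s, succs in transitions.items():
        for t in succs:
            predecessors[t].add(s)

    # Backward reachability: everything that *can* reach an unsafe state.
    can_reach_unsafe = set(unsafe)
    frontier = deque(unsafe)
    while frontier:
        t = frontier.popleft()
        for s in predecessors[t]:
            if s not in can_reach_unsafe:
                can_reach_unsafe.add(s)
                frontier.append(s)

    # Safe initial states are the complement.
    return set(states) - can_reach_unsafe

# Toy example: state 3 overlaps an obstacle, states 0-2 never reach it.
print(safe_initial_states(
    states={0, 1, 2, 3},
    transitions={0: {1}, 1: {2}, 2: {2}, 3: {3}},
    unsafe={3},
))  # -> {0, 1, 2}
```

A state is safe exactly when no path from it reaches an obstacle-intersecting state, which is why a backward search from the unsafe set suffices.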
Annotations
Notes
![[Formal verification of neural network controlled autonomous systems-Note]]
Highlights From Zotero
[!highlight] Highlight our objective is to compute the set of safe initial states such that a robot trajectory starting from these initial states is guaranteed to avoid the obstacles. 2025-05-09 1:23 pm
[!done] Important To accelerate this process, we develop a pre-processing algorithm that could rapidly prune the space of feasible ReLU assignments. 2025-05-09 1:25 pm
[!highlight] Highlight the safety and reliability of these AI-enabled CPS is still an under-studied problem. It is then unsurprising that the failure of these AI-controlled CPS in several, safety-critical, situations leads to human fatalities [1]. 2025-05-09 1:26 pm
[!tip] Brilliant Representatives of the first class, namely testing of neural networks, are the works reported in [9–18] in which the neural network is treated as a white box, and test cases are generated to maximize different coverage criteria. Such coverage criteria include neuron coverage, condition/decision coverage, and multi-granularity testing criteria. On the one hand, maximizing test coverage gives system designers confidence that the networks are reasonably free from defect. On the other hand, testing does not formally guarantee that a neural network satisfies a formal specification. 2025-05-09 1:27 pm
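To make the neuron-coverage criterion mentioned in this passage concrete, here is a small hypothetical sketch (not code from any of [9–18]): coverage is the fraction of hidden ReLU units driven above a threshold by at least one test input.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def neuron_coverage(layers, test_inputs, threshold=0.0):
    """layers: list of (W, b) pairs for the hidden layers.
    Returns the fraction of hidden neurons whose post-ReLU output
    exceeds `threshold` on at least one test input."""
    fired = [np.zeros(b.shape, dtype=bool) for _, b in layers]
    for x in test_inputs:
        h = np.asarray(x, dtype=float)
        for i, (W, b) in enumerate(layers):
            h = relu(W @ h + b)
            fired[i] |= h > threshold  # these neurons are covered by x
    covered = sum(int(f.sum()) for f in fired)
    total = sum(f.size for f in fired)
    return covered / total
```

A coverage-guided tester then searches for inputs that raise this ratio; as the highlight notes, even full coverage is not a formal guarantee.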
[!highlight] Highlight To take into consideration the effect of the neural network decisions on the entire system behavior, several researchers focused on the falsification (or semi-formal verification) of autonomous systems that include machine learning components [19–21]. In such falsification frameworks, the objective is to generate corner test cases that force a violation of system-level specifications. 2025-05-09 1:28 pm
[!highlight] Highlight Unfortunately, the input-output range properties are simplistic and fail to capture the safety and reliability of cyber-physical systems when controlled by a neural network. Recent works showed how to perform reachability-based verification of closed-loop systems in the presence of learning components [38–40]. Reachability analysis is performed by either separately estimating the output set of the neural network and the reachable set of continuous dynamics [38], or by translating the neural network controlled system into a hybrid system [39]. Once the neural network controlled system is translated into a hybrid system, off-the-shelf existing verification tools of hybrid systems, such as SpaceEx [41] for piecewise affine dynamics and Flow∗ [42] for nonlinear dynamics, can be used to verify safety properties of the system. Another related technique is the safety verification using barrier certificates [43]. In such approach, a barrier function is searched using several simulation traces to provide a certificate that unsafe states are not reachable from a given set of initial states. 2025-05-09 1:33 pm
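A hedged sketch of the first strategy in this highlight (separately bounding the controller's output set and the dynamics, in the spirit of [38]), using crude interval arithmetic; real tools use much tighter set representations, and all names here are illustrative:

```python
import numpy as np

def ibp_layer(W, b, lo, hi):
    """Interval bounds of W @ x + b over the box lo <= x <= hi."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def controller_bounds(layers, lo, hi):
    """Box over-approximating a ReLU network's output on [lo, hi]."""
    for W, b in layers[:-1]:
        lo, hi = ibp_layer(W, b, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return ibp_layer(*layers[-1], lo, hi)  # affine output layer

def step_reach(A, B, layers, x_lo, x_hi):
    """Box over-approximating {A x + B u(x) : x in [x_lo, x_hi]}
    for linear dynamics x+ = A x + B u with NN controller u."""
    u_lo, u_hi = controller_bounds(layers, x_lo, x_hi)
    ax_lo, ax_hi = ibp_layer(A, np.zeros(A.shape[0]), x_lo, x_hi)
    bu_lo, bu_hi = ibp_layer(B, np.zeros(B.shape[0]), u_lo, u_hi)
    return ax_lo + bu_lo, ax_hi + bu_hi  # sound, if conservative
```

Iterating `step_reach` and checking each box against the obstacles gives a conservative safety check; the over-approximation error is what the tighter set representations in [38–40] are designed to reduce.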
[!done] Important Arguably, the ability of neural networks to process high-bandwidth sensory signals (e.g., cameras and LiDARs) is one of the main motivations behind the current explosion in the use of machine learning in robotics and CPS. 2025-05-09 1:35 pm
Follow-Ups
[!example] we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible assignments of different ReLUs.
- #Follow-Up ✅ 2025-05-12
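The SMC encoding quoted above can be pictured as follows: fixing an on/off assignment for every ReLU makes the network affine, so each assignment reduces to a convex feasibility query. A brute-force sketch for a single hidden layer (illustrative only; the actual SMC solver interleaves a SAT solver with these convex checks, and the paper's pre-processing prunes most of the 2^n assignments):

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

def feasible_relu_patterns(W, b, x_lo, x_hi):
    """Yield the on/off patterns of the hidden layer relu(W x + b)
    that some input x in the box [x_lo, x_hi] can realize."""
    n_hidden, n_in = W.shape
    for pattern in product((0, 1), repeat=n_hidden):
        # delta_i = 1 forces (W x + b)_i >= 0  <=>  -W_i x <= b_i
        # delta_i = 0 forces (W x + b)_i <= 0  <=>   W_i x <= -b_i
        A_ub = np.array([-W[i] if d else W[i] for i, d in enumerate(pattern)])
        b_ub = np.array([b[i] if d else -b[i] for i, d in enumerate(pattern)])
        res = linprog(np.zeros(n_in), A_ub=A_ub, b_ub=b_ub,
                      bounds=list(zip(x_lo, x_hi)))
        if res.status == 0:  # LP feasible: the pattern is realizable
            yield pattern
```

On each feasible pattern the network is a fixed affine map, so the safety of that region can then be checked with convex reasoning alone.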
[!example] Therefore, recent works focused entirely on verifying neural networks against simple input-output specifications [28–33]. Such input-output techniques compute a guaranteed range for the output of a deep neural network given a set of inputs represented as a convex polyhedron.
- #Follow-Up ✅ 2025-05-12
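For the simplest instance of such a specification, the exact range of a single affine output over a polyhedral input set {x : C x <= d} already falls out of two LPs; a hypothetical sketch (networks with ReLUs need relaxations, or the MILP encoding in the next follow-up):

```python
import numpy as np
from scipy.optimize import linprog

def affine_output_range(w, b0, C, d):
    """Exact [min, max] of w @ x + b0 over the polyhedron {x : C x <= d}.
    Assumes the polyhedron is bounded, so both LPs have finite optima."""
    free = [(None, None)] * len(w)  # linprog defaults to x >= 0 otherwise
    lo = linprog(c=np.asarray(w), A_ub=C, b_ub=d, bounds=free)
    hi = linprog(c=-np.asarray(w), A_ub=C, b_ub=d, bounds=free)
    return lo.fun + b0, -hi.fun + b0
```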
[!example] For example, by using binary variables to encode piecewise linear functions, the constraints of ReLU functions are encoded as a Mixed-Integer Linear Programming (MILP). Combining output specifications that are expressed in terms of Linear Programming (LP), the verification problem eventually turns to a MILP feasibility problem [32, 34].
- #Follow-Up ✅ 2025-05-12
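Concretely, the binary encoding this passage refers to is usually the big-M construction below; this is my summary of the standard trick, not a formula quoted from [32, 34]. With pre-activation bounds l ≤ x ≤ u (l < 0 < u) and one binary δ per ReLU, y = max(0, x) becomes:

```latex
% Big-M encoding of one ReLU y = max(0, x), given bounds l <= x <= u:
\begin{align*}
  y &\ge x, & y &\ge 0, \\
  y &\le x - l\,(1 - \delta), & y &\le u\,\delta, & \delta &\in \{0, 1\}.
\end{align*}
% delta = 1 forces y = x (active ReLU); delta = 0 forces y = 0 (inactive).
```

Stacking these constraints layer by layer, together with LP constraints on the input set and output specification, yields exactly the MILP feasibility problem the quote describes.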