| authors | citekey | publish_date | last_import |
|---|---|---|---|
| Caterina Urban, Antoine Miné | urbanReviewFormalMethods2021 | 2021-04-21 | 2025-05-12 |
Indexing Information
DOI 10.48550/arXiv.2104.02466
#Computer-Science---Machine-Learning #Computer-Science---Logic-in-Computer-Science #Computer-Science---Programming-Languages
#InFirstPass
[!Abstract] We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems. Formal methods can provide rigorous correctness guarantees on hardware and software systems. Thanks to the availability of mature tools, their use is well established in the industry, and in particular to check safety-critical applications as they undergo a stringent certification process. As machine learning is becoming more popular, machine-learned components are now considered for inclusion in critical systems. This raises the question of their safety and their verification. Yet, established formal methods are limited to classic, i.e. non machine-learned software. Applying formal methods to verify systems that include machine learning has only been considered recently and poses novel challenges in soundness, precision, and scalability. We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software, with a focus on abstract interpretation based techniques as they provide a high level of scalability. This provides a golden standard and sets high expectations for machine learning verification. We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained neural networks and employ either SMT, optimization, or abstract interpretation techniques. We also discuss methods for support vector machines and decision tree ensembles, as well as methods targeting training and data preparation, which are critical but often neglected aspects of machine learning. Finally, we offer perspectives for future research directions towards the formal verification of machine learning systems.
[!seealso] Related Papers
Annotations
Notes
![[Paper Notes/A Review of Formal Methods applied to Machine Learning.md]]
Highlights From Zotero
[!tip] Brilliant Recent advances in artificial intelligence and machine learning and the availability of vast amounts of data allow us to develop computer software that efficiently and autonomously perform complex tasks that are difficult or even impossible to design using traditional explicit programming (e.g., image classification, speech recognition, etc.). This makes machine-learned software particularly attractive even in safety-critical applications, as it enables performing a whole world of new functions that could not be envisioned before 2025-05-06 5:03 pm
[!warning] Dubious Safety-critical applications require an extremely high level of insurance that the software systems behave correctly and reliably. 2025-05-06 5:04 pm
[!highlight] Highlight In contrast, research in formal methods for the development and assurance of machine learning systems remains today extremely limited. It is however rapidly gaining interest and popularity, mostly driven by the growing needs of the automotive and avionics industries. 2025-05-06 5:07 pm
[!highlight] Highlight Specifically, in the following, we give an introduction to formal methods (Section 2) with a particular focus on static analysis by abstract interpretation [37], as it offers a unifying theory for reasoning about disparate formal verification approaches (Section 2.3). 2025-05-06 5:07 pm
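A common concrete instance of abstract interpretation for neural networks is interval analysis: the input region is abstracted by a box and each layer is evaluated on bounds instead of points. The sketch below is my own illustration, not something taken from the paper; the two-layer weights are made up and the helper names (`affine_interval`, `relu_interval`) are hypothetical.

```python
import numpy as np

def affine_interval(lower, upper, W, b):
    """Propagate the box [lower, upper] through x -> W @ x + b.

    Splitting W into positive and negative parts gives sound element-wise
    bounds on the image of the box under the affine map.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lower + W_neg @ upper + b, W_pos @ upper + W_neg @ lower + b

def relu_interval(lower, upper):
    """ReLU is monotone, so it can be applied to the bounds directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Hypothetical two-layer network, purely for illustration.
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.5]]), np.array([0.1, -0.3])
W2, b2 = np.array([[2.0, -1.0]]), np.array([0.0])

# Abstract input region: x0 in [-0.1, 0.1], x1 in [0.4, 0.6].
lo, hi = np.array([-0.1, 0.4]), np.array([0.1, 0.6])
lo, hi = relu_interval(*affine_interval(lo, hi, W1, b1))
lo, hi = affine_interval(lo, hi, W2, b2)
print("output bounds:", lo, hi)  # every input in the box maps inside these bounds
```

If the computed output bounds already imply the property of interest, it holds for every input in the box; the looseness of the bounds is the over-approximation that abstract interpretation trades for scalability.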
[!highlight] Highlight Another interesting possibility is to develop approaches to determine constraints on the training process that enforce a desired property on the trained model. For instance, one could enforce a specific behavior for a given input range of interest, or a particular structure of the trained model that facilitates its subsequent verification [171] (or its abstraction for verification [125, 145, 62]). 2025-05-06 5:10 pm
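One lightweight reading of "constraints on the training process" is a soft penalty added to the training loss that discourages violations of the desired behavior on the input range of interest. The sketch below only illustrates that soft-constraint variant, not the structure-enforcing or abstraction-friendly training methods cited in [171, 125, 145, 62]; the model, the input box, the threshold, and `property_penalty` are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical regression model; architecture and data are illustrative only.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

def property_penalty(model, n_samples=128):
    """Soft constraint: on the input box [0, 1]^2 the output should stay <= 1.

    Sampling the range of interest and penalising violations with a hinge
    term nudges training towards the property; it does not prove it, so a
    verification step would still follow.
    """
    x = torch.rand(n_samples, 2)            # samples from the range of interest
    violation = torch.relu(model(x) - 1.0)  # positive only where the bound is broken
    return violation.mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    x, y = torch.rand(64, 2), torch.rand(64, 1)  # placeholder training data
    loss = nn.functional.mse_loss(model(x), y) + 10.0 * property_penalty(model)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A penalty of this kind only biases training towards the property; a verification pass would still be needed afterwards to actually establish it.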
[!highlight] Highlight More generally, verification methods should allow verifying the behavior of trained models under all circumstances and not just over the expected safe input space. This is particularly useful in cases in which the trained model is part of a larger machine learning-based system [53] and is thus more susceptible to receive unexpected inputs, e.g., from malfunctioning sensors 2025-05-06 5:11 pm
[!highlight] Highlight In this case, it would be useful to design verification methods able to relate the behavior of the inexact model to the optimal expected behavior, e.g., by determining the worst-case approximation of the trained model. 2025-05-06 5:12 pm
[!highlight] Highlight Finally, there is a need for further approaches that support diverse model structures, such as recurrent neural networks [95, 175], and that can verify model implementations by taking floating-point rounding errors into account [140, 141, 105]. 2025-05-06 5:13 pm
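The floating-point concern is easy to demonstrate: bound computations done in ordinary round-to-nearest arithmetic can understate the reachable set, so sound implementations round lower bounds down and upper bounds up. Below is a minimal, illustrative sketch of such outward rounding using Python's `math.nextafter` (available since Python 3.9); it is not how the cited tools [140, 141, 105] implement it.

```python
import math

def add_outward(a_lo, a_hi, b_lo, b_hi):
    """Interval addition with outward rounding.

    math.nextafter nudges each bound one representable float outwards, which
    is coarser than true directed rounding but sound: the real-valued sum is
    always contained in the returned interval.
    """
    return (math.nextafter(a_lo + b_lo, -math.inf),
            math.nextafter(a_hi + b_hi, math.inf))

# 0.1 and 0.2 are not exactly representable in binary floating point:
naive = 0.1 + 0.2                       # 0.30000000000000004, already rounded
lo, hi = add_outward(0.1, 0.1, 0.2, 0.2)
print(naive, (lo, hi))                  # the outward interval encloses the exact sum
```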
Follow-Ups
[!example] For instance, they are used at an industrial level in avionics [146], where the development processes of aircraft software systems have very stringent assurance and verification requirements mandated by international standards (i.e., DO-178C).
- #Follow-Up