---
id: LIT-20250822120822
title: A Review of Formal Methods applied to Machine Learning
type: literature
created: 2025-08-22T16:08:22Z
modified: 2025-08-22T16:11:25Z
citekey:
---
# A Review of Formal Methods applied to Machine Learning
**Category:** This is a review paper.
**Context:** This paper surveys where formal methods
currently stand for machine-learning-based systems. It is
not a controls paper specifically, but a broader machine
learning survey.
**Correctness:** Really well written on the first pass. Easy
to understand and things seem well cited.
**Contributions:** Great citations linking related papers,
and it points out promising spots for future research.
Argues that verification needs to be done along the whole
pipeline: from data prep to training to implementation.
There needs to be more work on proving things about model
behavior, but in general this review has a positive outlook
on the field.
**Clarity:** Very well written and easy to understand. One
open question: what is 'abstractification' of a network?
## Second Pass
This review gives an overview of formal methods applied to
machine learning. Formal methods have been used in ML to
test robustness against various input perturbations. The
authors start by covering several types of formal methods:
deductive verification, design by refinement, proof
assistants, model checking, semantic static analysis, and
finally abstract interpretation, a form of semantic static
analysis. That last one has been used the most in FM for
ML.
A large part of the paper focuses on formal methods for
neural network perturbation robustness. They split the
methods into two types: complete and incomplete formal
methods. Complete formal methods are *sound* and
*complete*, meaning they produce neither false positives
nor false negatives. These methods generally do not scale
to large networks (beyond a few hundred neurons gets
rough), but they give the highest level of assurance. Here
there are two subcategories: SMT-based solvers and
MILP-based solvers.
SMT solvers are Satisfiability Modulo Theories solvers.
These solvers take a set of constraints and check whether
all of the constraints can be satisfied together. Safety
conditions are encoded as the *negation* of the safety
constraint. That way, if the safety condition can be
violated, the SMT solver will pick it up as a
counterexample.
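As a minimal sketch of the negation trick (a toy one-ReLU network with made-up weights, my own illustration rather than anything from the paper; real tools hand this to a full SMT solver):

```python
# Toy "complete" check: does the negated safety property y < 0 have a
# satisfying input? The network is y = w2*relu(w1*x + b1) + b2 on [lo, hi].
# Weights and the function name are hypothetical, for illustration only.

def relu(z):
    return max(0.0, z)

def find_counterexample(w1, b1, w2, b2, lo=0.0, hi=1.0):
    """Return an x violating the safety property y >= 0, or None.
    Because y is piecewise linear in one variable, its minimum lies at
    an interval endpoint or the ReLU kink, so this check is exhaustive
    (complete) for this toy network."""
    candidates = [lo, hi]
    if w1 != 0.0:
        kink = -b1 / w1            # input where the ReLU switches on/off
        if lo <= kink <= hi:
            candidates.append(kink)
    for x in candidates:
        y = w2 * relu(w1 * x + b1) + b2
        if y < 0.0:                # negated safety condition satisfied
            return x               # "sat": concrete counterexample
    return None                    # "unsat": property holds on [lo, hi]
```

With `w2 = -1.0`, for instance, the check returns a concrete violating input; with a safe choice of weights it returns `None`, mirroring the sat/unsat answers an SMT solver gives.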
MILP-based solvers are Mixed Integer Linear Programming
solvers. MILPs use linear programming where certain
variables are constrained to be integers, which reduces the
solution space. Instead of an explicit satisfiability
condition on safety, MILPs can use a minimizing objective
to generate counterexamples. For example, a MILP safety
condition might be formalized as $$ \min \mathbf{x}_n $$
where a negative value of $\mathbf{x}_n$ is 'a valid
counterexample'.
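As a sketch of how the network itself becomes MILP constraints (this is the standard big-M ReLU encoding, my own illustration rather than the paper's notation): each unit $h = \max(0, z)$ is expressed with a binary variable $a$ and a large constant $M$,
$$ h \ge z, \quad h \ge 0, \quad h \le z + M(1-a), \quad h \le Ma, \quad a \in \{0, 1\} $$
When $a = 1$ the constraints force $h = z$ (active), and when $a = 0$ they force $h = 0$ (inactive). Minimizing $\mathbf{x}_n$ subject to one such set of constraints per neuron then searches for a counterexample exactly.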
There are also 'incomplete formal methods'. These incomplete
methods are sound, meaning they will not produce false
negatives, but they may not be complete, meaning they may
produce false positives. They scale better, but the false
positives can be pretty annoying.
One of the main incomplete methods is abstract
interpretation. Things get very weird very fast: this is
all about zonotopes and using over-approximation of sets to
say things about stability. There is weird stuff going on
with cell splitting and merging that seems like a lot to
grasp. But apparently it works.
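A much simpler abstract domain than zonotopes is plain intervals. A minimal sketch of the over-approximation idea, with hypothetical weights (my own illustration, not the paper's method):

```python
# Interval bound propagation: push an input interval through each layer,
# soundly over-approximating the reachable outputs. Coarser than the
# zonotope domain the review discusses, but the same principle.

def relu_interval(lo, hi):
    return max(0.0, lo), max(0.0, hi)

def affine_interval(lo, hi, w, b):
    """Propagate an interval through y = w*x + b (1-D case)."""
    a, c = w * lo + b, w * hi + b
    return min(a, c), max(a, c)

def verify_output_nonnegative(w1, b1, w2, b2, lo, hi):
    """Over-approximate the range of y = w2*relu(w1*x + b1) + b2 for
    x in [lo, hi]. True guarantees y >= 0 (sound); False may be a
    false positive, since the interval is wider than the true range
    (incomplete)."""
    lo, hi = affine_interval(lo, hi, w1, b1)
    lo, hi = relu_interval(lo, hi)
    lo, hi = affine_interval(lo, hi, w2, b2)
    return lo >= 0.0
```

The cell splitting mentioned above is the standard fix for the false positives: split the input interval, propagate each half separately, and the over-approximation tightens.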
Next, they spend some time on other machine learning
methods and the formal methods that have been applied to
them. Of particular interest to me is applying formal
methods to support vector machines and decision trees.
There are some useful methods for detecting robustness to
adversarial inputs, and for finding the nearest
misclassification adversarial case.
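For a linear SVM, the nearest-misclassification distance has a closed form (my own sketch, not the paper's method): the $L_2$ distance from a point $x$ to the decision boundary $w \cdot x + b = 0$ is $|w \cdot x + b| / \lVert w \rVert$.

```python
import math

# Exact nearest adversarial distance for a linear SVM f(x) = sign(w.x + b).
# Only the linear case is this easy; kernel SVMs and decision trees need
# the heavier methods the review covers.
def nearest_misclassification_distance(w, b, x):
    margin = sum(wi * xi for wi, xi in zip(w, x)) + b
    return abs(margin) / math.sqrt(sum(wi * wi for wi in w))
```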
## Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**