# A Review of Formal Methods applied to Machine Learning

Robert recommended this topic space to me: formal methods applied to machine
learning algorithms. The purpose is clear-cut: if we can use formal methods to
prove things about machine-learning-based systems, we can argue that deploying
them in critical systems is safe and viable. This review seems to cover a good
bit of that.
# First Pass
- Machine-learned software is especially attractive because it is more general
than conventional programming: tasks such as computer vision and decision
making are very difficult to program by hand relative to their ML counterparts.
- Abstract interpretation can connect formal methods that would otherwise seem
disjoint, according to these authors (the interval sketch after this list is a
small instance of that style).
- They spend a lot of pages (~15) on *formal methods for neural networks*.
There are two kinds:
    - **Complete Formal Methods** are sound and complete, and can return a
    counterexample when a property fails (see the SMT sketch after this list).
    That said, they are usually not sound with respect to floating-point
    arithmetic and often neglect rounding errors.
    - **Incomplete Formal Methods** are generally able to scale to larger
    neural networks, and are often sound, but they suffer from false positives:
    they may fail to prove a property that actually holds because their bounds
    are too loose (the interval sketch after this list shows where that
    looseness comes from).
- In comparison, they spend only ~2 pages on support vector machines and
decision trees :( What did they ever do to you, man!
- They also mention that formal methods for data preparation and training are
very much works in progress.
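
To make the *incomplete* side concrete, here is a minimal sketch (mine, not the
paper's) of interval bound propagation on a toy fully-connected ReLU network.
The interval domain is the simplest abstract-interpretation domain: the bounds
it computes are sound, and their looseness is exactly where false positives
come from. The weights, `eps`, and `threshold` below are made up for
illustration.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Splitting W into positive and negative parts gives sound (but possibly
    loose) output bounds -- this is the interval abstract domain.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def output_stays_below(layers, x, eps, threshold):
    """Soundly check that every input within L-inf distance eps of x keeps
    the single output below threshold. A False answer may be a false
    positive: the interval bounds might just be too loose."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # no activation on the output layer
            lo, hi = interval_relu(lo, hi)
    return bool(hi[0] < threshold)

# Toy 2-2-1 ReLU network with made-up weights, purely for illustration.
layers = [
    (np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.0])),
    (np.array([[1.0, 1.0]]), np.array([0.0])),
]
print(output_stays_below(layers, np.array([0.2, 0.1]), eps=0.05, threshold=1.0))
```

If `hi[0]` overshoots the threshold even though no real input violates it, that
is the false positive the authors warn about.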
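
For the *complete* side, a sketch of the same toy network encoded exactly for
an SMT solver via z3's Python bindings (the network and property are again
invented for illustration). A `sat` answer comes with a model, i.e. a genuine
counterexample; an `unsat` answer proves the property, but only over exact real
arithmetic, which is the floating-point soundness caveat the authors raise.

```python
from z3 import Real, Solver, If, And, sat

def relu(e):
    # Exact ReLU as an if-then-else term over the reals.
    return If(e >= 0, e, 0)

# Same toy 2-2-1 ReLU network as in the interval sketch.
x1, x2 = Real("x1"), Real("x2")
h1 = relu(1.0 * x1 - 1.0 * x2)
h2 = relu(0.5 * x1 + 0.5 * x2)
y = 1.0 * h1 + 1.0 * h2

s = Solver()
# Input region: L-infinity ball of radius 0.05 around (0.2, 0.1).
s.add(And(x1 >= 0.15, x1 <= 0.25, x2 >= 0.05, x2 <= 0.15))
# Negation of the property "y < 1.0": any model is a counterexample.
s.add(y >= 1.0)

if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("property holds over the exact (real-valued) semantics")
```

The price of that exactness is scalability: case-splitting on every ReLU is
what keeps complete methods from handling the larger networks that incomplete
methods can reach.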