# First Pass
**Category:** Experimental Results

**Context:** They use an F1/10 model of an autonomous car to test out a simplex safety architecture on a neural-network-based controller. They try several different neural net types. Their trick is to use real-time reachability to tell when to switch between the optimal controller and the safety controller.
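To keep the switching logic straight in my head, here's a minimal sketch of the simplex decision step. All the names (`ml_controller`, `safety_controller`, `reach_check`) are placeholders of mine, not the authors' code:

```python
# Minimal sketch of the simplex switching idea: run the high-performance neural
# controller by default, and fall back to the safety controller whenever the
# real-time reachability check cannot show the proposed action is safe.
# Every name here is a placeholder, not the paper's implementation.

def simplex_step(state, ml_controller, safety_controller, reach_check, horizon_s=1.0):
    """Choose which controller's command to apply for this control period."""
    proposed = ml_controller(state)              # action from the neural network
    if reach_check(state, proposed, horizon_s):  # reachable set stays collision-free
        return proposed                          # keep the optimal controller
    return safety_controller(state)              # otherwise use the verified fallback
```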
**Correctness:**

They seem to do things pretty carefully and by the book. All of their explanations make sense and they do a good job citing sources. They do punt when it comes to formally verifying the switching mechanism itself, but they note that this is future work.
**Contributions:**

They show how a simplex system can work in practice and highlight the difficulties of the *sim2real* transition for machine-learning controllers.
**Clarity:**

Really nicely written.
# Second Pass
**What is the main thrust?**

They use a simplex-style controller setup with real-time reachability to decide when to use an optimal, ML-based controller versus a safety-oriented controller. The reachability computation runs in real time, and they demonstrate how the different ML models line up against one another.
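As a toy stand-in for the reachability side (the paper's real-time reachability is certainly more precise than this), one crude over-approximation is to inflate the car's position box by max speed times the horizon and test it against obstacle boxes; the helper names below are hypothetical:

```python
# Crude stand-in for a real-time reachability check: over-approximate every
# position the car could occupy within the horizon (speed is bounded by v_max,
# so inflate the current position by v_max * horizon in each axis) and test
# that box against known obstacle boxes. This is my own illustration, not the
# paper's algorithm, which computes much tighter reachable sets.

def reachable_box(x, y, v_max, horizon_s):
    """Axis-aligned box containing all positions reachable within horizon_s."""
    r = v_max * horizon_s
    return (x - r, y - r, x + r, y + r)

def boxes_intersect(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def reach_check(x, y, v_max, horizon_s, obstacles):
    """True (safe) if the over-approximated reachable set misses every obstacle."""
    reach = reachable_box(x, y, v_max, horizon_s)
    return not any(boxes_intersect(reach, obs) for obs in obstacles)
```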
**What is the supporting evidence?**

They ran a large set of experiments and published all of the results, with their main metric being mean ML usage.
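My reading of that metric, sketched with hypothetical names (the paper's exact definition may differ): the fraction of control steps on which the ML controller, rather than the safety controller, was actually in charge, averaged over runs.

```python
# Sketch of the "mean ML usage" metric as I understand it; not the authors' code.

def mean_ml_usage(runs):
    """runs: list of runs, each a list of per-step booleans (True = ML controller active)."""
    per_run = [sum(run) / len(run) for run in runs if run]
    return sum(per_run) / len(per_run) if per_run else 0.0
```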
**What are the key findings?**

The biggest findings are that when obstacles are introduced (or other generally adversarial behavior),
# Third Pass
**Recreation Notes:**

**Hidden Findings:**

**Weak Points? Strong Points?**