vault backup: 2025-07-22 11:36:46

Dane Sabo 2025-07-22 11:36:46 -04:00
parent 2228e03790
commit c6490ee275
2 changed files with 33 additions and 5 deletions

@@ -0,0 +1,24 @@
# First Pass
**Category:**
Soapbox kinda paper.
**Context:**
Robert Woods is a professor at the University of Tennessee who was previously
a senior researcher at Oak Ridge National Laboratory.
**Correctness:**
**Contributions:**
**Clarity:**
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

@@ -9,19 +9,23 @@ Drones
**Correctness:**
Not very. They do really well in the intro and methodology, but shit hits the fan
when it comes to the results. They don't explain things well, and they don't
really establish why their learned boundary between the nominal and recovery
controllers should be successful.
**Contributions:**
The biggest contribution these guys make is demonstrating the feasibility of
their reinforcement-learned switching between the nominal controller and the
recovery controller.
**Clarity:**
Well written until the whole thing came apart at the end.
# Second Pass
**What is the main thrust?**
I read this a second time but I don't think it's worth it.
**What is the supporting evidence?**
**What are the key findings?**
Their main contribution is trying to use RL to learn when to switch to a
recovery controller.
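For context, a minimal sketch of the general idea of switching between a nominal and a recovery controller based on a learned (or, here, hand-set) decision rule. This is a generic illustration, not the paper's actual RL policy; the controllers, threshold, and toy plant are all made up:
```python
import numpy as np

def nominal_controller(state):
    # Mild proportional action toward the origin (stand-in nominal controller).
    return -0.5 * state

def recovery_controller(state):
    # More aggressive action used when the state looks unsafe (stand-in recovery controller).
    return -2.0 * state

def switching_policy(state, threshold=1.0):
    # Stand-in for the learned switch: in the paper this decision boundary
    # would come from RL; here it is just a norm threshold for illustration.
    if np.linalg.norm(state) > threshold:
        return recovery_controller
    return nominal_controller

# One rollout on a trivial integrator plant, applying whichever controller
# the switching policy selects at each step.
state = np.array([2.0, -1.5])
for _ in range(20):
    controller = switching_policy(state)
    state = state + 0.1 * controller(state)  # simple Euler step
print("final state:", state)
```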
# Third Pass
**Recreation Notes:**