Auto sync: 2025-08-26 11:21:28 (90 files changed)

R  "Zettelkasten/Literature Notes/.archive/CH4System_Representation_S2020pdf2254.md" -> ".archive/Literature Notes/.archive/CH4System_Representation_S2020pdf2254.md"

R  "Zettelkasten/Literature Notes/.archive/IntroductionDiffusionModels2022.md" -> ".archive/Literature Notes/.archive/IntroductionDiffusionModels2022.md"

R  "Zettelkasten/Literature Notes/.archive/Kry10TechnicalOverview.md" -> ".archive/Literature Notes/.archive/Kry10TechnicalOverview.md"

R  "Zettelkasten/Literature Notes/.archive/ME2046_Sampled_Data_Analysis_Reading_Chapter_2pdf2254ME.md" -> ".archive/Literature Notes/.archive/ME2046_Sampled_Data_Analysis_Reading_Chapter_2pdf2254ME.md"

R  "Zettelkasten/Literature Notes/.archive/ME2046_The_z_transform_Chapter_3pdf2254ME.md" -> ".archive/Literature Notes/.archive/ME2046_The_z_transform_Chapter_3pdf2254ME.md"

R  "Zettelkasten/Literature Notes/.archive/My Library.bib" -> ".archive/Literature Notes/.archive/My Library.bib"

R  "Zettelkasten/Literature Notes/.archive/aModeladoNucleoAnalisis2023.md" -> ".archive/Literature Notes/.archive/aModeladoNucleoAnalisis2023.md"

R  "Zettelkasten/Literature Notes/.archive/atsumiModifiedBodePlots2012.md" -> ".archive/Literature Notes/.archive/atsumiModifiedBodePlots2012.md"
This commit is contained in:
Dane Sabo 2025-08-26 11:21:28 -04:00
parent 64f1617e9d
commit 8824324149
90 changed files with 909 additions and 1 deletion

View File

@ -3,7 +3,7 @@ id: 2025-08-20
title: Daily — 2025-08-20
type: daily
created: 2025-08-20T13:54:13Z
modified: 2025-08-20T19:10:43Z
modified: 2025-08-22T15:06:52Z
tags: [daily]
---

View File

@ -0,0 +1,24 @@
---
id: 2025-08-22
title: Daily — 2025-08-22
type: daily
created: 2025-08-22T15:41:02Z
modified: 2025-08-22T16:03:15Z
tags: [daily]
---
# Daily — 2025-08-22
## Quick capture
- Got to work kind of late. Feeling sicky af. Probably no
darts tonight with PSM.
- Working on moving all my notes into zettels
- Lunch w AS today
- Okay, I'm starting to feel a little better (12:03 PM)
## Tasks
- [ ]
## Notes

View File

@ -0,0 +1,29 @@
---
id: 2025-08-25
title: Daily — 2025-08-25
type: daily
created: 2025-08-25T16:02:00Z
modified: 2025-08-26T13:56:53Z
tags: [daily]
---
# Daily — 2025-08-25
## Quick capture
- Got in a little later today, but the bike is here! Gotta
text Alisa about it.
- I have 2 classes today (grumble grumble grumble).
- ME 3100 ERLM is Monday and Wednesday at 1PM-2:15PM
- NUCE 2101 - Nuclear Core Dynamics is tonight
6PM-8:30PM, like usual
- I also enrolled in my health insurance.
## Tasks
- [ ]
## Notes

View File

@ -0,0 +1,19 @@
---
id: 2025-08-26
title: Daily — 2025-08-26
type: daily
created: 2025-08-26T15:05:38Z
modified: 2025-08-26T15:06:58Z
tags: [daily]
---
# Daily — 2025-08-26
## Quick Capture
- I started this morning doing some email
- Now I'm hanging out in Starbucks doing some work for the quad chart n'at
- First, what the fuck is a quad chart??
## Notes

View File

@ -0,0 +1,38 @@
---
id: HUB-20250822135454
title: Formal Methods
type: hub
created: 2025-08-22T17:54:54Z
modified: 2025-08-22T17:57:39Z
tags: [hub]
aliases: []
---
# Formal Methods
> The purpose of this hub is to collect, at a high level, everything related to formal methods (which is a lot!!!)
## Related hubs
- [[formal-methods-in-machine-learning]]
## Related Zettels
### Topics
[[Abstract Interpretation for Formal Methods]]
[[Satisfiability Modulo Theory]]
[[Mixed Integer Linear Programming]]
[[Complete vs. Incomplete Formal Methods]]
### Tools
[[Strix]]
## Sources / Literature
- [[a-review-of-formal-methods-applied-to-machine-learning]]
## Open questions / Next actions
- [ ]

View File

@ -0,0 +1,10 @@
---
id: 20250822134355
title: Formal Methods in Machine Learning
type: permanent
created: 2025-08-22T17:43:55Z
modified: 2025-08-22T17:43:55Z
tags: []
---
# Formal Methods in Machine Learning

View File

@ -0,0 +1,15 @@
---
id: 20250822135802
title: Abstract Interpretation for Formal Methods
type: permanent
created: 2025-08-22T17:58:02Z
modified: 2025-08-25T16:01:46Z
tags: []
---
# Abstract Interpretation for Formal Methods
In a formal methods context, abstract interpretation replaces some aspect of a system with a sound approximation (typically an over-approximation) so that reasoning about it becomes easier.
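A minimal sketch of the idea using the interval abstract domain (a toy example of my own, not tied to any particular tool): each variable's set of possible concrete values is over-approximated by a single interval, and operations are evaluated on intervals instead of on every concrete value.

```python
# Toy interval-domain abstract interpretation (hypothetical illustration).
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Sound over-approximation of pointwise addition.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Abstract the input instead of enumerating every concrete value it could take.
x = Interval(-1.0, 2.0)           # x is only known to lie in [-1, 2]
y = x * x + Interval(3.0, 3.0)    # abstractly evaluate y = x*x + 3

print(y)  # Interval(lo=1.0, hi=7.0); the true range is [3, 7], so the result is imprecise but sound
```

Every concrete value of `y` is guaranteed to fall inside the computed interval; the price of the easier reasoning is the extra slack.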

View File

@ -0,0 +1,58 @@
---
id: 20250826111220
title: Quad Charts
type: permanent
created: 2025-08-26T15:12:20Z
modified: 2025-08-26T15:20:50Z
tags: []
---
# Quad Charts
Quad charts are the briefest form of research proposal: a way of conveying a research idea on a single slide.
The audience for a quad chart is usually non-technical, and the content should reflect this.
A quad chart consists of four quadrants:
## Quadrant 1 (Upper Left): Goals and Objectives
This quadrant states the goal of the research in a single sentence and lists the expected outcomes underneath it. Often you can start with "The goal of this research is to..." Then the outcomes follow: "If this research is successful, we should be able to do the following:
1. ...
2. ...
3. ...
(4.?)"
It is critical that no jargon is used here, even though it is very tempting.
## Quadrant 2 (Bottom Left): Research Approach
This quadrant answers the question: "What's new in your
approach?"
## Quadrant 3 (Top Right): Pretty Picture
This is a break for the reader, and likely the first thing
they'll actually look at on the chart. This figure should
illustrate a key idea about the work that you want the
reader to take away.
No personal information should be included.
## Quadrant 4 (Bottom Right): Impact and Contact Info
This quadrant answers the question of "Who cares? What
difference will it make if you're successful?"
This quadrant should also contain contact info of the
principal investigator, and if applicable, their advisor.

View File

@ -0,0 +1,97 @@
---
id: LIT-20250822120822
title: A Review of Formal Methods applied to Machine Learning
type: literature
created: 2025-08-22T16:08:22Z
modified: 2025-08-22T16:11:25Z
citekey:
---
# A Review of Formal Methods applied to Machine Learning
**Category:** This is a review paper.
**Context:** This paper tries to keep up with where formal
methods are for machine learning based systems. It is not
necessarily a controls paper, but a broader machine learning
paper in general.
**Correctness:** Really well written on the first pass. Easy
to understand and things seem well cited.
**Contributions:** Great citations showing links to
different papers and also provides nice spots for forward
research. Talks about how verification needs to be done
along the whole pipeline: from data prep to training to
implementation. There needs to be more work on proving
things about model behavior, but in general this review has
a positive outlook on the field.
**Clarity:** Very well written, easy to understand. Except,
what is abstractification of a network?
## Second Pass
This review gives an overview of formal methods applied to machine learning. Formal methods have been used in ML to test robustness to various perturbations of inputs. They start by describing several types of formal methods, including deductive verification, design by refinement, proof assistants, model checking, and semantic static analysis, finishing with abstract interpretation, a flavor of semantic static analysis. The last one has been used the most in formal methods for ML.
A large part of the paper focuses on formal methods for
neural network perturbation robustness. They split up the
methods into two types: complete and incomplete formal
methods. Complete formal methods are *sound* and *complete*.
This means they do not have false positives or false
negatives. These methods generally do not apply well to large networks (scaling beyond the hundreds gets rough), but they give the highest level of assurance. Here, there are two more subcategories: SMT-based solvers and MILP-based solvers.
SMT solvers are Satisfiability Modulo Theory solvers. These solvers take a set of constraints and check whether all of the constraints can be satisfied at once. Safety conditions are encoded as the *negation* of the safety constraint. That way, if the safety condition can be violated, the SMT solver will pick it up as a counterexample.
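A minimal sketch of this encoding using Z3's Python bindings (a toy system and property of my own, not taken from the review): the system constraints are asserted together with the negation of the safety property, and a `sat` answer comes with a concrete counterexample.

```python
# Toy SMT safety check via negation (hypothetical example using z3-solver).
from z3 import Real, Solver, And, Not, sat

x, y = Real("x"), Real("y")

# "System" constraints: admissible inputs and a simple input-output relation.
system = And(x >= -1, x <= 1, y == 2 * x + 1)

# Safety property we would like to hold for every admissible input.
safety = y >= 0

s = Solver()
s.add(system, Not(safety))   # ask whether the property can be violated

if s.check() == sat:
    print("counterexample:", s.model())   # e.g. x = -1 gives y = -1, violating y >= 0
else:
    print("property holds for all admissible inputs")
```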
MILP-based solvers are Mixed Integer Linear Programming solvers. MILPs use linear programming in which certain variables are constrained to be integers. Instead of an explicit satisfiability condition on safety, MILPs can use a minimizing objective to generate counterexamples. For example, a MILP safety condition might be formalized as $$\min \mathbf{x}_n$$ where a negative value of $\mathbf{x}_n$ is a valid counterexample.
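A minimal sketch of this style of check with the PuLP modeling library (a toy one-ReLU "network" of my own, not the paper's formulation): pre-activation bounds `L`, `U` supply the constants for the ReLU encoding, a binary variable selects the active phase, and a negative optimal output is a counterexample to the property output ≥ 0.

```python
# Toy MILP counterexample search for a one-ReLU "network" (hypothetical example using PuLP).
import pulp

# Pre-activation bounds for a = x - 1 when x is restricted to [0, 2].
L, U = -1.0, 1.0

prob = pulp.LpProblem("relu_safety_check", pulp.LpMinimize)

x = pulp.LpVariable("x", lowBound=0.0, upBound=2.0)   # network input
a = pulp.LpVariable("a", lowBound=L, upBound=U)       # pre-activation
y = pulp.LpVariable("y", lowBound=0.0)                # ReLU output
d = pulp.LpVariable("d", cat="Binary")                # which ReLU phase is active
z = pulp.LpVariable("z")                              # final "network" output

prob += a == x - 1                                    # affine layer
prob += y >= a                                        # ReLU encoding with bounds:
prob += y <= U * d                                    #   d = 0  ->  y = 0
prob += y <= a - L * (1 - d)                          #   d = 1  ->  y = a
prob += z == y - 0.25                                 # output layer

prob += z                                             # objective: minimize the output

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("min output:", pulp.value(z), "at x =", pulp.value(x))
# A negative minimum is a concrete counterexample to the property "output >= 0".
```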
There are also 'incomplete' formal methods. These methods are sound, meaning they will not produce false negatives, but they are not complete, meaning they may produce false positives. They scale better, but the false positives can be pretty annoying.
One of the main incomplete methods is abstract interpretation. Things get very weird very fast: this is all about zonotopes and using over-approximations of sets to say things about stability. There is also cell splitting and merging going on that seems like a lot to grasp. But apparently it works.
Next, they spend some time talking about other machine learning methods and the formal methods that have been applied to them. Of particular interest to me is the application of formal methods to support vector machines and decision trees. There are some useful methods for checking robustness to adversarial inputs and for finding the nearest adversarial misclassification.
## Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,77 @@
# First Pass
**Category:**
This is a review paper.
**Context:**
This paper tries to keep up with where formal methods are for machine
learning based systems. It is not necessarily a controls paper, but a broader
machine learning paper in general.
**Correctness:**
Really well written on the first pass. Easy to understand and things seem
well cited.
**Contributions:**
Great citations showing links to different papers and also provides nice spots
for forward research. Talks about how verification needs to be done along the
whole pipeline: from data prep to training to implementation. There needs to
be more work on proving things about model behavior, but in general this
review has a positive outlook on the field.
**Clarity:**
Very well written, easy to understand. Except, what is abstractification of a
network?
# Second Pass
This review gives an overview of formal methods applied to machine learning.
Formal methods have been used in ML to test robustness to various perturbations of inputs. They start by describing several types of formal methods, including deductive verification, design by refinement, proof assistants, model checking, and semantic static analysis, finishing with abstract interpretation, a flavor of semantic static analysis. The last one has been used the most in formal methods for ML.
A large part of the paper focuses on formal methods for neural network
perturbation robustness. They split up the methods into two types: complete
and incomplete formal methods. Complete formal methods are *sound* and
*complete*. This means they do not have false positives or false negatives.
These methods generally do not apply well to large networks (scaling beyond the hundreds gets rough), but they give the highest level of assurance. Here, there are two more subcategories: SMT-based solvers and MILP-based solvers.
SMT solvers are Satisfiability Modulo Theory solvers. These solvers take a set of constraints and check whether all of the constraints can be satisfied at once. Safety conditions are encoded as the *negation* of the safety constraint. That way, if the safety condition can be violated, the SMT solver will pick it up as a counterexample.
MILP-based solvers are Mixed Integer Linear Programming solvers. MILPs use linear programming in which certain variables are constrained to be integers. Instead of an explicit satisfiability condition on safety, MILPs can use a minimizing objective to generate counterexamples. For example, a MILP safety condition might be formalized as $$\min \mathbf{x}_n$$ where a negative value of $\mathbf{x}_n$ is a valid counterexample.
There are also 'incomplete' formal methods. These methods are sound, meaning they will not produce false negatives, but they are not complete, meaning they may produce false positives. They scale better, but the false positives can be pretty annoying.
One of the main incomplete methods is abstract interpretation. Things get very weird very fast: this is all about zonotopes and using over-approximations of sets to say things about stability. There is also cell splitting and merging going on that seems like a lot to grasp. But apparently it works.
Next, they spend some time talking about other machine learning methods and the formal methods that have been applied to them. Of particular interest to me is the application of formal methods to support vector machines and decision trees. There are some useful methods for checking robustness to adversarial inputs and for finding the nearest adversarial misclassification.
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,32 @@
# First Pass
**Category:**
This paper is a literature review
**Context:**
Mukul says that there are a whole bunch of research thrusts out there about neural-network-based controllers, and that there needs to be a better way to sort out which research is which.
**Correctness:**
Looks good to me I suppose.
**Contributions:**
Here's the most important figure in the whole thing:
![[Pasted image 20250721095146.png]]
**Clarity:**
Well written.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,59 @@
---
alias: tee-hee-hee
---
# An autonomous control framework for advanced reactors notes
## First Pass
**Category:**
This is a vision-like paper.
**Context:**
Dr. Wood is a professor of nuclear engineering at the University of Tennessee.
He started working at UT after working at ORNL for 20+ years where he was
a senior researcher. He works on common cause failures.
**Correctness:**
There are not many citations, and a lot of the paper seems like it is off-the-cuff.
This was an invited article though in the *Nuclear Engineering and Technology*
journal. He's also got quite the resume.
**Contributions:**
Dr. Wood gives a good overview of where we are (automated control), and
where we need to go (autonomous control). Comparisons are made to other
industries, and how they've addressed these challenges. Clarifications are made
with respect to how the nuclear industry is unique.
**Clarity:**
Well written and clear, but a bit wordy at times. Examples are used but
sometimes it just feels like a mouthful.
## Second Pass
**What is the main thrust?**
Wood talks a lot about what it means for a system to be autonomous. He
actually starts with an etymological argument: "*automatos* means self-acting, whereas *autonomos* means independent." The point of an autonomous system
is that it must be able to make intelligent decisions that a human would
otherwise make. Knowing how to handle degraded or faulty sensors or
actuators, switching controller trains, and knowing when scramming is
necessary are all decisions that an autonomous system would make, that
a simple automated system could not. The key is in the decision making.
**What is the supporting evidence?**
Dr. Wood draws many comparisons to NASA's control systems for the Mars rovers. These systems use two tiers of controllers: a decision layer and a functionality layer. The functionality layer does what you would expect: it manages the physical functioning. The decision layer is different: it decides on controller modes and works with operational-level goals.
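A minimal sketch of that two-tier split (my own illustration, not Wood's or NASA's implementation): the decision layer reasons about goals and health information to pick an operating mode, while the functionality layer simply executes whichever controller the chosen mode calls for.

```python
# Toy two-tier autonomy architecture (hypothetical illustration).

def nominal_controller(state):
    # Ordinary feedback law used when everything is healthy.
    return -0.5 * state["error"]

def degraded_controller(state):
    # Conservative fallback used when a sensor is suspect.
    return -0.1 * state["error"]

def decision_layer(state):
    """Chooses an operating mode from operational goals and health information."""
    return "degraded" if state["sensor_fault"] else "nominal"

def functionality_layer(state, mode):
    """Executes the controller selected by the decision layer."""
    controllers = {"nominal": nominal_controller, "degraded": degraded_controller}
    return controllers[mode](state)

state = {"error": 0.8, "sensor_fault": True}
mode = decision_layer(state)
print(mode, functionality_layer(state, mode))   # mode and the corresponding control action
```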
**What are the key findings?**
The biggest finding is that an autonomous control system for a nuclear power plant is possible, but by far the biggest roadblock is a high-assurance decision-making system.
## Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,36 @@
# First Pass
**Category:**
This is a paper following a workshop. It is kind of a big picture / idea
type paper.
**Context:**
This paper was written after a workshop that took place to discuss recent NRC regulations. They talk about how different industries categorize the level of autonomy in systems, and they define corresponding levels of autonomy for reactor control.
**Correctness:**
Vivek Agarwal chaired ANS in Chicago this past year. He's the real deal.
**Contributions:**
They contribute a first thought on how to go about creating milestones for
autonomous systems, and give us a language to use when describing what level
of autonomy a system is gunning for.
**Clarity:**
Clearly written. Very nice!
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,32 @@
# First Pass
**Category:**
Systematic Review
**Context:**
This paper is from the civil engineering school at the University of Leeds.
They are doing an overview of economics and finance research of small
modular reactors.
**Correctness:**
Cannot say; it is a review!
**Contributions:**
They do a good breakdown of the research that has been done ('what we know') and what still needs to be done ('what we do not know'). They point out a big gap: we do not have a lot of research on operations and maintenance costs.
**Clarity:**
Nicely written. Easy to understand.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,33 @@
# First Pass
**Category:** This paper is an experimental paper.
**Context:** The context of this paper is using a couple of different techniques to synthesize control programs using different formal methods tools.
**Correctness:** I trust Max's integrity. Some of the citations were funky and
the intro was fluffy though.
**Contributions:**
This paper shows that LTL and other formal methods still struggle to generate controllers for significantly challenging systems. The high-bay warehouse example used for this paper had to be simplified. Then, they introduce an SMT-solver-based method to try to generate code.
**Clarity:**
The intro and conclusion do not provide very strong clarity on what they
actually did other than "high assurance is high difficulty", and that we're not
really there yet on HACPS.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,42 @@
# First Pass
**Category:**
This is a methods paper.
**Context:**
This paper proposes a way of using mixed integer linear programming (MILP)
to evaluate properties of neural networks.
**Correctness:**
Formal
**Contributions:**
They do nifty things with bounds tightening and presolving that make their solver very fast compared to the state of the art, such as Reluplex. They also talk about stable and unstable neurons.
**Clarity:**
They have a really good explanation of what a MILP problem is and how one
might encode a neural network as one.
# Second Pass
**What is the main thrust?**
The main thrust is their new MILP solving method for neural networks. With their method, a network's neurons can be analyzed to prove whether or not the network is robust to input perturbations. This is especially important for classifiers, which need to know if there are sneaky nonlinearities that could be harmful in a built system (like a glitch). This combination of bounds tightening and MILP usage makes their solver much faster and therefore more capable of handling large networks.
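A minimal sketch of the kind of bound computation involved (my own toy interval propagation, not the paper's tightening procedure): pushing the input box through an affine layer gives pre-activation bounds `l` and `u`, and tighter bounds mean smaller constants in the MILP's ReLU encoding and more neurons proven stable.

```python
# Toy interval propagation through one affine layer (hypothetical illustration).
import numpy as np

W = np.array([[1.0, -2.0],
              [0.5,  1.0]])      # layer weights
b = np.array([0.1, -0.3])        # layer biases

x_lo = np.array([-1.0, 0.0])     # known input bounds (a box around the nominal input)
x_hi = np.array([ 1.0, 0.5])

W_pos = np.clip(W, 0.0, None)    # positive part of the weights
W_neg = np.clip(W, None, 0.0)    # negative part of the weights

# Each output picks its worst-case corner of the input box.
l = W_pos @ x_lo + W_neg @ x_hi + b
u = W_pos @ x_hi + W_neg @ x_lo + b

print("l =", l, "u =", u)
# Tighter [l, u] shrinks the big-M style constants in the ReLU encoding; if l >= 0
# or u <= 0, that neuron is "stable" and needs no binary variable at all.
```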
**What is the supporting evidence?**
They have a whole bunch of experimental results.
**What are the key findings?**
MILPs plus bound tightening are very effective!
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,31 @@
# First Pass
**Category:**
This is a weird one. This paper is essentially just a 'state-of-the-FRET' paper
**Context:**
FRET comes from NASA and is a tool for lining up requirements in a structured temporal language.
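For context, the temporal-logic target that such structured requirements get mapped onto looks like the following response pattern (a generic LTL example of my own, not FRET's own notation): $$\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})$$ i.e., at every point in time, a request must eventually be followed by a grant.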
**Correctness:**
Formal. But they don't really go into details.
**Contributions:**
Really, I don't think they contribute much here other than an explanation of what's going on.
**Clarity:**
Easy to read.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,31 @@
# First Pass
**Category:**
Methods paper
**Context:**
Using NNs with LiDAR to make safety and reliability claims via formal methods.
**Correctness:**
Assumptions seem good, other than anticipating LiDAR as a
continuous data stream.
**Contributions:**
1. Framework for formally proving safety properties of autonomous robots using LiDAR and NNs
2. Something something imaging-adapted partitions. A 'notion'
**Clarity:**
Very well written and clear.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,50 @@
---
creation date: 2024-08-08
modification date: Thursday 8th August 2024 09:57:16
tags:
- Research
- "#Survey"
---
# Preamble
### Notable Links
[[ARCADE Implementation at the University of Pittsburgh]]
[[1. A shortlist of ARCADE Experiments]]
## What are we doing?
Collecting and organizing papers about HiTL testing. Making notes about 'em. Stuff like that.
## Why are we doing it?
08-08-2024: I'm writing a paper for [[2. (SR-CIST) Workshop on Security and Resiliency of Critical Infrastructure and Space Technologies]] and really need to know about hardware in the loop testing.
## Who cares?
Me. Andrew, probably. Dan, a lot.
# Papers
## Summary Table
```dataview
table without id
link(file.path, title) as "Title", year as "Year", authors as "Authors", readstatus as "Read Status"
from outgoing([[]])
where readstatus or readstatus=False
sort readstatus desc, year desc
```
## Papers about ARCADE
[[maccaroneADVANCEDREACTORCYBER]]
[[chenFullscopeHighfidelitySimulatorbased2024]]
## Hardware In The Loop Papers
[[mihalicHardwareintheLoopSimulationsHistorical2022]]
[[bullockHardwareintheloopSimulation2004]]
[[puysHardwareInTheLoopLabsSCADA2021]]
[[choiRealTimeHardwareintheLoopHIL2021]]
## Cybersecurity of OT Papers
[[sunSoKAttacksIndustrial2021]]
[[mclaughlinControllerawareFalseData2014]]
[[chekoleEnforcingMemorySafety2018]]
[[kushnerRealStoryStuxnet2013]]
[[CyberAttackGerman2015]]
[[sguegliaFederalOfficialsInvestigating2023]]
## Formal Methods and PLCs
[[fernandezadiegoApplyingModelChecking2015]]
[[lopez-miguelPLCverifStatusFormal2022]]
[[kleinFormallyVerifiedSoftware2018]] -- Littlebird paper
## Kry10
[[Kry10TechnicalOverview]]

View File

@ -0,0 +1,36 @@
# First Pass
**Category:** Experimental Results
**Context:** They used an F1/10 model of an autonomous car to test out a simplex safety structure on a neural-network-based controller. They try several different neural net types. Their trick is using real-time reachability to tell when to switch between an optimal controller and the simplex guard.
**Correctness:**
They seem to do things pretty well and by the book. All of their explanations
make sense and they do a good job citing sources. They do punt when it comes
to talking about the formal verification of the switching mechanism, but they
do make note that's future work.
**Contributions:**
They show how a simplex system can work and the difficulties of the *sim2real* transition for machine learning controllers.
**Clarity:**
Really nicely written.
# Second Pass
**What is the main thrust?**
They use a simplex-style controller setup with real-time reachability to know when to use an optimal, ML-based controller versus a safety-oriented controller. The reachability computation runs in real time, and they demonstrate how different ML models line up against one another.
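A minimal sketch of that simplex switching logic (my own illustration; the reachability computation is a crude stand-in, not the paper's algorithm):

```python
# Toy simplex-style switching between an ML controller and a safety controller
# (hypothetical illustration; the reach-set check is a placeholder).

def ml_controller(state):
    return state["planner_action"]          # high-performance but unverified

def safety_controller(state):
    return {"steer": 0.0, "throttle": 0.0}  # conservative, trusted fallback

def reachable_set_is_safe(state, action, horizon=0.5):
    # Stand-in for real-time reachability: over-approximate how far the car can
    # drift laterally within `horizon` seconds under `action`, then compare
    # against the track bounds.
    worst_case_offset = abs(state["lateral_offset"]) + horizon * state["speed"] * abs(action["steer"])
    return worst_case_offset < state["track_half_width"]

def simplex_step(state):
    proposed = ml_controller(state)
    if reachable_set_is_safe(state, proposed):
        return proposed                     # ML controller stays in charge
    return safety_controller(state)         # otherwise fall back to the safe baseline

state = {"planner_action": {"steer": 0.4, "throttle": 0.6},
         "lateral_offset": 0.3, "speed": 2.0, "track_half_width": 0.6}
print(simplex_step(state))                  # the unsafe proposal gets overridden here
```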
**What is the supporting evidence?**
They ran a whole bunch of experiments and published all of the results, with their main metric being mean ML usage.
**What are the key findings?**
The biggest findings are that when obstacles are introduced (or other generally adversarial behavior), the amount that the safety controller kicks in goes up by a lot. Simulations can use roughly 50% or more of the ML controller depending on the speed of the car, but on the real track, ML usage goes down below 20%. They also note their real track is much smaller than the simulation track.
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,29 @@
# First Pass
**Category:**
Method
**Context:**
Expanding SMT solvers to work with ReLU functions. They then applied this technology to a prototype DNN for an ACAS Xu (airborne collision avoidance system for unmanned aircraft) system.
**Correctness:**
Seems good to me. Need to do deeper dive, can't really say correct or not at this point.
**Contributions:**
Reluplex: an SMT algorithm that incorporates functionality for the ReLU functions commonly found in DNNs. This drastically expands the types of networks that can be verified at scale.
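For context, a plain SMT encoding of a ReLU has to state the piecewise definition directly, which forces case splits per neuron; Reluplex's contribution is handling these constraints lazily inside a simplex-style procedure. A toy sketch of the naive encoding with Z3 (my own example, not the Reluplex algorithm):

```python
# Toy: a single ReLU constraint stated directly for an SMT solver
# (hypothetical example using z3-solver; this is the naive encoding).
from z3 import Real, If, Solver, sat

x, y = Real("x"), Real("y")

s = Solver()
s.add(y == If(x >= 0, x, 0))   # y = ReLU(x): a disjunction the solver must case-split on
s.add(x >= -2, x <= 2)
s.add(y > 1.5)                 # query: can the output exceed 1.5?

if s.check() == sat:
    print(s.model())           # e.g. x = 2, y = 2
```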
**Clarity:**
Well written and clear.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,34 @@
# First Pass
**Category:**
This is an economics research paper looking at how the costs of microreactors will change with the introduction of remote operation centers and the proliferation of digital twins (DTs).
**Context:**
These are INL Researchers doing an LDRD.
**Correctness:**
A lot of dubious statements. I'm suspicious of their hailing of the digital twin as a savior, and they don't really address a lot of the cybersecurity concerns of remote operation.
**Contributions:**
Someone had to state the obvious: fewer control rooms will be less expensive to operate.
**Clarity:**
Mediocre writing.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,35 @@
# First Pass
**Category:**
Simulation paper about using a recovery controller in combination with a learned nominal controller. The boundary used to switch between the two is also learned.
**Context:**
Drones
**Correctness:**
Not very. They do really well in the intro and methodology, but shit hits the fan
when it comes to the results. They don't explain things well, and also don't
really establish why their learned boundary between nominal and recovery
controllers should be successful.
**Contributions:**
The biggest contribution these guys make is demonstrating the feasibility of their reinforcement-learned switching between the recovery controller and the nominal controller.
**Clarity:**
Well written until the whole thing came apart at the end.
# Second Pass
I read this a second time but I don't think it's worth it.
Their main contribution is trying to use RL to learn when to switch to a
recovery controller.
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,29 @@
# First Pass
**Category:**
Method
**Context:**
Making RL agents safe by 'shielding': baking specification checking into training by providing feedback to the network when an output (even while the system is live) would produce an unsafe control.
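A minimal sketch of the shield pattern (my own illustration of the general idea, not the paper's construction): a monitor derived from the specification vetoes unsafe actions, substitutes a safe one, and the override can be fed back to the learner as a penalty.

```python
# Toy action shield for an RL agent (hypothetical illustration).
import random

ALL_ACTIONS = ["accelerate", "coast", "brake"]

def spec_allows(state, action):
    # Stand-in for the specification monitor (e.g. an automaton over traces).
    return not (state["distance_to_obstacle"] < 5.0 and action == "accelerate")

def shielded_step(agent_policy, state):
    proposed = agent_policy(state)
    if spec_allows(state, proposed):
        return proposed, 0.0                          # no intervention, no penalty
    safe_action = next(a for a in ALL_ACTIONS if spec_allows(state, a))
    return safe_action, -1.0                          # override, plus feedback for training

policy = lambda state: random.choice(ALL_ACTIONS)     # placeholder for the learned policy
action, penalty = shielded_step(policy, {"distance_to_obstacle": 3.0})
print(action, penalty)                                # never "accelerate" near the obstacle
```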
**Correctness:**
Cited 1000+ times. I think it's correct.
**Contributions:**
Shielding, a way to use RL in high assurance systems.
**Clarity:**
Well written and easy to understand.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,29 @@
# First Pass
**Category:**
Framework / Modelling
**Context:**
Showing that CPS can't be modeled like linear systems.
**Correctness:**
Who knows.
**Contributions:**
A big focus on **deterministic** models
**Clarity:**
Some grammar mistakes and significant jabbering.
# Second Pass
**What is the main thrust?**
**What is the supporting evidence?**
**What are the key findings?**
# Third Pass
**Recreation Notes:**
**Hidden Findings:**
**Weak Points? Strong Points?**

View File

@ -0,0 +1,3 @@
# tee hee
there is something here