Compare commits
No commits in common. "b69dc3c7feffd8c2435294b91e9865fe51b2ac39" and "751a25780fc6861313a9284ddb9c488554f5aa00" have entirely different histories.
b69dc3c7fe
751a25780f
% GOAL PARAGRAPH
This research develops a methodology for creating autonomous control systems
with event-driven control laws that guarantee safe and correct behavior.

% INTRODUCTORY PARAGRAPH Hook
Nuclear power relies on extensively trained operators who follow detailed
written procedures to manage reactor control. Operators interpret plant
conditions and make critical decisions about when to switch between control
objectives.
% Gap
This reliance on human operators creates an economic challenge for
next-generation nuclear power plants. Small modular reactors face per-megawatt
staffing costs that significantly exceed those of conventional plants,
threatening their economic viability. These constraints demand autonomous
control systems that safely manage complex operational sequences without
constant supervision while providing assurance equal to or exceeding that of
human-operated systems.

% APPROACH PARAGRAPH Solution
We will combine formal methods from computer science with control theory to
build hybrid control systems that are correct by construction.
% Rationale
Hybrid systems use discrete logic to switch between continuous control modes,
mirroring how operators change control strategies. Existing formal methods
generate provably correct switching logic but cannot handle the continuous
dynamics that occur during transitions. Traditional control theory verifies
continuous behavior but lacks tools for proving the correctness of discrete
switching decisions.
% Hypothesis and Technical Approach
A three-stage methodology will bridge this gap. First, we translate written
operating procedures into temporal logic specifications using NASA's Formal
Requirements Elicitation Tool (FRET). FRET structures requirements into scope,
condition, component, timing, and response elements, enabling realizability
checking that identifies conflicts and ambiguities before implementation.
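To make the translation stage concrete, consider an illustrative requirement
(a hypothetical example, not taken from an actual procedure): ``In startup
mode, when the source range count rate exceeds its interlock setpoint, the
control system shall, within 10 seconds, halt rod withdrawal.'' FRET's
structured fields (scope: startup mode; condition: count rate above setpoint;
component: control system; timing: within 10\,s; response: halt rod
withdrawal) map to a metric temporal logic formula of roughly the form
\[
\mathbf{G}\Big(\big(\mathit{startup} \wedge (\mathit{countRate} >
\mathit{setpoint})\big) \rightarrow
\mathbf{F}_{\le 10\,\mathrm{s}}\ \mathit{haltRodWithdrawal}\Big),
\]
where \(\mathbf{G}\) and \(\mathbf{F}_{\le 10\,\mathrm{s}}\) denote ``always''
and time-bounded ``eventually.'' Realizability checking then asks whether some
controller can satisfy all such formulas simultaneously under every admissible
environment behavior.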
Second, we synthesize discrete mode-switching logic using reactive synthesis to
generate deterministic automata that are provably correct by construction.
Third, we develop continuous controllers for each discrete mode using standard
control theory and reachability analysis. We classify continuous modes by their
transition objectives, then employ assume-guarantee contracts and barrier
certificates to prove that mode transitions occur safely and as the
deterministic automata specify. Local verification of continuous modes becomes
possible without global trajectory analysis across the entire hybrid system. An
Emerson Ovation control system will demonstrate this methodology.
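For the barrier-certificate step, one standard formulation we expect to adapt
is the following (a sketch; the exact conditions will depend on the mode
dynamics). For mode dynamics \(\dot{x} = f(x)\) with initial set \(X_0\) and
unsafe set \(X_u\), a differentiable function \(B\) certifies safety of the
mode if
\[
B(x) \le 0 \;\;\forall x \in X_0, \qquad
B(x) > 0 \;\;\forall x \in X_u, \qquad
\frac{\partial B}{\partial x}(x)\,f(x) \le 0 \;\;\text{whenever } B(x) = 0,
\]
since no trajectory starting in \(X_0\) can then cross the zero level set of
\(B\) into \(X_u\).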
% Pay-off
This approach demonstrates that autonomous control can manage complex nuclear
power operations while maintaining safety guarantees, directly addressing the
economic constraints threatening small modular reactor viability.

% OUTCOMES PARAGRAPHS
If this research is successful, we will be able to do the following:
\item \textit{Synthesize written procedures into verified control logic.}
% Strategy
We will develop a methodology for converting written operating procedures
into formal specifications. Reactive synthesis tools will then generate
discrete control logic from these specifications.
% Outcome
Control engineers will generate mode-switching controllers from regulatory
procedures with minimal formal methods expertise, reducing barriers to
high-assurance control systems.

% OUTCOME 2 Title
\item \textit{Verify continuous control behavior across mode transitions.}
% Strategy
Reachability analysis will ensure that continuous control modes satisfy
discrete transition requirements.
% Outcome
Engineers will design continuous controllers using standard practices while
maintaining formal correctness guarantees, proving that mode transitions occur
safely and at the correct times.
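As a sketch of the verification obligation behind this outcome (the notation
is ours, for illustration): for a mode with dynamics \(\dot{x} = f(x, u)\),
admissible inputs \(u \in U\), and entry set \(X_0\), reachability analysis
overapproximates
\[
\mathrm{Reach}_{[0,T]}(X_0) = \big\{\, x(t) : x(0) \in X_0,\ \dot{x} = f(x,u),\
u(\cdot) \in U,\ t \in [0,T] \,\big\},
\]
and the mode passes if \(\mathrm{Reach}_{[0,T]}(X_0)\) avoids the unsafe set
while the states reachable at time \(T\) lie inside the guard set that triggers
the next discrete transition.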
% OUTCOME 3 Title
\item \textit{Demonstrate autonomous reactor startup control with safety
guarantees.}
% Strategy
A small modular reactor simulation using industry-standard control hardware
will implement this methodology.
% Outcome
Control engineers will implement high-assurance autonomous controls on
industrial platforms they already use, enabling autonomy without retraining
costs or new equipment development.

\end{enumerate}

\section{Goals and Outcomes}

% GOAL PARAGRAPH
This research develops a methodology for creating autonomous hybrid control
systems with mathematical guarantees of safe and correct behavior.

% INTRODUCTORY PARAGRAPH Hook
Nuclear power plants require the highest levels of control system reliability,
as failures can result in significant economic losses, service interruptions,
or radiological release.
% Known information
Currently, nuclear plant operations rely on extensively trained human operators
who follow detailed written procedures and strict regulatory requirements to
manage reactor control. These operators make critical decisions about when to
switch between different control modes based on their interpretation of plant
conditions and procedural guidance.
% Gap
This reliance on human operators prevents autonomous control capabilities and
creates a fundamental economic challenge for next-generation reactor designs.
Small modular reactors, in particular, face per-megawatt staffing costs far
exceeding those of conventional plants, threatening their economic viability.

% Critical Need
The nuclear industry needs autonomous control systems that safely manage complex
operational sequences with the same assurance as human-operated systems, but
without constant human supervision.
% APPROACH PARAGRAPH Solution
We will combine formal methods with control theory to build hybrid control
systems that are correct by construction.
% Rationale
Hybrid systems use discrete logic to switch between continuous control modes,
mirroring how operators change control strategies. Existing formal methods
generate provably correct switching logic from written requirements but cannot
handle the continuous dynamics that occur during transitions between modes.
Traditional control theory verifies continuous behavior but lacks tools for
proving the correctness of discrete switching decisions. This gap between
discrete and continuous verification prevents end-to-end correctness guarantees.
% Hypothesis
Our approach closes this gap by synthesizing discrete mode transitions directly
from written operating procedures and verifying continuous behavior between
transitions. If existing procedures can be formalized into logical
specifications and continuous dynamics verified against transition requirements,
then autonomous controllers can be built that are provably free from design
defects.
% Pay-off
This approach will enable autonomous control in nuclear power plants while
maintaining the high safety standards required by the industry.

% Qualifications
This work is conducted within the University of Pittsburgh Cyber Energy Center,
which provides access to industry collaboration and Emerson control hardware,
ensuring that developed solutions align with practical implementation
requirements.

% OUTCOMES PARAGRAPHS
If this research is successful, we will be able to do the following:
% OUTCOME 1 Title
\item \textbf{Translate written procedures into verified control logic.}
% Strategy
We will develop a methodology for converting existing written operating
procedures into formal specifications that can be automatically synthesized
into discrete control logic. This process will use structured intermediate
representations to bridge natural language procedures and mathematical
logic.
% Outcome
Control system engineers will generate verified mode-switching controllers
directly from regulatory procedures without formal methods expertise,
lowering the barrier to high-assurance control systems.

% OUTCOME 2 Title
\item \textbf{Verify continuous control behavior across mode transitions.}
% Strategy
We will establish methods for analyzing continuous control modes to verify
that they satisfy discrete transition requirements. Classical control theory
for linear systems and reachability analysis for nonlinear dynamics will verify
that each continuous mode safely reaches its intended transitions.
% Outcome
Engineers will design continuous controllers using standard practices while
iterating to ensure broader system correctness, proving that mode
transitions occur safely and at the correct times.

% OUTCOME 3 Title
\item \textbf{Demonstrate autonomous reactor startup control with safety
guarantees.}
% Strategy
We will apply this methodology to develop an autonomous controller for
nuclear reactor startup procedures, implementing it on a small modular
reactor simulation using industry-standard control hardware. This
demonstration will prove correctness across multiple coordinated control
modes from cold shutdown through criticality to power operation.
% Outcome
We will demonstrate that autonomous hybrid control can be realized in the
nuclear industry with current equipment, establishing a path toward reduced
operator staffing while maintaining safety.

\end{enumerate}

% IMPACT PARAGRAPH Innovation
These three outcomes---procedure translation, continuous verification, and
hardware demonstration---together establish a complete methodology from
regulatory documents to deployed systems.

\textbf{What is new:} We unify discrete synthesis with continuous verification
to enable end-to-end correctness guarantees for hybrid systems. While formal
methods can verify discrete logic and control theory can verify continuous
dynamics, no existing methodology bridges both with compositional guarantees.
This work establishes that bridge by treating discrete specifications as
contracts that continuous controllers must satisfy, enabling independent
verification of each layer while guaranteeing correct composition.

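Treating a discrete specification as a contract can be sketched as follows
(illustrative notation; the precise contract format is part of the proposed
work). Each mode \(i\) receives an assume-guarantee pair \((A_i, G_i)\):
assuming the automaton hands over the plant state inside \(A_i\), the
continuous controller must guarantee \(G_i\), for instance
\[
x(t_0) \in A_i \;\Longrightarrow\; \exists\, t \in [t_0,\, t_0 + T_i]:\
x(t) \in \mathit{Guard}_{i \to j} \;\wedge\;
x(\tau) \in \mathit{Safe}_i \;\;\forall \tau \in [t_0, t].
\]
Each mode is then verified against its own contract, and composition is
correct because the guarantee of each mode implies the assumption of its
successor.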
% Outcome Impact
If successful, control engineers will create autonomous controllers from
existing procedures with mathematical proof of correct behavior. High-assurance
autonomous control will become practical for safety-critical applications.
% Impact/Pay-off
This capability is essential for the economic viability of next-generation
nuclear power. Small modular reactors offer a promising solution to growing
energy demands, but their success depends on reducing per-megawatt operating
costs through increased autonomy. This research will provide the tools to
achieve that autonomy while maintaining the exceptional safety record the
nuclear industry requires.

The following sections systematically answer the Heilmeier Catechism questions that define this research: Section 2 examines the state of the art, establishing what has been done and what remains impossible with current approaches. Section 3 presents our hybrid control synthesis methodology, demonstrating what is new and why it will succeed where prior work has not. Section 4 defines how we measure success through Technology Readiness Level advancement from analytical concepts to hardware demonstration. Section 5 identifies risks that could prevent success and establishes contingencies. Section 6 addresses who cares and why now, examining the economic imperative driving autonomous nuclear control and the broader impact on safety-critical systems. Section 8 provides the research schedule and deliverables, answering how long this work will take.
\section{State of the Art and Limits of Current Practice}

\textbf{What has been done? What are the limits of current practice?} This
section answers these Heilmeier questions by examining how nuclear reactors
operate today and why current approaches---both human-centered and formal
methods---cannot provide autonomous control with end-to-end correctness
guarantees. We examine reactor operators and their operating procedures,
investigate the fundamental limitations of human-based operation, and review
formal methods approaches that verify discrete logic or continuous dynamics but
not both together. Understanding these limits establishes the verification gap
our work addresses.

\subsection{Current Reactor Procedures and Operation}

Nuclear plant procedures form a hierarchy: normal operating procedures govern
routine operations, abnormal operating procedures handle off-normal conditions,
Emergency Operating Procedures (EOPs) manage design-basis accidents, Severe
Accident Management Guidelines (SAMGs) address beyond-design-basis events, and
Extensive Damage Mitigation Guidelines (EDMGs) cover catastrophic damage
scenarios. These procedures must comply with 10 CFR 50.34(b)(6)(ii), and
NUREG-0899 provides development guidance~\cite{NUREG-0899, 10CFR50.34}. Their
development relies on expert judgment and simulator validation rather than
formal verification. Procedures undergo technical evaluation, simulator
validation testing, and biennial review as part of operator requalification
under 10 CFR 55.59~\cite{10CFR55.59}. Despite this rigor, no mathematical proof
exists that procedures cover all possible plant states, that required actions
can be completed within available timeframes, or that transitions between
procedure sets maintain safety invariants.

\textbf{LIMITATION:} \textit{Procedures lack formal verification of correctness
and completeness.} Current procedure development relies on expert judgment and
adjustment; manual control, where operators directly manipulate the reactor; and
various intermediate modes. In typical pressurized water reactor operation, the
reactor control system automatically maintains a floating average temperature
and compensates for power demand changes through reactivity feedback loops
alone. Safety systems, by contrast, already operate with a high degree of
automation: Reactor Protection Systems trip automatically on safety signals with
millisecond response times, and engineered safety features actuate automatically
on accident signals without requiring operator action.

The division between automated and human-controlled functions reveals the
fundamental challenge of hybrid control. Highly automated systems handle reactor
protection---automatic trips on safety parameters, emergency core cooling
actuation, containment isolation, and basic process
control~\cite{WRPS.Description, gentillon_westinghouse_1999}. Human operators,
however, retain control of strategic decision-making: power level changes,
startup/shutdown sequences, mode transitions, and procedure implementation.

\subsection{Human Factors in Nuclear Accidents}

Procedures lack formal verification despite rigorous development. This
subsection examines the second pillar of current practice: the human operators
who execute these procedures. Procedures define what to do; human operators
determine when and how to apply them, and this reliance introduces fundamental
reliability limitations.

Current-generation nuclear power plants employ over 3,600 active NRC-licensed
reactor operators in the United States~\cite{operator_statistics}. These
operators divide into Reactor Operators (ROs), who manipulate reactor controls,
and Senior Reactor Operators (SROs), who direct plant operations and serve as
shift supervisors~\cite{10CFR55}. Staffing typically requires at least two ROs
and one SRO for current-generation units~\cite{10CFR50.54}. Becoming a reactor
operator requires several years of training.

The persistent role of human error in nuclear safety incidents---despite decades
of improvements in training and procedures---provides the most compelling
motivation for formal automated control with mathematical safety guarantees.
Operators hold legal authority under 10 CFR Part 55 to make critical decisions,
including departing from normal regulations during emergencies. The Three Mile
Island (TMI) accident demonstrated how a combination of personnel error, design
deficiencies, and component failures led to partial meltdown when operators
misread confusing and contradictory indications and shut off the emergency water
system~\cite{Kemeny1979}. The President's Commission on TMI identified a
fundamental ambiguity: placing responsibility for safe power plant operations on
the licensee without formally verifying that operators can fulfill this
responsibility does not guarantee safety. This tension between operational
flexibility and safety assurance remains unresolved: the person responsible for
reactor safety is often the root cause of failures.

Multiple independent analyses converge on a striking statistic: 70--80\% of
nuclear power plant events are attributed to human error, versus approximately
20\% to equipment failures~\cite{WNA2020}. More significantly, the root cause of
all severe accidents at nuclear power plants---Three Mile Island, Chernobyl, and
Fukushima Daiichi---has been identified as poor safety management and safety
culture: primarily human factors~\cite{hogberg_root_2013}. A detailed analysis
of 190 events at Chinese nuclear power plants from
2007--2020~\cite{zhang_analysis_2025} found that 53\% of events involved active
errors, while 92\% were associated with latent errors---organizational and
systemic weaknesses that create conditions for failure.

\textbf{LIMITATION:} \textit{Human factors impose fundamental reliability limits
that training alone cannot overcome.} The persistent contribution of human error
despite four decades of improvements demonstrates that these limitations are
fundamental to human-driven control, not remediable defects.

\subsection{Formal Methods}

Procedures lack formal verification, and human operators introduce persistent
reliability issues. Formal methods offer an alternative: mathematical guarantees
of correctness that eliminate both human error and procedural ambiguity. This
subsection examines recent formal methods applications in nuclear control and
identifies the verification gap that remains for autonomous hybrid systems.

\subsubsection{HARDENS}

project did not address continuous dynamics of nuclear reactor physics. Real
reactor safety depends on the interaction between continuous
processes---temperature, pressure, neutron flux---evolving in response to
discrete control decisions. HARDENS verified the discrete controller in
isolation but not the closed-loop hybrid system behavior.

\textbf{LIMITATION:} \textit{HARDENS addressed discrete control logic without
continuous dynamics or hybrid system verification.} Verifying discrete control
logic alone provides no guarantee that the closed-loop system exhibits desired
continuous behavior such as stability, convergence to setpoints, or maintained
safety margins.

HARDENS also faced deployment maturity constraints beyond the technical
limitation of omitting continuous dynamics. The project produced a demonstrator
system at Technology Readiness Level 2--3
(analytical proof of concept with laboratory breadboard validation) rather than
a deployment-ready system validated through extended operational testing. The
NRC Final Report explicitly notes~\cite{Kiniry2024} that all material is

operational contexts, and regulatory acceptance of formal methods as
primary assurance evidence.

\subsubsection{Sequent Calculus and Differential Dynamic Logic}

HARDENS verified discrete control logic but omitted continuous dynamics—a fundamental limitation for hybrid systems. Other researchers pursued a complementary approach: extending temporal logics to handle hybrid systems directly. This work produced differential dynamic logic (dL). dL introduces two additional operators
|
||||
There has been additional work to do verification of hybrid systems by extending
|
||||
the temporal logics directly. The result has been the field of differential
|
||||
dynamic logic (dL). dL introduces two additional operators
|
||||
into temporal logic: the box operator and the diamond operator. The box operator
|
||||
\([\alpha]\phi\) states that for some region \(\phi\), the hybrid system
|
||||
\(\alpha\) always remains within that region. In this way, it is a safety
|
||||
@ -167,8 +191,9 @@ liveness property.
|
||||
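The box and diamond semantics can be made concrete on a sampled trajectory. The sketch below is purely illustrative and ours, not from the dL literature: it checks an always-in-region property and an eventually-reaches property on one simulated run of an assumed scalar system. A genuine dL proof must cover all runs of the hybrid program, which sampling cannot establish.

```python
# Illustrative only: checking [alpha]phi ("always in region") and
# <alpha>phi ("eventually reaches region") on ONE sampled trajectory.
# A real dL proof quantifies over all trajectories; this sketch does not.

def simulate(x0, f, dt, steps):
    """Forward-Euler sampling of dx/dt = f(x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

# Assumed stable dynamics: dx/dt = -(x - 300), settling toward 300
# (for example, a temperature in degrees C).
traj = simulate(350.0, lambda x: -(x - 300.0), dt=0.1, steps=200)

box_safe = all(280.0 <= x <= 360.0 for x in traj)        # [alpha](280 <= x <= 360)
diamond_reach = any(abs(x - 300.0) < 1.0 for x in traj)  # <alpha>(|x - 300| < 1)
```

On this trajectory both the safety-style and liveness-style checks hold, mirroring the two modalities; the sampled check is a sanity test, not a proof.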
While dL allows for the specification of these liveness and safety properties, actually proving them for a given hybrid system is difficult. Automated proof assistants such as KeYmaera X exist to help develop proofs of systems using dL, but fail for reasonably complex hybrid systems. State space explosion and non-terminating solutions prevent creating system proofs using dL.
%Source: that one satellite tracking paper that has the problem with the
%gyroscopes overloding and needing to dump speed all the time
Approaches have been made to alleviate

design loop for complex systems like nuclear reactor startup procedures.

\subsection{Summary: The Verification Gap}

\textbf{What has been done?} Human operators provide operational flexibility but introduce persistent reliability limitations—four decades of training improvements have failed to eliminate them. Formal methods provide correctness guarantees but have not scaled to complete hybrid control design. HARDENS verified discrete logic without continuous dynamics. Differential dynamic logic expresses hybrid properties but requires post-design expert analysis and does not scale to system synthesis.

\textbf{What are the limits of current practice?} No existing methodology synthesizes provably correct hybrid controllers from operational procedures with verification integrated into the design process. This gap—between discrete-only formal methods and post-hoc hybrid verification—prevents autonomous nuclear control with end-to-end correctness guarantees. Closing this gap enables autonomous control that addresses the economic constraints threatening small modular reactor viability while maintaining the safety assurance the nuclear industry requires.

Section 3 presents our methodology for bridging this gap through compositional hybrid verification, establishing what is new and why it will succeed.

% ----------------------------------------------------------------------------
% 1. INTRODUCTION AND HYBRID SYSTEMS DEFINITION
% ----------------------------------------------------------------------------
Previous approaches to autonomous control verified either discrete switching logic or continuous control behavior—never both simultaneously. Continuous controllers rely on extensive simulation trials for validation. Discrete switching logic undergoes simulated control room testing and human factors research. Neither method provides rigorous guarantees of control system behavior despite consuming enormous resources. HAHACS bridges this gap by composing formal methods from computer science with control-theoretic verification, formalizing reactor operations using hybrid automata.

Hybrid system verification faces a fundamental challenge: the interaction between discrete and continuous dynamics. Discrete transitions change the governing vector field, creating discontinuities in system behavior. Traditional verification techniques—designed for purely discrete or purely continuous systems—cannot handle this interaction directly. Our methodology decomposes the problem: we verify discrete switching logic and continuous mode behavior separately, then compose them to reason about the complete hybrid system. This two-layer approach mirrors reactor operations, where discrete supervisory logic determines which control mode is active, and continuous controllers govern plant behavior within each mode.

Building a high-assurance hybrid autonomous control system (HAHACS) requires a mathematical description of the system. This work draws on automata theory, temporal logic, and control theory. A hybrid system is a dynamical system with both continuous and discrete states. This proposal discusses continuous autonomous hybrid systems specifically—systems with no external input where continuous states do not change instantaneously when discrete states change. These continuous states are physical quantities that remain Lipschitz continuous. We follow the nomenclature from the Handbook on Hybrid Systems Control~\cite{HANDBOOK ON HYBRID SYSTEMS}, redefined here for convenience:

\begin{equation}
H = (Q, \mathcal{X}, Init, f, \mathcal{E}, \mathcal{G}, \mathcal{R}, Inv)
\end{equation}
where:
\begin{itemize}
 \item $Q$: the finite set of discrete modes
 \item $\mathcal{X}$: the continuous state space
 \item $Init$: the set of allowed initial states
 \item $f$: the vector fields governing continuous evolution in each mode
 \item $\mathcal{E}$: the set of discrete transitions between modes
 \item $\mathcal{G}$: the guard conditions on discrete transitions
 \item $\mathcal{R}$: the reset relation on transitions (the identity here, since continuous states do not jump)
 \item $Inv$: safety invariants on the continuous dynamics
\end{itemize}

Creating a HAHACS requires constructing such a tuple together with proof artifacts that demonstrate the control system's actual implementation satisfies its intended behavior.

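As an informal illustration of the tuple, a hybrid automaton might be represented in code as follows. The field names and the two-mode example are our assumptions for the sketch, not components of the proposal's toolchain.

```python
# Illustrative only: a container mirroring the hybrid automaton components
# used in the text (modes Q, vector fields f, transitions, guards G,
# invariants Inv). The two-mode toy plant is an assumption.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = List[float]                      # a point in the continuous state space X
Guard = Callable[[State], bool]          # predicate over X
VectorField = Callable[[State], State]   # dx/dt = f(x)

@dataclass
class HybridAutomaton:
    modes: List[str]                          # Q: discrete modes
    fields: Dict[str, VectorField]            # f: one vector field per mode
    edges: List[Tuple[str, str]]              # E: discrete transitions
    guards: Dict[Tuple[str, str], Guard]      # G: guard per transition
    invariants: Dict[str, Guard]              # Inv: safety invariant per mode

# Toy plant: "heatup" ramps the state toward 300, "hold" regulates it there.
ha = HybridAutomaton(
    modes=["heatup", "hold"],
    fields={"heatup": lambda x: [2.0],
            "hold": lambda x: [-(x[0] - 300.0)]},
    edges=[("heatup", "hold")],
    guards={("heatup", "hold"): lambda x: x[0] >= 300.0},
    invariants={"heatup": lambda x: x[0] <= 320.0,
                "hold": lambda x: 280.0 <= x[0] <= 320.0},
)
```

The proof artifacts mentioned above would then be statements about such an object, for example that every trajectory respects the invariant of its active mode.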
\textbf{What is new?} Reactive synthesis, reachability analysis, and barrier certificates each exist independently. Our contribution is the architecture that composes them into a complete methodology for hybrid control synthesis. The novelty lies in three innovations. First, we use discrete synthesis to define entry/exit/safety contracts that bound continuous verification, inverting the traditional approach of verifying a complete hybrid system globally. Second, we classify continuous modes by objective (transitory, stabilizing, expulsory) to select appropriate verification tools, enabling mode-local analysis with provable composition. Third, we leverage existing procedural structure to avoid global hybrid system analysis, making the approach tractable for complex systems like nuclear reactor startup. No prior work integrates these techniques into a systematic design methodology from procedures to verified implementation.

\textbf{Why will it succeed?} Three factors ensure practical feasibility. First, nuclear procedures already decompose operations into discrete phases with explicit transition criteria—we formalize existing structure rather than impose artificial abstractions. Second, mode-level verification avoids the state explosion that makes global hybrid system analysis intractable, keeping computational complexity bounded by verifying each mode against local contracts. Third, the Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility. We demonstrate feasibility on production control systems with realistic reactor models, not merely prove it in principle.

\begin{figure}
\centering
% figure body elided
\end{figure}

\subsection{System Requirements, Specifications, and Discrete Controllers}

The eight-tuple definition formalizes discrete modes, continuous dynamics, guards, and invariants. This subsection shows how to construct such systems from existing operational knowledge. Nuclear operations already possess a natural hybrid structure that maps directly to this automaton formalism.

Human control of nuclear power divides into three scopes: strategic, operational, and tactical. Strategic control represents high-level, long-term decision making for the plant. This level has objectives that are complex and economic in scale, such as managing labor needs and supply chains to optimize scheduled maintenance and downtime. The time scale at this level is long, often spanning months or years. The lowest level is the tactical level: the individual control of pumps, turbines, and chemistry. Nuclear power plants today have already automated tactical control somewhat, generally considered ``automatic control'' when autonomous. These controls are almost always continuous systems with a direct impact on the physical state of the plant. Tactical control objectives include maintaining pressurizer level, maintaining core temperature, or adjusting reactivity with a chemical shim.

The operational control scope links these two extremes and represents the primary responsibility of human operators today. Operational control takes the current strategic objective and implements tactical control objectives to drive the plant toward strategic goals, bridging high-level and low-level goals.

Consider an example: a strategic goal may be to perform refueling at a certain time, while the tactical level of the plant is currently focused on maintaining a certain core temperature. The operational level issues the shutdown procedure, using several smaller tactical goals along the way to achieve this objective.

This structure reveals why the operational and tactical levels fundamentally form a hybrid controller. The tactical level represents continuous evolution of the plant according to the control input and control law, while the operational level represents discrete state evolution that determines which tactical control law to apply. We target this operational level for autonomous control.
%Say something about autonomous control systems near here?

\begin{figure}
% figure body elided
\end{figure}

This operational control level is the main reason human operators are required in nuclear control today. The hybrid nature of this control system makes it difficult to prove a controller will perform according to strategic requirements; unified infrastructure for building and verifying hybrid systems does not currently exist. Humans fill this layer because their general intelligence provides a safe way to manage the hybrid nature of the system. These operators use prescriptive operating manuals to perform their control with strict procedures on what control to implement at a given time. These procedures are the key to the operational control scope.

Constructing a HAHACS leverages two key observations about current practice. First, operational scope control is effectively discrete control. Second, operating procedures describe the rules for implementing this control before implementation. We must completely describe a HAHACS's intended behavior before constructing it. The behavior of any control system originates in requirements: statements about what the system must do, must not do, and under what conditions. For nuclear systems, these requirements derive from multiple sources including regulatory mandates,

these requirements with sufficient precision that they can serve as the foundation for autonomous control system synthesis and verification. We can build these requirements using temporal logic.

Temporal logic provides powerful semantics for building systems with complex but deterministic behavior. It extends classical propositional logic with operators that express properties over time. Using temporal logic, we can relate discrete control modes to one another and define all the requirements of a HAHACS. The guard conditions $\mathcal{G}$ are defined by determining boundary conditions between discrete states and specifying their behavior. Continuous mode invariants can also be expressed as temporal logic statements. These specifications form the basis of any proofs about a HAHACS and constitute the fundamental truth statements about what the behavior of the system is designed to be.

Discrete mode transitions include predicates—Boolean functions over the continuous state space: $p_i: \mathcal{X} \rightarrow \{\text{true}, \text{false}\}$. These predicates formalize conditions like ``coolant temperature exceeds 315°C'' or ``pressurizer level is between 30\% and 60\%.'' We do not impose this discrete abstraction artificially. Operating procedures for nuclear systems already define go/no-go conditions as discrete predicates. Design basis safety analysis determined these thresholds; decades of operational experience have validated them. Our methodology assumes this domain knowledge exists and provides a framework to formalize it. The approach is feasible for nuclear applications because generations of nuclear engineers have already done the hard work of defining safe operating boundaries.

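As a sketch of how such go/no-go conditions become Boolean functions over the continuous state, consider the following; the state-dictionary layout and the composite guard are illustrative assumptions, using the thresholds quoted in the text.

```python
# Illustrative predicates p_i: X -> {true, false}, mirroring the example
# go/no-go conditions in the text. The plant-state layout is assumed.

def coolant_temp_exceeds_315(state: dict) -> bool:
    """p1: coolant temperature exceeds 315 degrees C."""
    return state["coolant_temp_C"] > 315.0

def pressurizer_level_in_band(state: dict) -> bool:
    """p2: pressurizer level is between 30% and 60%."""
    return 30.0 <= state["pressurizer_level_pct"] <= 60.0

# A guard condition is typically a conjunction of such predicates.
def heatup_exit_guard(state: dict) -> bool:
    return coolant_temp_exceeds_315(state) and pressurizer_level_in_band(state)
```
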
Linear temporal logic (LTL) is particularly well-suited for specifying reactive systems. LTL formulas are built from atomic propositions

eventually reaches operating temperature''), and response properties (``if coolant pressure drops, the system initiates shutdown within bounded time'').

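The LTL patterns just mentioned can be approximated over finite sampled traces. The sketch below is illustrative only: true LTL semantics are defined over infinite traces, and the trace, predicates, and response bound are our assumptions.

```python
# Finite-trace approximations of the LTL patterns in the text.
# Real LTL is interpreted over infinite traces; this is only a sketch.

def always(trace, p):                 # G p
    return all(p(s) for s in trace)

def eventually(trace, p):             # F p
    return any(p(s) for s in trace)

def responds_within(trace, trigger, response, k):
    """Whenever `trigger` holds, `response` holds within k steps."""
    for i, s in enumerate(trace):
        if trigger(s) and not any(response(t) for t in trace[i:i + k + 1]):
            return False
    return True

# Assumed trace of (coolant_pressure, shutdown_initiated) samples.
trace = [(100, False), (80, False), (60, False), (55, True), (55, True)]

low_pressure = lambda s: s[0] < 70
shutdown = lambda s: s[1]
```

On this trace the bounded-response property ``low pressure is answered by shutdown within two steps'' holds, while ``pressure is always low'' does not.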
We will use FRET (Formal Requirements Elicitation Tool) to build these temporal logic statements. NASA developed FRET for high-assurance timed systems. FRET provides an intermediate language between temporal logic and natural language, enabling rigid definitions of temporal behavior through syntax accessible to engineers without formal methods expertise. This accessibility is crucial for industrial feasibility: reducing required expert knowledge makes these tools adoptable by the current workforce.

A key feature of FRET is the ability to start with logically imprecise statements and consecutively refine them into well-posed specifications. We can

room for interpretation is a weakness that must be addressed.

% ----------------------------------------------------------------------------
% 3. DISCRETE CONTROLLER SYNTHESIS
% ----------------------------------------------------------------------------

With system requirements defined as temporal logic specifications, we now build the discrete control system through reactive synthesis. Reactive synthesis automates the creation of reactive programs from temporal logic specifications. A reactive program takes an input for a given state and produces an output. Our systems fit this model: the current discrete state and status of guard conditions form the input; the next discrete state forms the output.

Reactive synthesis solves the following problem: given an LTL formula $\varphi$ that specifies desired system behavior, automatically construct a finite-state

controller can exist. This realizability check is itself valuable: an unrealizable specification indicates conflicting or impossible requirements in the original procedures.

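To suggest what a synthesized artifact looks like operationally, the following hand-written stand-in shows the kind of deterministic finite-state controller reactive synthesis produces. In the actual methodology the transition table would be derived automatically from the LTL specification; the mode and observation names here are invented.

```python
# A hand-written stand-in for the KIND of artifact reactive synthesis
# produces: a deterministic finite-state controller mapping the current
# discrete mode and an observed guard status to the next mode. In the real
# methodology this table is derived from the spec, not written by hand.

TRANSITIONS = {
    ("startup", "at_power"): "steady_state",
    ("steady_state", "low_pressure"): "shutdown",
    ("steady_state", "nominal"): "steady_state",
}

def step(mode: str, observation: str) -> str:
    # Undefined (mode, observation) pairs hold the current mode.
    return TRANSITIONS.get((mode, observation), mode)

mode = "startup"
mode = step(mode, "at_power")
mode = step(mode, "low_pressure")
```
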
Reactive synthesis offers a decisive advantage: the discrete automaton requires no human engineering of its implementation. The resultant automaton is correct by construction, eliminating human error at the implementation stage entirely. Human designers focus their effort where it belongs: on specifying system behavior. This has two critical implications. First, it makes discrete controller creation tractable. The reasons the controller changes between modes can be traced back to the specification and thus to any requirements, providing a trace for liability and justification of system behavior. Second, discrete control decisions made by humans depend on the human operator operating correctly. Humans are intrinsically probabilistic and cannot eliminate human error. By defining the behavior of this system using temporal logics and synthesizing the controller using deterministic algorithms, we are assured that strategic decisions will always be made according to operating procedures.

\subsection{Continuous Control Modes}

Reactive synthesis produces a provably correct discrete controller from operating procedures. This discrete controller determines when to switch between modes—but hybrid control requires more. The continuous dynamics executing within each discrete mode must also be verified to ensure the complete system behaves correctly.

This subsection describes the continuous control modes that execute within each discrete state and explains how we verify they satisfy the requirements imposed by the discrete layer. Three mode types—transitory, stabilizing, and expulsory—require different verification approaches.

This methodology's scope regarding continuous controller design requires clarification. This work verifies continuous controllers—it does not synthesize them. The distinction parallels model checking in software verification: model checking verifies whether a given implementation satisfies its specification without prescribing how to write the software. We assume engineers can design continuous controllers using standard control theory techniques. Our contribution is the verification framework that confirms candidate controllers compose correctly with the discrete layer to produce a safe hybrid system.

The operational control scope defines go/no-go decisions that determine what kind of continuous control to implement. The entry or exit conditions of a discrete state are the guard conditions $\mathcal{G}$ that define the boundaries for each continuous controller's allowed state-space region. These continuous controllers all share a common state space, but each individual continuous control mode operates within its own partition—defined by the discrete state $q_i$ and the associated guards. This partitioning of the continuous state space among several discrete vector fields has traditionally posed a difficult problem for validation and verification. The discontinuity of the vector fields at discrete state interfaces makes reachability analysis computationally expensive, and analytic solutions often become intractable \cite{MANYUS THESIS}.

We circumvent these issues by designing our hybrid system from the bottom up with verification in mind. The discrete transitions define each continuous control mode's input and output sets clearly \textit{a priori}.

Each discrete mode $q_i$ provides three key pieces of information for continuous controller design:
\begin{enumerate}
\item \textbf{Entry conditions:} $\mathcal{X}_{entry,i} \subseteq \mathcal{X}$,
the set of possible initial states when entering this mode
\item \textbf{Exit conditions:} $\mathcal{X}_{exit,i} \subseteq \mathcal{X}$,
the set of target states that satisfy the mode's objective
\item \textbf{Safety invariants:} $\mathcal{X}_{safe,i} \subseteq \mathcal{X}$,
the region the system must never leave while in this mode
\end{enumerate}

The continuous controller for mode $q_i$ must drive the system from any entry state to a state in $\mathcal{X}_{exit,i}$ while remaining within $\mathcal{X}_{safe,i}$.

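One way to carry these three sets through a toolchain is as a per-mode contract record. The interval representation, names, and numeric bounds below are illustrative assumptions for a one-dimensional state.

```python
# Illustrative per-mode contract: entry, exit, and safety sets as intervals
# over a one-dimensional continuous state. Set shapes and bounds are assumed.
from dataclasses import dataclass

@dataclass
class ModeContract:
    entry: tuple   # (lo, hi) for X_entry,i
    exit: tuple    # (lo, hi) for X_exit,i
    safe: tuple    # (lo, hi) for X_safe,i

    def contains(self, interval, x):
        lo, hi = interval
        return lo <= x <= hi

# Assumed contract for a heatup mode: enter cold, exit near operating
# temperature, never exceed the safety envelope.
heatup = ModeContract(entry=(20.0, 50.0), exit=(300.0, 315.0), safe=(0.0, 320.0))
```
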
We classify continuous controllers into three types based on their objectives: transitory, stabilizing, and expulsory. Each type has distinct verification requirements that determine which formal methods tools are appropriate. The following subsections detail each mode type and its verification approach.

%%% NOTES (Section 4):
% - Add figure showing the relationship between entry/exit/safety sets

\subsubsection{Transitory Modes}

Transitory modes move the plant from one discrete operating condition to another. Their purpose is to execute transitions: start from entry conditions, reach exit conditions, and maintain safety invariants throughout. Examples include power ramp-up sequences, cooldown procedures, and load-following maneuvers.

The control objective for a transitory mode can be stated formally. Given
entry conditions $\mathcal{X}_{entry}$, exit conditions $\mathcal{X}_{exit}$,
and mode dynamics $f_i$, verification requires that an exit state be reachable
within time horizon $T$:
\[
\text{Reach}(\mathcal{X}_{entry}, f_i, [0,T]) \cap \mathcal{X}_{exit} \neq \emptyset
\]
The discrete controller defines clear boundaries in continuous state space,
making the verification problem for each transitory mode well-posed. We know
the possible initial conditions, the target conditions, and the safety
envelope. The verification task confirms that the candidate continuous
controller achieves the objective from all possible starting points.
Several tools exist for computing reachable sets of hybrid systems, including
CORA, Flow*, SpaceEx, and JuliaReach. The choice of tool will depend on the
dynamics of each control mode, with analysis fidelity appropriate to the
reactor models available.
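To make the reachability condition concrete, consider a toy scalar mode of our
own construction (not a reactor model): dynamics
$\dot{x} = f(x) = -\lambda(x - x^\ast)$ with $\lambda > 0$, entry set
$\mathcal{X}_{entry} = [x^\ast + a,\, x^\ast + b]$ with $0 < a < b$, and exit
set $\mathcal{X}_{exit} = [x^\ast,\, x^\ast + \epsilon]$. Every trajectory
satisfies $x(t) = x^\ast + (x(0) - x^\ast)e^{-\lambda t}$, so choosing the
horizon
\[
T \geq \frac{1}{\lambda}\ln\frac{b}{\epsilon}
\]
gives $0 \leq x(T) - x^\ast \leq b\,e^{-\lambda T} \leq \epsilon$ for every
entry state, and hence
$\text{Reach}(\mathcal{X}_{entry}, f, [0,T]) \cap \mathcal{X}_{exit} \neq
\emptyset$. Reachability tools establish the same kind of guarantee for the
nonlinear, multi-dimensional dynamics of actual control modes.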
\subsubsection{Stabilizing Modes}

Transitory modes drive the system toward exit conditions. Stabilizing modes,
in contrast, maintain the system within a desired operating region
indefinitely rather than driving it toward an exit condition. Examples include
steady-state power operation, hot standby, and load-following at constant
power level. Reachability analysis may not suit validation of stabilizing
modes. Instead, we use barrier certificates, which analyze the system dynamics
to determine whether any trajectory can cross a given boundary. This property
is exactly what defines the validity of a stabilizing continuous control mode.
Formally, a barrier certificate (or control barrier function) is a scalar
function $B: \mathcal{X} \rightarrow \mathbb{R}$ that certifies forward
invariance of a safe set. The idea parallels Lyapunov functions for stability:
rather than computing trajectories explicitly, we find a certificate function
whose properties guarantee the desired behavior. For a safe set
$\mathcal{C} = \{x : B(x) \geq 0\}$ and dynamics $\dot{x} = f(x,u)$, the
controller must ensure
\[
\sup_{u}\, \nabla B(x) \cdot f(x,u) \geq -\alpha(B(x))
\quad \text{for all } x \in \mathcal{C},
\]
for some extended class-$\mathcal{K}$ function $\alpha$; this condition
guarantees that no trajectory starting in $\mathcal{C}$ ever leaves the safe
set under the certified controller.
\subsubsection{Expulsory Modes}

Transitory and stabilizing modes handle nominal operations. Expulsory modes
handle off-nominal conditions. When the plant deviates from expected behavior,
expulsory modes take over to ensure safety. These continuous controllers are
designed for robustness rather than optimality. The control objective is to
drive the plant to a safe shutdown state from potentially anywhere in the
state space, under degraded or uncertain dynamics. Examples include emergency
core cooling, reactor SCRAM sequences, and controlled depressurization
procedures.

Proving controller correctness through reachability and barrier certificates
makes detecting physical failures straightforward. The controller cannot be
incorrect for the nominal plant model, so when an invariant is violated, the
plant dynamics must have changed. The HAHACS identifies faults when continuous
controllers violate discrete boundary conditions, a direct consequence of
verified nominal control modes: unexpected behavior implies off-nominal
conditions.
The mathematical formulation for expulsory mode verification differs from
transitory modes in two key ways. First, the entry conditions may span nearly
the entire state space, since a fault can occur at any point during operation.
Second, the dynamics are uncertain or degraded, so robustness and safety
margins will matter more than performance during emergency shutdown.
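A sketch of the resulting requirement (our formalization, with $\mathcal{F}$
denoting the admissible set of degraded dynamics and $\xi_{f,x_0}$ the
trajectory from $x_0$ under $f$):
\[
\forall f \in \mathcal{F},\ \forall x_0 \in \mathcal{X}_{entry}:\quad
\xi_{f,x_0}(T) \in \mathcal{X}_{shutdown}
\ \text{ and }\
\xi_{f,x_0}(t) \in \mathcal{X}_{safe}\ \ \forall t \in [0,T].
\]
Robust reachability, which quantifies over the uncertainty set rather than a
single nominal model, is the natural verification target here.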
\subsection{Industrial Implementation}

The complete methodology (procedure formalization, discrete synthesis, and
continuous verification across three mode types) must now be validated on
realistic systems using industrial-grade hardware to demonstrate practical
feasibility and advance from analytical concepts (TRL 2--3) to laboratory
demonstration (TRL 5).
This research will leverage the University of Pittsburgh Cyber Energy Center's
partnership with Emerson to implement and test the HAHACS methodology on
production control equipment. Emerson's Ovation distributed control system is
widely deployed in commercial power generation, and this industrial
partnership is central to the success and impact of this work. We will
directly address the gap of verification and validation methods for these
systems and industry adoption by forming a two-way exchange of knowledge
between the laboratory and commercial environments. This work stands to be
successful with Emerson implementation because we will have access to system
experts at Emerson to help with the fine details of using the Ovation system.
At the same time, we will have the benefit of transferring technology directly
to industry through direct collaboration in this research, while gaining an
excellent perspective on how our research outcomes can best align with
customer needs.
This methodology, from procedure formalization through discrete synthesis to
continuous verification and hardware implementation, establishes the complete
research approach. We have shown what is new (compositional hybrid
verification integrated into the design process) and why it will succeed
(leveraging existing procedural structure, mode-level analysis, and industrial
collaboration). Section 4 defines how we measure success: not through
theoretical contributions alone, but through Technology Readiness Level
advancement that demonstrates both correctness and practical implementability.

%%% NOTES (Section 5):
% - Get specific details on ARCADE interface from Emerson collaboration
% - Mention what startup sequence will be demonstrated (cold shutdown →
\section{Metrics for Success}

\textbf{How do we measure success?} This research advances through Technology
Readiness Levels, progressing from fundamental concepts (TRL 2--3) to
validated prototype demonstration (TRL 5), where system components operate
successfully in a relevant laboratory environment. This section explains why
TRL advancement provides the most appropriate success metric and defines
specific criteria for TRL 5.
Technology Readiness Levels provide the ideal success metric: they explicitly
measure the gap between academic proof-of-concept and practical
deployment---precisely what this work aims to bridge. Academic metrics like
papers published or theorems proved cannot capture practical feasibility.
Empirical metrics like simulation accuracy or computational speed cannot
demonstrate theoretical rigor. Only TRLs measure both simultaneously.
Advancing from TRL 3 to TRL 5 requires maintaining theoretical rigor while
progressively demonstrating practical feasibility. Formal verification must
remain valid as the system moves from individual components to integrated
operation under realistic assumptions. This research succeeds if it achieves
TRL 5 by demonstrating a
complete autonomous hybrid controller with formal correctness guarantees
operating on industrial control hardware through hardware-in-the-loop testing
in a relevant laboratory environment. This establishes both theoretical
validity and practical feasibility, proving the methodology produces verified
controllers implementable with current technology.

TRL advancement provides clear success criteria, but reaching TRL 5 depends on
several critical assumptions. If these assumptions prove false, the research
may stall at lower readiness levels. The next section identifies the primary
risks, their early warning indicators, and contingency plans that preserve
research value even if core assumptions fail.
\section{Risks and Contingencies}

This research relies on several critical assumptions that, if invalidated,
require scope adjustment or methodological revision. Four primary risks could
prevent successful completion: computational tractability of synthesis and
verification, complexity of the discrete-continuous interface, completeness of
procedure formalization, and hardware-in-the-loop integration. Each risk has
associated early warning indicators and contingency plans that preserve
research value even if core assumptions prove false. The staged project
structure ensures that partial success yields publishable results while
clearly identifying remaining barriers to deployment.
\subsection{Computational Tractability of Synthesis}

The first major assumption is that formalized startup procedures will yield
automata small enough for efficient synthesis and verification. Reactive
synthesis scales exponentially with specification complexity, creating the
risk that temporal logic specifications derived from complete startup
procedures may produce automata with thousands of states. Such large automata
would require synthesis times exceeding days or weeks, preventing us from
demonstrating the complete methodology within project timelines. Reachability
analysis for continuous modes with high-dimensional state spaces may similarly
prove computationally intractable. Either barrier would constitute a
fundamental obstacle to achieving our research objectives.
Several indicators would provide early warning of computational tractability
problems. Synthesis times exceeding 24 hours for simplified procedure subsets
would indicate that complete procedures are unlikely to remain tractable.
Reachability analysis failing to converge within reasonable time bounds would
show that continuous mode verification cannot be completed with available
computational resources.
If computational tractability becomes the limiting factor, we reduce scope to
a minimal viable startup sequence covering only cold shutdown to criticality
to low-power hold, omitting power ascension and other operational phases. This
reduced sequence still demonstrates the complete methodology (procedure
formalization, discrete synthesis, continuous verification, and hardware
implementation) while reducing computational burden. The research contribution
remains valid: we prove that formal hybrid control synthesis is achievable for
safety-critical nuclear applications and clearly identify which operational
complexities exceed current computational capabilities. We document the
limitation as a scaling constraint requiring future work, not a methodological
failure.
\subsection{Discrete-Continuous Interface Formalization}

Computational tractability represents one dimension of risk. The second
critical assumption poses a more fundamental challenge: mapping boolean guard
conditions in temporal logic to the continuous state boundaries required for
mode transitions. This interface represents the fundamental challenge of
hybrid systems: relating discrete switching logic to continuous dynamics.
Temporal logic operates on boolean predicates, while continuous control
requires quantitative boundaries in the state space, and the mapping between
the two must be made mathematically precise. If the interface resists
formalization in full generality, the contingency is to formalize the
safety-critical transitions first, concentrating formalization effort and
computational resources where they matter most for safety assurance.
\subsection{Procedure Formalization Completeness}

The previous two risks concern verification infrastructure: computational
scaling and mathematical formalization. The third assumption addresses the
source material itself: that existing startup procedures contain sufficient
detail and clarity for translation into temporal logic specifications. Nuclear
operating procedures, while extensively detailed, were written for human
operators who bring contextual understanding and adaptive reasoning to their
execution, so implicit assumptions may not translate directly. Examining
procedures across multiple reactor designs would reveal whether gaps are
design-specific or recurring ambiguities amenable to systematic resolution.
This cross-design analysis would
strengthen the generalizability of any proposed specification language
extensions, ensuring they address industry-wide practices rather than specific
quirks.

These risks and contingencies demonstrate that while the research faces real
challenges, each has identifiable early indicators and viable mitigation
strategies. The staged approach ensures valuable contributions even if core
assumptions prove invalid: partial success yields publishable results that
clearly identify remaining barriers to deployment. With risks addressed and
contingencies established, the next section examines broader impacts: who
cares about this work and why it matters now.
\section{Broader Impacts}

\textbf{Who cares? Why now?} The nuclear industry, datacenter operators, and
clean energy advocates all face the same economic constraint: high operating
costs driven by staffing requirements. Exponentially growing AI infrastructure
demands have made this constraint urgent.

Nuclear power presents both a compelling application domain and an urgent
economic challenge. Recent interest in powering artificial intelligence
infrastructure has renewed focus on small modular reactors (SMRs),
particularly for hyperscale datacenters requiring hundreds of megawatts of
continuous power. Deploying SMRs at datacenter sites minimizes transmission
losses and eliminates emissions from hydrocarbon-based alternatives. Nuclear
power economics at this scale, however, demand careful attention to operating
costs.
The U.S. Energy Information Administration's Annual Energy Outlook 2022
projects that advanced nuclear power entering service in 2027 will cost
\$88.24 per megawatt-hour~\cite{eia_lcoe_2022}. Datacenter electricity demand
is projected to reach 1,050 terawatt-hours annually by
2030~\cite{eesi_datacenter_2024}. Nuclear power supplying this demand would
generate total annual costs exceeding \$92 billion. Within this figure,
operations and maintenance represents a substantial component. The EIA
estimates that fixed O\&M costs alone account for \$16.15 per megawatt-hour,
with additional variable O\&M costs embedded in fuel and operating
expenses~\cite{eia_lcoe_2022}. Combined, O\&M-related costs represent
approximately 23--30\% of the total levelized cost of electricity, translating
to \$21--28 billion annually for projected datacenter demand.
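These figures follow from direct arithmetic on the cited EIA and EESI numbers:
\[
1{,}050\ \text{TWh/yr} \times \$88.24/\text{MWh}
= 1.05 \times 10^{9}\ \text{MWh/yr} \times \$88.24/\text{MWh}
\approx \$92.7\ \text{billion/yr},
\]
\[
23\%\text{--}30\% \times \$92.7\ \text{billion/yr}
\approx \$21\text{--}28\ \text{billion/yr}.
\]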
\textbf{What difference will it make?} This research directly addresses the
\$21--28 billion annual O\&M cost challenge through high-assurance autonomous
control, making small modular reactors economically viable for datacenter
power.
Current nuclear operations require full control room staffing for each
reactor, whether large conventional units or small modular designs. For large
reactors producing 1,000+ MW, staffing costs spread across substantial output.
Small modular reactors producing 50--300 MW face the same staffing
requirements with far lower output, making the per-megawatt cost prohibitive.
These staffing requirements drive the economic challenge that threatens SMR
deployment for datacenter applications. Synthesizing provably correct hybrid
controllers from formal specifications automates routine operational sequences
that currently require constant human oversight. A fundamental shift from
direct operator control to supervisory monitoring becomes possible, where
operators oversee multiple autonomous reactors rather than manually
controlling individual units.
The correct-by-construction methodology is critical for this transition:
regulators and operators will accept reduced staffing only for autonomy with
provable correctness guarantees. Demonstrating this approach in nuclear
power---one of the most regulated and safety-critical domains---will establish
both the technical feasibility and regulatory pathway for broader adoption
across critical infrastructure.
These broader impacts answer the final Heilmeier questions:

\textbf{Who cares?} The nuclear industry, datacenter operators, and anyone
facing high operating costs from staffing-intensive safety-critical control.

\textbf{Why now?} AI infrastructure demands have made nuclear economics
urgent, and formal methods tools have matured to the point where compositional
hybrid verification is achievable.

\textbf{What difference will it make?} Enabling autonomous control with
mathematical safety guarantees addresses a \$21--28 billion annual cost
barrier while establishing a generalizable framework for safety-critical
autonomous systems.

The next section presents the timeline for achieving these outcomes through a
structured 24-month research plan.
\section{Schedule, Milestones, and Deliverables}

\textbf{How long will it take?} This research will be conducted over six
trimesters (24 months) of full-time effort following the proposal defense in
Spring 2026. All work uses existing computational and experimental resources
available through the University of Pittsburgh Cyber Energy Center and NRC
Fellowship funding. The work progresses sequentially through three main
research thrusts before culminating in integrated demonstration and
validation.