This research develops autonomous control systems with mathematical guarantees of safe and correct behavior.

% INTRODUCTORY PARAGRAPH Hook
Extensively trained operators manage nuclear reactors by following detailed written procedures. Plant conditions guide their decisions when they switch between control objectives.

% Gap
Small modular reactors face a fundamental economic challenge: their per-megawatt staffing costs significantly exceed those of conventional plants, threatening viability. Autonomous control systems must manage complex operational sequences safely—without constant supervision—while providing assurance equal to or exceeding that of human-operated systems.

% APPROACH PARAGRAPH Solution
We combine formal methods from computer science with control theory to
build hybrid control systems that are correct by construction.

% Rationale
Hybrid systems mirror how operators work: discrete
logic switches between continuous control modes. Existing formal methods
generate provably correct switching logic but cannot handle continuous dynamics
during transitions. Control theory verifies continuous behavior but
cannot prove the correctness of discrete switching decisions.

% Hypothesis and Technical Approach
Our methodology bridges this gap through three stages. First, we translate written
operating procedures into temporal logic specifications using NASA's Formal
Requirements Elicitation Tool (FRET). FRET structures requirements into scope,
condition, component, timing, and response elements. Realizability
checking identifies conflicts and ambiguities before implementation.
Second, reactive synthesis generates deterministic automata that are provably
correct by construction.
Third, we design continuous controllers for each discrete mode using standard
control theory and verify them using reachability analysis. We classify continuous modes by their transition objectives. Assume-guarantee contracts and barrier certificates prove that mode transitions occur safely. Local verification of continuous modes
becomes possible without requiring global trajectory analysis across the entire hybrid system. This methodology is demonstrated on an
Emerson Ovation control system.
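
To make the first stage concrete, a single step of a hypothetical startup procedure could be written in FRET's structured style and read as a temporal-logic obligation. The mode and signal names below are invented purely for illustration, not taken from an actual procedure:

```latex
% Illustrative only: modes, signals, and timing are hypothetical.
% FRETish-style requirement (scope, condition, component, timing, response):
%   In startup mode, when averageTemp >= criticalTemp,
%   the controller shall within 2 seconds satisfy holdMode.
\[
  \mathbf{G}\bigl((\mathit{startup} \wedge \mathit{averageTemp} \ge \mathit{criticalTemp})
  \rightarrow \mathbf{F}_{\le 2\,\mathrm{s}}\ \mathit{holdMode}\bigr)
\]
```

Realizability checking would then flag, before any implementation exists, a hypothetical second requirement that forbids \(\mathit{holdMode}\) under the same condition.
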
% Pay-off
Autonomous control can then manage complex nuclear
power operations while maintaining safety guarantees, directly addressing the economic constraints threatening small modular reactor viability.

% OUTCOMES PARAGRAPHS
If this research is successful, we will be able to do the following:
% OUTCOME 1 Title
\item \textit{Synthesize written procedures into verified control logic.}
% Strategy
We will develop a methodology for converting written operating procedures
into formal specifications. Reactive synthesis tools generate
discrete control logic from these specifications.
% Outcome
Control engineers generate mode-switching controllers from regulatory
procedures with minimal formal methods expertise, reducing barriers to
high-assurance control systems.

\section{Goals and Outcomes}

% GOAL PARAGRAPH
This research develops autonomous hybrid control
systems with mathematical guarantees of safe and correct behavior.

% INTRODUCTORY PARAGRAPH Hook
Nuclear power plants require the highest levels of control system reliability.
Control system failures risk economic losses, service interruptions,
or radiological release.
% Known information
Nuclear plant operations rely on extensively trained human operators
who follow detailed written procedures and strict regulatory requirements to
manage reactor control. Plant conditions and procedural guidance inform their decisions when they switch between different control modes.

% Gap
This reliance on human operators prevents autonomous control and
creates a fundamental economic challenge for next-generation reactor designs.
Small modular reactors face per-megawatt staffing costs that far
exceed those of conventional plants, threatening economic viability.
The nuclear industry needs autonomous control systems that can manage complex
operational sequences safely without constant human supervision while providing
assurance equal to or exceeding that of human-operated systems.

% APPROACH PARAGRAPH Solution
We combine formal methods with control theory to build hybrid control
systems that are correct by construction.
% Rationale
Hybrid systems mirror how operators work: discrete
logic switches between continuous control modes. Existing formal methods
generate provably correct switching logic from written requirements but cannot handle continuous dynamics during transitions.
Control theory verifies continuous behavior but cannot prove the correctness of discrete switching decisions. This gap prevents end-to-end correctness guarantees.
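
The operator-style structure described here, discrete logic selecting among continuous control modes, can be sketched as a tiny hybrid automaton. The two modes, the guard threshold, and the dynamics below are invented purely for illustration:

```python
# Minimal hybrid-automaton sketch: discrete logic switching between two
# continuous control modes. Modes, dynamics, and thresholds are illustrative.

def simulate(t_end=200.0, dt=0.01):
    T, mode = 20.0, "heatup"          # initial temperature (arbitrary units)
    T_SWITCH, T_SET = 300.0, 305.0    # guard threshold and hold setpoint
    trace = []
    for _ in range(int(t_end / dt)):
        if mode == "heatup":
            dT = 2.0                  # constant ramp while heating up
            if T >= T_SWITCH:         # guard: discrete transition fires
                mode = "hold"
        else:                         # "hold": regulate toward the setpoint
            dT = 0.5 * (T_SET - T)
        T += dT * dt                  # Euler step of the active mode's dynamics
        trace.append((mode, T))
    return trace

trace = simulate()
assert {m for m, _ in trace} == {"heatup", "hold"}   # both modes were visited
assert all(T <= 320.0 for _, T in trace)             # temperature stays bounded
```

The discrete layer (the `if`/guard logic) is what reactive synthesis would generate; the per-mode dynamics are what control theory designs and verifies.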
% Hypothesis
Our approach closes this gap by synthesizing discrete mode transitions directly
from written operating procedures and verifying continuous behavior between
transitions. We formalize existing procedures into logical
specifications and verify continuous dynamics against transition requirements. This approach produces autonomous controllers provably free from design
defects. The University of Pittsburgh Cyber Energy Center provides access to industry collaboration and Emerson control hardware, ensuring that solutions developed here align with practical implementation
requirements.

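
To make "verifying continuous behavior between transitions" concrete, a barrier-certificate argument for a toy one-dimensional system can be sanity-checked numerically. The dynamics, sets, and candidate certificate below are invented for illustration; an actual proof would use symbolic or sum-of-squares methods rather than grid sampling:

```python
# Barrier-certificate sanity check for the toy system x' = -x.
# Initial set: |x| <= 1.  Unsafe set: x >= 2.  Candidate barrier: B(x) = x - 1.5.
# Conditions checked on a grid (a real proof would be symbolic):
#   1. B(x) <= 0 on the initial set
#   2. B(x) >  0 on the unsafe set
#   3. dB/dt = B'(x) * f(x) <= 0 wherever B(x) = 0

def f(x):          # continuous dynamics
    return -x

def B(x):          # candidate barrier certificate
    return x - 1.5

grid = [i / 1000.0 for i in range(-3000, 3001)]   # sample x in [-3, 3]

assert all(B(x) <= 0 for x in grid if abs(x) <= 1.0)           # condition 1
assert all(B(x) > 0 for x in grid if x >= 2.0)                 # condition 2
assert all(1.0 * f(x) <= 0 for x in grid if abs(B(x)) < 1e-9)  # condition 3
print("candidate barrier certificate passed all grid checks")
```

If the three conditions hold, no trajectory starting in the initial set can cross the level set \(B(x) = 0\) into the unsafe set.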
% OUTCOMES PARAGRAPHS
If this research is successful, we will be able to do the following:

\begin{enumerate}
% IMPACT PARAGRAPH Innovation
These three outcomes—procedure translation, continuous verification, and hardware demonstration—establish a complete methodology from regulatory documents to deployed systems.

\textbf{What is new?} We unify discrete synthesis with continuous verification to enable end-to-end correctness guarantees for hybrid systems.
Formal methods verify discrete logic; control theory verifies
continuous dynamics. No existing methodology bridges both with compositional
guarantees. This work establishes that bridge by treating discrete specifications
as contracts that continuous controllers must satisfy, enabling independent
verification of each layer while guaranteeing correct composition.

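
The contract view in this paragraph can be sketched in code. Everything below (the `Contract` shape, the heat-up mode, the numeric bounds) is a hypothetical illustration of assume-guarantee checking by sampling, not the project's actual tooling:

```python
# Assume-guarantee contract sketch (illustrative): a discrete specification is
# treated as a contract (assumption, guarantee) that a continuous mode must meet.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    assumption: Callable[[float], bool]   # condition on the mode's entry state
    guarantee: Callable[[float], bool]    # condition on the mode's exit state

def refines(exit_state_for, contract, entry_states):
    """Check (by sampling) that the mode maps every assumed entry state
    to an exit state satisfying the guarantee."""
    return all(
        contract.guarantee(exit_state_for(x))
        for x in entry_states if contract.assumption(x)
    )

# Toy mode: proportional heat-up toward a 300-degree setpoint for 10 time units.
def heatup(T0, k=0.5, T_set=300.0, t=10.0, dt=0.01):
    T = T0
    for _ in range(int(t / dt)):
        T += k * (T_set - T) * dt
    return T

c = Contract(assumption=lambda T: 20.0 <= T <= 100.0,
             guarantee=lambda T: 280.0 <= T <= 300.0)
samples = [20.0 + i for i in range(81)]   # entry temperatures 20..100
assert refines(heatup, c, samples)        # the mode satisfies its contract
```

Because each mode is checked only against its own contract, the continuous layer can be verified locally, and the synthesized discrete layer can rely on the guarantees without re-examining trajectories.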
% Outcome Impact
If successful, control engineers create autonomous controllers from
costs through increased autonomy. This research provides the tools to
achieve that autonomy while maintaining the exceptional safety record the
nuclear industry requires.

The following sections systematically answer the Heilmeier Catechism questions that define this research:
\begin{itemize}
\item \textbf{Section 2 (State of the Art):} What has been done? What are the limits of current practice?
\item \textbf{Section 3 (Research Approach):} What is new? Why will it succeed where prior work has not?

\subsection{Current Reactor Procedures and Operation}

Nuclear plant procedures form a hierarchy: normal operating procedures govern routine operations, abnormal operating procedures handle off-normal conditions, Emergency Operating Procedures (EOPs) manage design-basis accidents, Severe Accident Management Guidelines (SAMGs) address beyond-design-basis events, and Extensive Damage Mitigation Guidelines (EDMGs) cover catastrophic damage scenarios. These procedures must comply with 10 CFR 50.34(b)(6)(ii); NUREG-0899
provides development guidance~\cite{NUREG-0899, 10CFR50.34}. However, their development relies on expert judgment and simulator validation—not formal verification. Technical evaluation, simulator validation testing, and biennial review under 10 CFR 55.59~\cite{10CFR55.59} assess procedures rigorously, but this rigor cannot provide formal verification of key safety properties. No mathematical proof exists that procedures cover all possible plant states, that required actions complete within available timeframes, or that transitions between procedure sets maintain safety invariants.

\textbf{LIMITATION:} \textit{Procedures lack formal verification of correctness
and completeness.} Current procedure development relies on expert judgment and
simulator validation. No mathematical proof exists that procedures cover all
possible plant states, that required actions can be completed within available
timeframes, or that transitions between procedure sets maintain safety
invariants. Paper-based procedures cannot ensure correct application. Even
computer-based procedure systems lack the formal guarantees automated reasoning
could provide.

Nuclear plants operate with multiple control modes: automatic control, where the
reactor control system maintains target parameters through continuous reactivity
adjustment; manual control, where operators directly manipulate the reactor; and
various intermediate modes. In typical pressurized water reactor operation, the
reactor control system automatically maintains a floating average temperature
and compensates for power demand changes through reactivity feedback loops
alone. Safety systems, by contrast, already employ automation. Reactor
Protection Systems trip automatically on safety signals with millisecond
response times, and engineered safety features actuate automatically on accident
signals without requiring operator action.

The division between automated and human-controlled functions reveals the
fundamental challenge of hybrid control. Highly automated systems handle reactor
protection---automatic trips on safety parameters, emergency core cooling
actuation, containment isolation, and basic process
control~\cite{WRPS.Description, gentillon_westinghouse_1999}. Human operators
retain control of strategic decision-making: power level changes,
startup/shutdown sequences, mode transitions, and procedure implementation.

\subsection{Human Factors in Nuclear Accidents}

Procedures lack formal verification despite rigorous development, but this represents only half the reliability challenge. The second pillar of current practice—human operators who execute these procedures—introduces additional reliability limitations. Procedures define what to do; human operators determine when and how to apply them. Even perfectly written procedures cannot eliminate human error.

Current-generation nuclear power plants employ over 3,600 active NRC-licensed
reactor operators in the United States~\cite{operator_statistics}. These
shift supervisors~\cite{10CFR55}. Staffing typically requires at least two ROs
and one SRO for current-generation units~\cite{10CFR50.54}. Becoming a reactor
operator requires several years of training.

Human error persistently contributes to nuclear safety incidents despite decades
of improvements in training and procedures. This persistence motivates formal automated control with mathematical safety guarantees.
Under 10 CFR Part 55, operators hold legal authority to make critical decisions,
including departing from normal regulations during emergencies. The Three Mile
Island (TMI) accident demonstrated how personnel error, design
deficiencies, and component failures combine to cause disaster. Operators
misread confusing and contradictory indications, then shut off the emergency water
system~\cite{Kemeny1979}. The President's Commission on TMI identified a
fundamental ambiguity: placing responsibility for safe power plant operations on
the licensee without formally verifying that operators can fulfill this
responsibility does not guarantee safety. This tension between operational
flexibility and safety assurance remains unresolved—the person responsible for
reactor safety often becomes the root cause of failures.

Multiple independent analyses converge on a striking statistic: human error accounts for 70--80\% of
nuclear power plant events, compared to approximately
20\% for equipment failures~\cite{WNA2020}. More significantly, poor safety management and safety
culture—primarily human factors—caused
all severe accidents at nuclear power plants: Three Mile Island, Chernobyl, and
Fukushima Daiichi~\cite{hogberg_root_2013}. A detailed analysis
of 190 events at Chinese nuclear power plants from
2007--2020~\cite{zhang_analysis_2025} found that 53\% of events involved active
errors, while 92\% were associated with latent errors---organizational and
systemic weaknesses that create conditions for failure.

\textbf{LIMITATION:} \textit{Human factors impose fundamental reliability limits
that training alone cannot overcome.} Four decades of improvements have failed to eliminate human
error. These
limitations are fundamental to human-driven control, not remediable defects.

\subsection{Formal Methods}

The limitations are now clear: procedures lack formal verification, and human operators introduce persistent reliability issues despite four decades of training improvements. Formal methods offer an alternative approach: mathematical guarantees of correctness that eliminate both human error and procedural ambiguity. This subsection examines recent formal methods applications in nuclear control and identifies the verification gap that remains for autonomous hybrid systems.

\subsubsection{HARDENS: The State of Formal Methods in Nuclear Control}

The High Assurance Rigorous Digital Engineering for Nuclear Safety (HARDENS)
project represents the most advanced application of formal methods to nuclear
reactor control systems to date~\cite{Kiniry2024}.

HARDENS addressed a fundamental dilemma: existing U.S. nuclear control
rooms rely on analog technologies from the 1950s--60s. This technology incurs significant risk and
cost compared to modern control systems. The NRC contracted Galois, a formal methods firm, to demonstrate that
Model-Based Systems Engineering and formal methods could design, verify, and
implement a complex protection system meeting regulatory criteria at a fraction
of typical cost. The project delivered a Reactor Trip System (RTS)
implementation with full traceability from NRC Request for Proposals and IEEE
standards through formal architecture specifications to verified software.

HARDENS employed formal methods tools and techniques across the verification
hierarchy. High-level specifications used Lando, SysMLv2, and FRET (NASA Formal
synthesis generated verifiable C implementations and SystemVerilog hardware
implementations directly from Cryptol models---eliminating the traditional gap
between specification and implementation where errors commonly arise.

Despite its accomplishments, HARDENS has a fundamental limitation directly
relevant to hybrid control synthesis: the project addressed only discrete
digital control logic without modeling or verifying continuous reactor dynamics.
The Reactor Trip System specification and verification covered discrete state
transitions (trip/no-trip decisions), digital sensor input processing through
discrete logic, and discrete actuation outputs (reactor trip commands). The
project did not address the continuous dynamics of nuclear reactor physics. Real
reactor safety depends on the interaction between continuous
processes---temperature, pressure, neutron flux---evolving in response to
discrete control decisions. HARDENS verified the discrete controller in

\subsubsection{Differential Dynamic Logic: Post-Hoc Hybrid Verification}

HARDENS verified discrete control logic without continuous dynamics. Other researchers attacked the problem from the opposite direction: extending temporal logics to handle hybrid systems directly. This work produced differential dynamic logic (dL). dL introduces two additional operators into temporal logic: the box operator and the diamond operator. The box operator \([\alpha]\phi\) states that, for a region \(\phi\), every execution of the hybrid system \(\alpha\) remains within that region. In this way, it is a safety property.
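As a concrete illustration (our own toy example, not one drawn from HARDENS or the dL literature), a box-modality safety assertion for an exponentially cooling process, with gain \(k > 0\) and ambient temperature \(T_{\mathrm{env}} \le T_{\max}\), might be written as:

```latex
\[
  T \le T_{\max} \;\rightarrow\;
  \bigl[\, T' = -k\,(T - T_{\mathrm{env}}) \;\&\; T \ge T_{\mathrm{env}} \,\bigr]\;
  T \le T_{\max}
\]
```

The formula holds because \(T' \le 0\) throughout the evolution domain, so \(T \le T_{\max}\) is a differential invariant of the continuous dynamics.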

\subsection{Summary: The Verification Gap}

This section establishes the current state of practice by answering two Heilmeier questions:

\textbf{What has been done?} Human operators provide operational flexibility but introduce persistent reliability limitations that four decades of training improvements have failed to eliminate. Formal methods provide correctness guarantees but have not scaled to complete hybrid control design. HARDENS verified discrete logic without continuous dynamics. Differential dynamic logic expresses hybrid properties but requires post-design expert analysis and fails to scale to system synthesis.

\textbf{What are the limits of current practice?} No existing methodology synthesizes provably correct hybrid controllers from operational procedures with verification integrated into the design process. This gap between discrete-only formal methods and post-hoc hybrid verification prevents autonomous nuclear control with end-to-end correctness guarantees.

Two imperatives emerge. The economic imperative: small modular reactors cannot compete while their per-megawatt staffing costs match those of large conventional plants. The technical imperative: current approaches verify either discrete logic or continuous dynamics, never both compositionally. These limitations define the research opportunity. Section 3 bridges this gap by establishing what makes this approach new and why it will succeed where prior work has failed.

% ----------------------------------------------------------------------------
% 1. INTRODUCTION AND HYBRID SYSTEMS DEFINITION
% ----------------------------------------------------------------------------
Previous approaches to autonomous control verified either discrete switching logic or continuous control behavior, never both simultaneously. Continuous controllers rely on extensive simulation trials for validation; discrete switching logic undergoes simulated control room testing and human factors research. Neither method provides rigorous guarantees of control system behavior despite consuming enormous resources. HAHACS bridges this gap by composing formal methods from computer science with control-theoretic verification, formalizing reactor operations as hybrid automata.

Hybrid system verification faces a fundamental challenge: the interaction between discrete and continuous dynamics. Discrete transitions change the governing vector field, creating discontinuities in system behavior. Traditional verification techniques, designed for purely discrete or purely continuous systems, cannot handle this interaction directly. Our methodology decomposes the problem by verifying discrete switching logic and continuous mode behavior separately, then composes them to establish guarantees for the complete hybrid system. This two-layer approach mirrors reactor operations: discrete supervisory logic determines which control mode is active, while continuous controllers govern plant behavior within each mode.

Building a high-assurance hybrid autonomous control system (HAHACS) requires a mathematical description of the system. This work draws on automata theory, temporal logic, and control theory to provide that description. A hybrid system is a dynamical system with both continuous and discrete states. This proposal addresses continuous autonomous hybrid systems specifically: systems with no external input whose continuous states do not change instantaneously when discrete states change. These continuous states represent physical quantities that remain Lipschitz continuous. We follow the nomenclature from the Handbook on Hybrid Systems Control~\cite{HANDBOOK ON HYBRID SYSTEMS}, redefined here for convenience:
\begin{itemize}
\item \ldots
\item $Inv$: safety invariants on the continuous dynamics
\end{itemize}

Creating a HAHACS requires constructing this tuple together with proof artifacts that demonstrate the control system's actual implementation satisfies its intended behavior.
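To make the tuple concrete, the following minimal Python sketch (our illustration; the mode names, dynamics, and thresholds are invented for exposition) represents a one-dimensional, two-mode automaton and its execution rule: flow under the active mode's dynamics until a guard enables a jump, with the continuous state unchanged at the jump.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

State = float  # one-dimensional continuous state, e.g. coolant temperature in C

@dataclass
class HybridAutomaton:
    """Illustrative subset of the hybrid-automaton tuple: modes, flows, guards, invariants."""
    modes: Tuple[str, ...]
    flows: Dict[str, Callable[[State], float]]               # mode -> dx/dt
    guards: Dict[Tuple[str, str], Callable[[State], bool]]   # (mode, mode') -> jump condition
    invariants: Dict[str, Callable[[State], bool]]           # mode -> Inv

    def step(self, mode: str, x: State, dt: float) -> Tuple[str, State]:
        # Take an enabled discrete transition first; otherwise flow continuously.
        for (src, dst), guard in self.guards.items():
            if src == mode and guard(x):
                return dst, x  # continuous state does not change at the jump
        x_next = x + dt * self.flows[mode](x)
        assert self.invariants[mode](x_next), "mode invariant violated"
        return mode, x_next

# Two-mode example: heat up at 0.5 C/s until 280 C, then hold temperature.
ha = HybridAutomaton(
    modes=("heatup", "hold"),
    flows={"heatup": lambda x: 0.5, "hold": lambda x: 0.0},
    guards={("heatup", "hold"): lambda x: x >= 280.0},
    invariants={"heatup": lambda x: x <= 280.5, "hold": lambda x: 275.0 <= x <= 285.0},
)
```

Stepping repeatedly from `("heatup", 270.0)` switches to `hold` once the guard fires, mirroring how the discrete state selects the active vector field.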
\textbf{What is new in this research?} Reactive synthesis, reachability analysis, and barrier certificates each exist independently. This work composes them into a complete methodology for hybrid control synthesis through three key innovations:

\begin{enumerate}
\item \textbf{Contract-based decomposition:} Discrete synthesis defines entry/exit/safety contracts that bound continuous verification, inverting the traditional approach of global hybrid system verification.
\item \textbf{Mode classification:} Continuous modes classify by objective (transitory, stabilizing, expulsory), selecting appropriate verification tools and enabling mode-local analysis with provable composition.
\item \textbf{Procedure-driven structure:} Existing procedural structure avoids global hybrid system analysis, making the approach tractable for complex systems like nuclear reactor startup.
\end{enumerate}

No prior work integrates these three techniques into a systematic design methodology that spans procedures to verified implementation.

\textbf{Why will it succeed?} Three factors ensure practical feasibility:

\begin{enumerate}
\item Nuclear procedures already decompose operations into discrete phases with explicit transition criteria; we formalize existing structure rather than impose artificial abstractions.
\item Mode-level verification avoids the state explosion that makes global hybrid system analysis intractable, bounding computational complexity by verifying each mode against local contracts.
\item The Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility.
\end{enumerate}
We demonstrate feasibility on production control systems with realistic reactor models, not merely in principle. Figure~\ref{fig:hybrid_automaton} illustrates this hybrid structure for a simplified reactor startup sequence.
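The entry/exit/safety contracts of the first innovation can be sketched as data (a simplified illustration; the predicate names and thresholds are invented, and a real obligation would be discharged by reachability or certificate-based proof rather than by checking sampled traces):

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ModeContract:
    """Mode-local obligation: the discrete layer guarantees `entry` on activation;
    the continuous controller must maintain `safety` and establish `exit`."""
    entry: Callable[[float], bool]
    exit: Callable[[float], bool]
    safety: Callable[[float], bool]

    def holds_on(self, trace: Sequence[float]) -> bool:
        # Sampled stand-in for the real proof obligation on closed-loop behavior:
        # entry holds initially, safety throughout, exit at the end.
        return (self.entry(trace[0])
                and all(self.safety(x) for x in trace)
                and self.exit(trace[-1]))

# Illustrative heat-up contract: enter below 100 C, never reach 350 C, exit at 280 C or above.
heatup_contract = ModeContract(
    entry=lambda t: t < 100.0,
    exit=lambda t: t >= 280.0,
    safety=lambda t: t < 350.0,
)
```

Verifying each mode against such a local contract is what keeps the analysis bounded: the discrete layer only needs the contract, never the mode's internal dynamics.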
\subsection{System Requirements, Specifications, and Discrete Controllers}
The hybrid automaton formalism defined above provides a mathematical framework for describing discrete modes, continuous dynamics, guards, and invariants. This subsection shows how to construct such systems from existing operational knowledge rather than imposing artificial abstractions. Nuclear operations already possess a natural hybrid structure that maps directly to this automaton formalism.
Human control of nuclear power divides into three scopes: strategic, operational, and tactical. Strategic control represents high-level, long-term decision making for the plant. Objectives at this level are complex and economic in scale, such as managing labor needs and supply chains to optimize scheduled maintenance and downtime. These decisions span months or years. The lowest level, the tactical level, controls individual components: pumps, turbines, and chemistry. Nuclear power plants today have already automated tactical control to a degree; such automation is generally considered ``automatic control.''
Tactical control objectives include maintaining pressurizer level, maintaining core temperature, or adjusting reactivity with a chemical shim.
The operational control scope links these two extremes, representing the primary responsibility of human operators today. Operational control takes strategic objectives and implements tactical control sequences to achieve them, bridging high-level goals with low-level execution.
Consider an example: a strategic goal may be to
perform refueling at a certain time, while the tactical level of the plant is
This structure reveals why the operational and
tactical levels fundamentally form a hybrid controller. The tactical level represents continuous plant evolution according to the control input and control law; the operational level represents discrete state evolution that determines which tactical control law applies. This operational level becomes the target for autonomous control.

This operational control level is the main reason nuclear control today requires human operators. The hybrid nature of this control system makes proving controller performance against strategic requirements difficult; unified infrastructure for building and verifying hybrid systems does not currently exist. Humans fill this layer because their general intelligence provides a safe way to manage the system's hybrid nature. These operators use prescriptive operating
manuals to perform their control with strict procedures on what control to
implement at a given time. These procedures provide the key to the operational
control scope.
Constructing a HAHACS leverages two key observations about current practice. First, operational scope control is effectively discrete control. Second, operating procedures describe the implementation rules before construction. A HAHACS's intended behavior must be completely described before construction begins. Requirements define the behavior of any control system: statements about what
the system must do, must not do, and under what conditions. For nuclear systems,
these requirements derive from multiple sources including regulatory mandates,
design basis analyses, and operating procedures. The challenge is formalizing
continuous state space: $p_i: \mathcal{X} \rightarrow \{\text{true}, \text{false}\}$, for example ``temperature exceeds 315°C'' or ``pressurizer level is between 30\% and 60\%.'' We do not impose this discrete abstraction artificially. Operating procedures for nuclear systems already define go/no-go conditions as discrete predicates. Design basis safety analysis determined these thresholds. Decades of operational experience have validated them. Our methodology assumes this domain knowledge exists and provides a framework to formalize it. The approach is feasible for nuclear applications because generations of nuclear engineers have already completed the hard work of defining safe operating boundaries.
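In code, such predicates are simply Boolean functions of the continuous state; a hypothetical sketch (field names and thresholds are illustrative, not taken from any plant's design basis):

```python
# Discrete abstraction of the continuous state through go/no-go predicates.
def temp_above_315(x: dict) -> bool:
    return x["core_temp_C"] > 315.0

def przr_level_in_band(x: dict) -> bool:
    return 30.0 <= x["pressurizer_level_pct"] <= 60.0

def abstract(x: dict) -> tuple:
    """Map a continuous state to the Boolean vector the discrete layer observes."""
    return (temp_above_315(x), przr_level_in_band(x))
```

The discrete controller never sees raw continuous values, only this Boolean vector, which is what makes its synthesis a finite-state problem.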
Linear temporal logic (LTL) is particularly well-suited for

eventually reaches operating temperature''), and response properties (``if
coolant pressure drops, the system initiates shutdown within bounded time'').
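In symbols, these three property classes might be written as follows (illustrative formulas with assumed atomic propositions; the bounded-response form strictly requires a metric extension of LTL such as MTL):

```latex
\begin{align*}
  \text{safety:}   &\quad \square\,(P_{\mathrm{cool}} \ge P_{\min}) \\
  \text{liveness:} &\quad \lozenge\,(T \ge T_{\mathrm{op}}) \\
  \text{response:} &\quad \square\,\bigl(P_{\mathrm{cool}} < P_{\min}
                      \rightarrow \lozenge_{\le\tau}\,\mathit{shutdown}\bigr)
\end{align*}
```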
We will use FRET (Formal Requirements Elicitation Tool) to build these temporal logic statements. NASA developed FRET for high-assurance timed systems. FRET provides an intermediate language between temporal logic and natural language, enabling rigid definitions of temporal behavior through syntax accessible to engineers without formal methods expertise. This accessibility is crucial for industrial feasibility: reducing required expert knowledge makes these tools adoptable by the current workforce.
FRET's key feature is its ability to start with logically imprecise
statements and consecutively refine them into well-posed specifications. We can

implementation. Second, it clearly demonstrates where natural language documents
are insufficient. Human operators may still use these procedures, making any
room for interpretation a weakness that must be addressed.
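The intended refinement pipeline might look as follows; this is a sketch in the spirit of FRETish rather than exact FRET grammar, and names such as `ShutdownController` are hypothetical:

```
Natural language:  "If coolant pressure drops below the minimum, the system
                    initiates shutdown within 5 seconds."

FRETish (sketch):  when coolant_pressure < P_min the ShutdownController
                   shall within 5 seconds satisfy shutdown_initiated

Temporal logic:    G (coolant_pressure_low -> F[0,5] shutdown_initiated)
```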
FRET has been successfully applied to spacecraft control systems, autonomous vehicle requirements, and medical device specifications. NASA used FRET for the Lunar Gateway project, formalizing flight software requirements that were previously specified only in natural language. The Defense Advanced Research Projects Agency (DARPA) employed FRET in the Assured Autonomy program to verify autonomous systems requirements. These applications demonstrate FRET's maturity for safety-critical domains. Nuclear control procedures present an ideal use case: they are already structured, detailed, and written to minimize ambiguity---precisely the characteristics that enable successful formalization.
%%% NOTES (Section 2):
% - Add concrete FRET example showing requirement → FRETish → LTL
% - Discuss hysteresis and how to prevent mode chattering near boundaries
\subsection{Discrete Controller Synthesis}
Temporal logic specifications define what the system must do. Reactive synthesis determines how to implement those requirements by automating the creation of reactive programs from temporal logic. A reactive program takes an input for a given state and produces
an output. Our systems fit this model: the current discrete state and
status of guard conditions form the input, while the next discrete
state forms the output.
Reactive synthesis solves a fundamental problem: given an LTL formula $\varphi$ specifying desired system behavior, automatically construct a finite-state machine (strategy) that produces outputs in response to environment inputs such that all resulting execution traces satisfy $\varphi$. If such a strategy exists, the specification is \emph{realizable}. The synthesis algorithm either produces a correct-by-construction controller or reports that no such controller exists. This realizability check provides immediate value: an
unrealizable specification indicates conflicting or impossible requirements in
the original procedures, catching errors before implementation.
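To illustrate the synthesis object at toy scale (a hand-written strategy standing in for tool output; a real workflow would obtain the machine from a synthesis tool), consider the specification that every request is granted at the next step and that no grant occurs without a request in the preceding step:

```python
from itertools import product

def make_controller():
    """One-bit Mealy strategy: grant exactly when a request arrived last step."""
    state = {"pending": False}
    def step(request: bool) -> bool:
        grant = state["pending"]
        state["pending"] = request
        return grant
    return step

def satisfies_spec(inputs):
    """Bounded check of both conjuncts of the specification on one input trace."""
    ctrl = make_controller()
    outputs = [ctrl(r) for r in inputs]
    every_request_granted_next = all(
        not inputs[i] or outputs[i + 1] for i in range(len(inputs) - 1))
    no_spurious_grant = all(
        not outputs[i] or (i > 0 and inputs[i - 1]) for i in range(len(inputs)))
    return every_request_granted_next and no_spurious_grant

# Exhaustive bounded check over every environment behavior of length 6.
assert all(satisfies_spec(list(t)) for t in product([False, True], repeat=6))
```

If the specification instead demanded the grant in the same step as the request while the controller must commit before seeing the input, no strategy would exist and a synthesis tool would report the specification unrealizable.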

changes between modes trace back through specifications to requirements, establishing traceable
behavior. Second, it replaces probabilistic human judgment with deterministic guarantees. Human operators cannot eliminate error from discrete control decisions; humans are intrinsically fallible. By defining system behavior using temporal logics and synthesizing the controller using deterministic
algorithms, strategic decisions always follow operating procedures exactly—no exceptions, no deviations, no human factors.
The synthesized automaton translates directly to executable code through standard compilation techniques. Each discrete state maps to a control mode, guard conditions map to conditional statements, and the transition function defines the control flow. This compilation process preserves the formal guarantees: the implemented code is correct by construction because the automaton it derives from was synthesized to satisfy the temporal logic specifications.
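A sketch of that compilation scheme (mode names, guard names, and the transition table are invented for illustration; a real pipeline would emit this from the synthesized automaton):

```python
# Each discrete state becomes a mode constant, each guard a conditional,
# and the synthesized transition function becomes a dispatch table.
HEATUP, PRESSURIZE, POWER_ASCENT = "HEATUP", "PRESSURIZE", "POWER_ASCENT"

TRANSITIONS = {
    (HEATUP, "temp_at_target"): PRESSURIZE,
    (PRESSURIZE, "pressure_at_target"): POWER_ASCENT,
}

def next_mode(mode: str, guards: dict) -> str:
    """One evaluation of the compiled transition function."""
    for (src, guard_name), dst in TRANSITIONS.items():
        if src == mode and guards.get(guard_name, False):
            return dst
    return mode  # no enabled guard: remain in the current mode
```

Because the table is data rather than hand-written branching, it can be generated mechanically from the automaton, preserving the traceability the paragraph above describes.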
Reactive synthesis has proven successful in robotics, avionics, and industrial control. Recent applications include synthesizing robot motion planners from natural language specifications, generating flight control software for unmanned aerial vehicles, and creating verified controllers for automotive systems. These successes demonstrate that reactive synthesis scales beyond toy problems to real-world safety-critical applications.
%%% NOTES (Section 3):
% - Mention computational complexity of synthesis (doubly exponential worst case)
\subsection{Continuous Control Modes}
Reactive synthesis produces a provably correct discrete controller from operating procedures: an automaton that determines when to switch between modes. But hybrid control requires more than correct mode switching; the continuous dynamics executing within each discrete mode must also be verified to ensure the complete system behaves correctly.
This subsection describes the continuous control modes that execute within each discrete state and explains how we verify that they satisfy the requirements imposed by the discrete layer. The verification approach depends on the control objective: we classify modes into three types (transitory, stabilizing, and expulsory), each requiring different verification tools matched to its distinct purpose.

This methodology's scope requires clarification: this work verifies continuous controllers but does not synthesize them. The distinction parallels model checking in software verification, which verifies whether a given implementation satisfies its specification without prescribing how to write the software. We assume engineers design continuous controllers using standard control theory techniques; our contribution is the verification framework that confirms candidate controllers compose correctly with the discrete layer to produce a safe hybrid system.

The operational control scope defines go/no-go decisions that determine what kind of continuous control to implement.
The entry and exit conditions of a mode define the sets its controller must respect: $q_i$ must drive the system from any state in $\mathcal{X}_{entry,i}$ to some state in $\mathcal{X}_{exit,i}$ while remaining within $\mathcal{X}_{safe,i}$.

We classify continuous controllers into three types based on their objectives: transitory, stabilizing, and expulsory. Each type requires distinct verification tools matched to its control objective. Transitory modes drive the plant between operating conditions, stabilizing modes maintain the plant within operating regions, and expulsory modes ensure safety under degraded conditions. The following subsections detail each mode type and its verification approach.

%%% NOTES (Section 4):
% - Add figure showing the relationship between entry/exit/safety sets

\subsubsection{Transitory Modes}

Transitory modes move the plant from one discrete operating condition to another. Their purpose is to execute transitions: start from entry conditions, reach exit conditions, and maintain safety invariants throughout. Examples include power ramp-up sequences, cooldown procedures, and load-following maneuvers.

The control objective for a transitory mode can be stated formally. Given entry conditions $\mathcal{X}_{entry}$, exit conditions $\mathcal{X}_{exit}$, safety invariant $\mathcal{X}_{safe}$, and closed-loop dynamics $\dot{x} = f(x, u(x))$, the controller must satisfy:
\[
\forall x_0 \in \mathcal{X}_{entry}: \exists T > 0: x(T) \in \mathcal{X}_{exit}
\land \forall t \in [0,T]: x(t) \in \mathcal{X}_{safe}
\]

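To make the objective concrete, the sketch below spot-checks this reach-avoid property on an invented scalar system by simulation. This is illustrative sampling, not the formal reachability analysis a real verification effort requires; the sets, gain, and target are assumptions of the example.

```python
# Illustrative check of the transitory-mode objective on a toy scalar
# system. Simulation-based spot checking only: the entry/exit/safety sets
# and the proportional control law are invented for this sketch.

def simulate(x0, target=0.95, dt=0.01, steps=2000):
    """Euler-integrate x' = u(x) under a simple proportional controller."""
    traj = [x0]
    x = x0
    for _ in range(steps):
        u = 2.0 * (target - x)  # proportional control law
        x = x + dt * u
        traj.append(x)
    return traj

def satisfies_objective(traj, exit_lo=0.9, exit_hi=1.0,
                        safe_lo=-0.1, safe_hi=1.1):
    # Safety invariant: every visited state stays inside X_safe.
    if not all(safe_lo <= x <= safe_hi for x in traj):
        return False
    # Liveness: some finite time T with x(T) in X_exit.
    return any(exit_lo <= x <= exit_hi for x in traj)

# Spot-check a grid of entry states in X_entry = [0, 0.1].
assert all(satisfies_objective(simulate(x0 / 100.0)) for x0 in range(0, 11))
```

A reachability tool replaces this finite sampling with a proof over the entire entry set.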
Reachability analysis discharges this reach-avoid obligation, with tool choice appropriate to the fidelity of the reactor models available.

\subsubsection{Stabilizing Modes}

Transitory modes drive the system toward exit conditions; stabilizing modes do the opposite, maintaining the system within a desired operating region indefinitely. Examples include steady-state power operation, hot standby, and load-following at constant power level. This different control objective requires a different verification approach: reachability analysis answers ``can the system reach a target?'' while stabilizing modes must instead prove ``does the system stay within bounds?'' Barrier certificates provide the appropriate tool.

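A minimal numerical sketch of the barrier idea, using an invented one-dimensional system: for dynamics $\dot{x} = -x$ and candidate barrier $B(x) = x^2 - 1$, the safe set is $\{x : B(x) \le 0\}$, and invariance follows if the Lie derivative of $B$ along the dynamics is nonpositive. The grid check below is a sanity test standing in for a symbolic (e.g.\ sum-of-squares) proof.

```python
# Toy barrier-certificate check, invented for illustration: dynamics
# x' = f(x) = -x, candidate barrier B(x) = x^2 - 1, safe set {B <= 0}.

def f(x):
    return -x            # stable linear dynamics

def B(x):
    return x * x - 1.0   # candidate barrier: zero on the safe-set boundary

def dB_dt(x):
    return 2.0 * x * f(x)  # Lie derivative of B along f: here -2 x^2

# Barrier condition on the boundary {B = 0}: dB/dt <= 0 there, so no
# trajectory can cross outward.
assert all(dB_dt(x) <= 0.0 for x in (-1.0, 1.0))

# Stronger (conservative) check over the whole region: dB/dt <= 0
# everywhere, so the sublevel set {B <= 0} is invariant.
grid = [i / 100.0 for i in range(-100, 101)]
assert all(dB_dt(x) <= 0.0 for x in grid)
```

A real certificate replaces the grid with a symbolic argument valid at every point, which is what makes the invariance claim a proof rather than a test.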
Barrier certificates analyze the system dynamics to determine whether any flux across a given boundary exists, that is, whether any trajectory can leave the bounded region. This property is exactly what defines the validity of a stabilizing controller.
\subsubsection{Expulsory Modes}

Transitory and stabilizing modes handle nominal operations under the assumption that plant dynamics match the design model. When the plant deviates from expected behavior through component failures, sensor degradation, or unanticipated disturbances, expulsory modes take over to ensure safety. These continuous controllers prioritize robustness over optimality. The control objective shifts from reaching targets or maintaining regions to driving the plant to a safe shutdown state from potentially anywhere in the state space, under degraded or uncertain dynamics. Examples include emergency core cooling, reactor SCRAM sequences, and controlled depressurization procedures.

Proving controller correctness through reachability and barrier certificates makes detecting physical failures straightforward. The controller cannot be incorrect for the nominal plant model, so when an invariant is violated, the plant dynamics must have changed. The HAHACS identifies faults when continuous controllers violate discrete boundary conditions, a direct consequence of verified nominal control modes: unexpected behavior implies off-nominal conditions.

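The detection idea can be sketched as a small runtime monitor. The invariant bounds, sensor names, and mode labels below are hypothetical, not drawn from an actual verified mode.

```python
# Sketch of the fault-detection idea: because each nominal mode is
# verified, a violated mode invariant is itself evidence that the plant
# has left the nominal model. Bounds and readings are hypothetical.

def in_mode_invariant(x: dict) -> bool:
    """Verified safe envelope for the current mode (illustrative bounds)."""
    return 280.0 <= x["T_cool"] <= 320.0 and x["power"] <= 1.02

def supervise(readings):
    """Switch to an expulsory (safe-shutdown) mode on invariant violation."""
    for x in readings:
        if not in_mode_invariant(x):
            return "expulsory_shutdown"  # off-nominal: dynamics have changed
    return "nominal"

nominal = [{"T_cool": 300.0, "power": 0.8}, {"T_cool": 305.0, "power": 0.9}]
faulty = nominal + [{"T_cool": 331.0, "power": 0.9}]  # cooling degraded
assert supervise(nominal) == "nominal"
assert supervise(faulty) == "expulsory_shutdown"
```

No fault model is needed: the monitor only re-checks conditions the verification already established for nominal behavior.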
Safety margins will matter more than performance during emergency shutdown.

\subsection{Industrial Implementation}

The complete methodology (procedure formalization, discrete synthesis, and continuous verification across three mode types) provides a theoretical framework for hybrid control synthesis. Demonstrating practical feasibility requires validation on realistic systems using industrial-grade hardware, advancing from analytical concepts (TRL 2--3) to laboratory demonstration (TRL 5).

This research will leverage the University of Pittsburgh Cyber Energy Center's partnership with Emerson to implement and test the HAHACS methodology on production control equipment. Emerson's Ovation distributed control system is widely deployed.

The partnership offers the opportunity of transferring technology directly to industry with a direct collaboration in this research, while getting an excellent perspective of how our research outcomes can align best with customer needs.

This section establishes the research methodology by answering two critical Heilmeier questions:

\textbf{What is new in this research?} This work integrates reactive synthesis, reachability analysis, and barrier certificates into a compositional architecture for hybrid control synthesis. Three innovations distinguish this approach: using discrete synthesis to define verification contracts (inverting traditional global analysis), classifying continuous modes by objective to select appropriate verification tools, and leveraging existing procedural structure to avoid intractable state explosion.

\textbf{Why will this approach succeed?} Three factors ensure practical feasibility. First, nuclear procedures already decompose operations into discrete phases with explicit transition criteria; the approach formalizes existing structure rather than imposing artificial abstractions. Second, mode-level verification bounds each verification problem locally, avoiding the state explosion that makes global hybrid system analysis intractable. Third, the Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility.

The methodology is now complete: procedure formalization, discrete synthesis, continuous verification across three mode types, and hardware implementation. What will be done and why it will work have been established. One critical question remains: \textit{how will success be measured?} Section 4 addresses this through Technology Readiness Level advancement, demonstrating both theoretical correctness and practical implementability as the system progresses from component validation (TRL 3) through integrated simulation (TRL 4) to hardware-in-the-loop testing (TRL 5).

%%% NOTES (Section 5):
% - Get specific details on ARCADE interface from Emerson collaboration

\section{Metrics for Success}

\textbf{How do we measure success?} This research advances through Technology Readiness Levels, progressing from fundamental concepts (TRL 2--3) to validated prototype demonstration (TRL 5).

This work begins at TRL 2--3 and aims to reach TRL 5, where system components operate successfully in a relevant laboratory environment. TRL advancement provides the most appropriate success metric for this work, bridging the gap between academic proof-of-concept and practical deployment. This section explains why, then defines specific criteria for each level from TRL 3 through TRL 5.

Technology Readiness Levels provide the ideal success metric by explicitly measuring the gap between academic proof-of-concept and practical deployment, precisely the gap this work bridges. Academic metrics like papers published or theorems proved fail to capture practical feasibility; empirical metrics like simulation accuracy or computational speed fail to demonstrate theoretical rigor. Only TRLs measure both simultaneously.

Advancing from TRL 3 to TRL 5 requires maintaining theoretical rigor while progressively demonstrating practical feasibility. Formal verification must remain valid as the system moves from individual components to integrated hardware testing.

The nuclear industry requires extremely high assurance before deploying new control technologies. Demonstrating theoretical correctness alone is insufficient for adoption; conversely, showing empirical performance without formal guarantees fails to meet regulatory requirements. TRLs capture this dual requirement naturally. Each level represents both increased practical maturity and sustained theoretical validity, while TRL assessment forces explicit identification of remaining barriers to deployment. The nuclear industry already uses TRLs for technology assessment, making this metric directly relevant to potential adopters. Reaching TRL 5 provides a clear answer to industry questions.

Demonstration in a relevant laboratory environment establishes both theoretical validity and practical feasibility, proving the methodology produces verified controllers implementable with current technology.

This section establishes success criteria by answering the Heilmeier question \textbf{How do we measure success?} Technology Readiness Level advancement from 2--3 to 5 provides the answer: we measure success by demonstrating both theoretical correctness and practical feasibility through progressively integrated validation. TRL 3 proves component-level correctness, TRL 4 demonstrates system-level integration in simulation, and TRL 5 validates hardware implementation in a relevant environment. Achieving TRL 5 proves the methodology produces verified controllers implementable with current technology.

However, reaching TRL 5 depends on several critical assumptions. If these assumptions prove false, the research could stall at lower readiness levels despite sound methodology. Section 5 addresses the complementary Heilmeier question, \textbf{What could prevent success?}, by identifying primary risks, their early warning indicators, and contingency plans that preserve research value even if core assumptions fail.

\section{Risks and Contingencies}

\textbf{What could prevent success?} Every research plan rests on assumptions, and when those assumptions prove false, research must adapt. Four primary risks could prevent successful completion: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, completeness of procedure formalization, and hardware-in-the-loop integration. Each risk carries associated early warning indicators and contingency plans that preserve research value even when core assumptions prove false. The staged project structure ensures that partial success yields publishable results while clearly identifying remaining barriers to deployment.

\subsection{Computational Tractability of Synthesis}

The first major assumption is that formalized startup procedures will yield automata small enough for efficient synthesis and verification. Reactive synthesis scales doubly exponentially with specification complexity in the worst case, which means temporal logic specifications derived from complete startup procedures may produce automata with thousands of states. Such large automata would require synthesis times exceeding days or weeks, preventing demonstration of the complete methodology within project timelines.
Reachability analysis failing to converge within reasonable time bounds would show that continuous mode verification cannot be completed with available computational resources.

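The worst-case growth is easy to state numerically. The sketch below evaluates the standard doubly exponential bound for small specification sizes; these are worst-case figures only, which well-structured procedural specifications often avoid in practice.

```python
# Why specification size dominates: LTL reactive synthesis is doubly
# exponential in the worst case, so automaton size bounds explode for
# even modest formula sizes n. Worst-case bounds only.

def worst_case_states(n: int) -> int:
    """Worst-case automaton size bound, 2^(2^n), for formula size n."""
    return 2 ** (2 ** n)

assert worst_case_states(3) == 256
assert worst_case_states(5) == 4294967296  # already billions of states
```

This is why the contingency plan below trims the specification rather than the verification tooling: the exponent, not the solver, is the lever.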
If computational tractability becomes the limiting factor, we reduce scope to a minimal viable startup sequence covering only cold shutdown to criticality to low-power hold, omitting power ascension and other operational phases. The reduced sequence still demonstrates the complete methodology (procedure formalization, discrete synthesis, continuous verification, and hardware implementation) while reducing computational burden. The research contribution remains valid: we prove that formal hybrid control synthesis is achievable for safety-critical nuclear applications and clearly identify which operational complexities exceed current computational capabilities. We document the limitation as a scaling constraint requiring future work, not a methodological failure.

\subsection{Discrete-Continuous Interface Formalization}

Computational tractability addresses whether synthesis can complete within practical time bounds. The second risk is more fundamental: whether boolean guard conditions in temporal logic can map cleanly to the continuous state boundaries required for mode transitions. This interface represents the central challenge of hybrid systems: relating discrete switching logic to continuous dynamics.
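One way this mapping can be made concrete is sketched below: a hypothetical atomic proposition is realized as a continuous threshold with a hysteresis band, so measurement noise near the boundary cannot chatter the discrete mode. The threshold and band width are invented for illustration.

```python
# Sketch of the interface risk: an atomic proposition used by the
# discrete layer must be realized as a region of continuous state space.
# Threshold and hysteresis values are hypothetical.

def make_guard(threshold: float, hysteresis: float = 0.5):
    """Realize the proposition 'temperature_reached' as a continuous
    boundary with hysteresis, preventing mode chatter near the edge."""
    state = {"latched": False}

    def guard(T: float) -> bool:
        if state["latched"]:
            # Already latched: release only below the hysteresis band.
            state["latched"] = T > threshold - hysteresis
        else:
            # Not latched: trip only at or above the threshold proper.
            state["latched"] = T >= threshold
        return state["latched"]

    return guard

temperature_reached = make_guard(290.0)
assert temperature_reached(289.0) is False
assert temperature_reached(290.2) is True   # crosses the boundary
assert temperature_reached(289.8) is True   # inside the band: stays latched
assert temperature_reached(289.0) is False  # below the band: released
```

The hysteresis band is exactly the kind of continuous detail that a purely boolean specification leaves implicit, which is why this interface is treated as a first-class risk.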
\subsection{Procedure Formalization Completeness}

The previous two risks concern verification infrastructure: computational scaling and mathematical formalization. The third assumption addresses the source material itself: whether existing startup procedures contain sufficient detail and clarity for translation into temporal logic specifications. Nuclear operating procedures, while extensively detailed, were written for human operators who bring contextual understanding and adaptive reasoning to their work.
Such extensions will address industry-wide practices rather than specific quirks.

This section identifies barriers to success by answering the Heilmeier question \textbf{What could prevent success?} Four primary risks threaten project completion: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, completeness of procedure formalization, and hardware-in-the-loop integration challenges. Each risk has identifiable early warning indicators and viable mitigation strategies. The staged project structure ensures that partial success yields publishable results while clearly identifying remaining barriers to deployment; even when core assumptions prove invalid, the research produces valuable contributions.

The technical research plan is now complete: what will be done (Section 3), how success will be measured (Section 4), and what might prevent it (this section) have been established. One critical Heilmeier question remains: \textbf{Who cares? Why now? What difference will it make?} Section 6 answers these questions by connecting this technical methodology to urgent economic and infrastructure challenges facing the nuclear industry and broader energy sector.

\section{Broader Impacts}

\textbf{Who cares? Why now? What difference will it make?} Three stakeholder groups face the same economic constraint: the nuclear industry, datacenter operators, and clean energy advocates all confront high operating costs driven by staffing requirements. Exponentially growing AI infrastructure demands have made this constraint urgent.

Nuclear power presents both a compelling application domain and an urgent economic challenge. Recent interest in powering artificial intelligence datacenters with small modular reactors (SMRs) illustrates both.
Deploying SMRs at datacenter sites minimizes transmission losses and eliminates emissions. However, nuclear power economics at this scale demand careful attention to operating costs.

The U.S. Energy Information Administration's Annual Energy Outlook 2022 projects advanced nuclear power entering service in 2027 will cost \$88.24 per megawatt-hour~\cite{eia_lcoe_2022}. Datacenter electricity demand is projected to reach 1,050 terawatt-hours annually by 2030~\cite{eesi_datacenter_2024}. Nuclear power supplying this demand would generate total annual costs exceeding \$92 billion. Operations and maintenance represents a substantial component. The EIA estimates that fixed O\&M costs alone account for \$16.15 per megawatt-hour, with additional variable O\&M costs embedded in fuel and operating expenses~\cite{eia_lcoe_2022}. Combined, O\&M-related costs represent approximately 23--30\% of the total levelized cost of electricity, translating to \$21--28 billion annually for projected datacenter demand.

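These figures follow from simple arithmetic over the cited projections; as a sanity check (values rounded to one decimal place):
\begin{align*}
1{,}050\ \text{TWh/yr} \times \$88.24/\text{MWh} &\approx \$92.7\ \text{billion annually (total)}\\
1{,}050\ \text{TWh/yr} \times \$16.15/\text{MWh} &\approx \$17.0\ \text{billion annually (fixed O\&M)}\\
(0.23\text{--}0.30) \times \$92.7\ \text{billion} &\approx \$21.3\text{--}27.8\ \text{billion annually (O\&M-related)}
\end{align*}
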
\textbf{What difference will it make?} This research directly addresses the \$21--28 billion annual O\&M cost challenge through high-assurance autonomous control, making small modular reactors economically viable for datacenter power.

Current nuclear operations require full control room staffing for each reactor—whether large conventional units or small modular designs. For large reactors producing 1,000+ MW, staffing costs spread across substantial output. Small modular reactors producing 50--300 MW face the same staffing requirements with far lower output, making per-megawatt costs prohibitive. These staffing requirements drive the economic challenge that threatens SMR deployment for datacenter applications. Synthesizing provably correct hybrid controllers from formal specifications automates routine operational sequences that currently require constant human oversight. This enables a fundamental shift from direct operator control to supervisory monitoring, where operators oversee multiple autonomous

establish both the technical feasibility and regulatory pathway for broader adoption across critical infrastructure.

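A simple illustration of the per-megawatt effect (using a symbolic staffing cost rather than a figure from the cited sources): if one fully staffed control room costs $C$ per year, the staffing burden per megawatt is
\[
\frac{C}{1{,}000\ \text{MW}} \quad\text{(large reactor)} \qquad\text{versus}\qquad \frac{C}{100\ \text{MW}} \quad\text{(representative SMR)},
\]
a tenfold difference for an identical staffing requirement.
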
This section establishes impact by answering three critical Heilmeier questions:

\textbf{Who cares?} The nuclear industry faces an economic crisis for small modular reactors due to per-megawatt staffing costs. Datacenter operators need hundreds of megawatts of continuous clean power for AI infrastructure. Any organization operating staffing-intensive safety-critical systems faces similar economic pressures.

\textbf{Why now?} Two forces converge to make this research urgent. First, exponentially growing AI infrastructure demands have created an immediate need for economical nuclear power at datacenter scale. Second, formal methods tools have matured to the point where compositional hybrid verification is computationally achievable—what was theoretically possible but practically intractable a decade ago is now feasible.

\textbf{What difference will it make?} This research addresses a \$21--28 billion annual cost barrier by enabling autonomous control with mathematical safety guarantees. Beyond immediate economic impact, the methodology establishes a generalizable framework for safety-critical autonomous systems across critical infrastructure.

The complete research plan now spans technical approach, success metrics, risk mitigation, and broader impact. One final Heilmeier question remains: \textbf{How long will it take?} Section 8 provides a structured 24-month research plan progressing through milestones tied to Technology Readiness Level advancement, demonstrating the proposed work is achievable within a doctoral timeline.

\section{Schedule, Milestones, and Deliverables}
\textbf{How long will it take?} This research will be conducted over six trimesters (24 months) of full-time effort following the proposal defense in Spring 2026. The University of Pittsburgh Cyber Energy Center and NRC Fellowship provide all computational and experimental resources. The work progresses sequentially through three main research thrusts, culminating in integrated demonstration and validation.

The first trimester (Spring 2026) focuses on Thrust 1, translating startup procedures into formal temporal logic specifications using FRET. This
|||||||