Editorial pass: three-level copy-edit of thesis proposal
TACTICAL (sentence-level):
- Applied Gopen's Sense of Structure principles
- Improved topic-stress positioning and topic strings
- Strengthened verb choices and active voice usage
- Broke long complex sentences into clearer sequences
- Enhanced issue-point structure

OPERATIONAL (paragraph/section):
- Improved transitions between paragraphs and sections
- Strengthened coherence within subsections
- Made connections between ideas more explicit
- Enhanced flow from State of Art → Research Approach → Metrics → Risks → Impacts

STRATEGIC (document-level):
- Strengthened Heilmeier catechism alignment
- Made each section's assigned questions more explicit
- Improved summary paragraphs to clearly answer Heilmeier questions
- Enhanced linking between sections to maintain global coherence

Changes preserve technical content while improving clarity and impact.
This commit is contained in:
parent
b6fa9b84c1
commit
353247847b
@@ -2,20 +2,20 @@
This research develops autonomous control systems with mathematical guarantees of safe and correct behavior.
% INTRODUCTORY PARAGRAPH Hook
Today's nuclear reactors depend on human operators who control them through extensive training, following detailed written procedures and switching between control objectives based on plant conditions.
Human operators control today's nuclear reactors through extensive training. They follow detailed written procedures and switch between control objectives based on plant conditions.
% Gap
Small modular reactors face a fundamental economic challenge: per-megawatt staffing costs significantly exceed those of conventional plants, threatening economic viability. Autonomous control systems could manage complex operational sequences without constant supervision—but only if they provide assurance equal to or exceeding human-operated systems.
Small modular reactors face a fundamental economic challenge: their per-megawatt staffing costs significantly exceed those of conventional plants. This threatens their economic viability. Autonomous control systems could manage complex operational sequences without constant supervision—but only with assurance equal to or exceeding that of human-operated systems.
% APPROACH PARAGRAPH Solution
This research combines formal methods from computer science with control theory to produce hybrid control systems correct by construction.
This research produces hybrid control systems correct by construction. It combines formal methods from computer science with control theory.
% Rationale
Human operators already work this way: discrete logic switches between continuous control modes. Formal methods generate provably correct switching logic but fail when continuous dynamics govern transitions. Control theory verifies continuous behavior but cannot prove discrete switching correctness. Both approaches must work together to achieve end-to-end correctness.
Human operators already work this way: discrete logic switches between continuous control modes. Formal methods alone fail—they generate provably correct switching logic but cannot handle continuous dynamics governing transitions. Control theory alone also fails—it verifies continuous behavior but cannot prove discrete switching correctness. End-to-end correctness requires both approaches working together.
% Hypothesis and Technical Approach
Three stages bridge this gap. First, NASA's Formal Requirements Elicitation Tool (FRET) translates written operating procedures into temporal logic specifications, structuring requirements by scope, condition, component, timing, and response. Realizability checking then exposes conflicts and ambiguities before implementation begins. Second, reactive synthesis generates deterministic automata that are provably correct by construction. Third, reachability analysis verifies that continuous controllers—designed using standard control theory—satisfy the requirements that each discrete mode imposes.
Three stages bridge this gap. First, NASA's Formal Requirements Elicitation Tool (FRET) translates written operating procedures into temporal logic specifications. It structures requirements by scope, condition, component, timing, and response. Realizability checking then exposes conflicts and ambiguities before implementation begins. Second, reactive synthesis generates deterministic automata provably correct by construction. Third, reachability analysis verifies that continuous controllers satisfy the requirements each discrete mode imposes. The continuous controllers themselves are designed using standard control theory techniques.
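To make the first stage concrete, here is an illustration of the FRETish requirement structure and the kind of temporal-logic formula it elaborates to. The requirement, mode name, and signal names are hypothetical, invented for this sketch, not drawn from actual operating procedures:

```latex
% Hypothetical requirement in FRET's scope/condition/component/timing/response
% structure (all names illustrative):
%   In startup mode, when flux_rate exceeds limit, the rod_controller shall
%   within 10 seconds satisfy rod_motion_halted.
% A metric-temporal-logic elaboration of this shape:
\[
  \mathbf{G}\bigl( (\mathit{startup} \wedge \mathit{flux\_rate} > \mathit{limit})
      \rightarrow \mathbf{F}_{[0,10]}\,\mathit{rod\_motion\_halted} \bigr)
\]
```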
Continuous modes classify by control objective. Transitory modes drive the plant between conditions. Stabilizing modes maintain operation within regions. Expulsory modes ensure safety under failures. Barrier certificates and assume-guarantee contracts prove safe mode transitions, enabling local verification without global trajectory analysis. The methodology will demonstrate on an Emerson Ovation control system.
Continuous modes fall into three types according to control objective. Transitory modes drive the plant between conditions. Stabilizing modes maintain operation within regions. Expulsory modes ensure safety under failures. Barrier certificates and assume-guarantee contracts prove safe mode transitions. This enables local verification without global trajectory analysis. The methodology will be demonstrated on an Emerson Ovation control system.
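The proof obligations behind these two tools can be sketched in their standard textbook forms. The sets and symbols below are illustrative placeholders, not the proposal's exact formulation: a barrier certificate $B$ separates reachable states from an unsafe set, and an assume-guarantee contract makes a mode transition safe when the exiting mode's guarantee implies the entering mode's assumption.

```latex
% Barrier-certificate conditions for dynamics \dot{x} = f(x), initial set X_0,
% unsafe set X_u (standard form; sets illustrative):
\[
  B(x) \le 0 \;\;\forall x \in X_0, \qquad
  B(x) > 0 \;\;\forall x \in X_u, \qquad
  \nabla B(x) \cdot f(x) \le 0 \;\;\text{whenever } B(x) = 0
\]
% Assume-guarantee condition for a discrete transition from mode i to mode j:
\[
  G_i^{\mathrm{exit}} \;\Rightarrow\; A_j^{\mathrm{entry}}
\]
```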
% Pay-off
This approach manages complex nuclear power operations autonomously while maintaining safety guarantees, directly addressing the economic constraints that threaten small modular reactor viability.
This approach manages complex nuclear power operations autonomously while maintaining safety guarantees. It directly addresses the economic constraints threatening small modular reactor viability.
% OUTCOMES PARAGRAPHS
This research, if successful, produces three concrete outcomes:
@@ -6,16 +6,16 @@ This research develops autonomous hybrid control systems with mathematical guara
% INTRODUCTORY PARAGRAPH Hook
Nuclear power plants require the highest levels of control system reliability. Control system failures risk economic losses, service interruptions, or radiological release.
% Known information
Today's nuclear plants depend on human operators who control them through extensive training, following detailed written procedures and strict regulatory requirements while switching between control modes based on plant conditions and procedural guidance.
Human operators control today's nuclear plants through extensive training. They follow detailed written procedures and strict regulatory requirements. They switch between control modes based on plant conditions and procedural guidance.
% Gap
This reliance on human operators prevents autonomous control and creates a fundamental economic challenge for next-generation reactor designs: per-megawatt staffing costs for small modular reactors far exceed those of conventional plants, threatening their economic viability. Autonomous control systems could manage complex operational sequences without constant human supervision—but only if they provide assurance equal to or exceeding that of human operators.
This reliance on human operators prevents autonomous control. It creates a fundamental economic challenge for next-generation reactor designs. Small modular reactors face per-megawatt staffing costs far exceeding those of conventional plants. This threatens their economic viability. Autonomous control systems could manage complex operational sequences without constant human supervision—but only with assurance equal to or exceeding that of human operators.
% APPROACH PARAGRAPH Solution
This research combines formal methods with control theory to produce hybrid control systems correct by construction.
This research produces hybrid control systems correct by construction. It combines formal methods with control theory.
% Rationale
Human operators already work this way: discrete logic switches between continuous control modes. Formal methods generate provably correct switching logic from written requirements, but they fail when continuous dynamics govern transitions. Control theory verifies continuous behavior but cannot prove discrete switching correctness. Both approaches must work together to achieve end-to-end correctness.
Human operators already work this way: discrete logic switches between continuous control modes. Formal methods generate provably correct switching logic from written requirements. They fail, however, when continuous dynamics govern transitions. Control theory verifies continuous behavior. It cannot, however, prove discrete switching correctness. End-to-end correctness requires both approaches working together.
% Hypothesis
This approach closes the gap through two steps. First, it synthesizes discrete mode transitions directly from written operating procedures. Second, it verifies continuous behavior between transitions. Operating procedures formalize into logical specifications that constrain continuous dynamics. The result: autonomous controllers provably free from design defects.
Two steps close this gap. First, discrete mode transitions are synthesized directly from written operating procedures. Second, continuous behavior between transitions is verified against discrete requirements. Operating procedures are formalized into logical specifications. These specifications constrain continuous dynamics. The result: autonomous controllers provably free from design defects.
The University of Pittsburgh Cyber Energy Center provides access to industry collaboration and Emerson control hardware, ensuring solutions align with practical implementation requirements.
@@ -63,9 +63,9 @@ This approach produces three concrete outcomes:
\end{enumerate}
% IMPACT PARAGRAPH Innovation
These three outcomes—procedure translation, continuous verification, and hardware demonstration—establish a complete methodology from regulatory documents to deployed systems.
These three outcomes establish a complete methodology from regulatory documents to deployed systems: procedure translation, continuous verification, and hardware demonstration.
\textbf{What makes this research new?} This work unifies discrete synthesis with continuous verification to enable end-to-end correctness guarantees for hybrid systems—no existing methodology achieves this. The key innovation: treating discrete specifications as contracts that continuous controllers must satisfy, allowing each layer to verify independently while guaranteeing correct composition. Formal methods verify discrete logic. Control theory verifies continuous dynamics. No existing methodology bridges both with compositional guarantees. Section 2 examines why prior work has not achieved this integration. Section 3 details how this integration will be accomplished.
\textbf{What makes this research new?} No existing methodology achieves end-to-end correctness guarantees for hybrid systems. This work unifies discrete synthesis with continuous verification to close that gap. The key innovation treats discrete specifications as contracts that continuous controllers must satisfy. Each layer verifies independently while guaranteeing correct composition. Formal methods verify discrete logic. Control theory verifies continuous dynamics. No existing methodology bridges both with compositional guarantees. Section 2 examines why prior work has not achieved this integration. Section 3 details how this integration will be accomplished.
% Outcome Impact
If successful, control engineers create autonomous controllers from
@@ -6,9 +6,9 @@
Understanding the limits of current practice requires first examining how nuclear plants operate today. Three aspects structure this analysis: the hierarchy of procedures, the role of operators in executing them, and the operational modes that govern reactor control.
Nuclear plant procedures form a strict hierarchy: Normal operating procedures govern routine operations, abnormal operating procedures handle off-normal conditions, Emergency Operating Procedures (EOPs) manage design-basis accidents, Severe Accident Management Guidelines (SAMGs) address beyond-design-basis events, and Extensive Damage Mitigation Guidelines (EDMGs) cover catastrophic damage. These procedures must comply with 10 CFR 50.34(b)(6)(ii); NUREG-0899 provides development guidance~\cite{NUREG-0899, 10CFR50.34}.
Nuclear plant procedures form a strict hierarchy. Normal operating procedures govern routine operations. Abnormal operating procedures handle off-normal conditions. Emergency Operating Procedures (EOPs) manage design-basis accidents. Severe Accident Management Guidelines (SAMGs) address beyond-design-basis events. Extensive Damage Mitigation Guidelines (EDMGs) cover catastrophic damage. These procedures must comply with 10 CFR 50.34(b)(6)(ii); NUREG-0899 provides development guidance~\cite{NUREG-0899, 10CFR50.34}.
Procedure development relies on expert judgment and simulator validation—not formal verification. 10 CFR 55.59~\cite{10CFR55.59} requires rigorous assessment through technical evaluation, simulator validation testing, and biennial review. Yet key safety properties escape formal verification: no mathematical proofs confirm that procedures cover all possible plant states, that required actions complete within available timeframes, or that transitions between procedure sets maintain safety invariants.
Procedure development relies on expert judgment and simulator validation—not formal verification. 10 CFR 55.59~\cite{10CFR55.59} requires rigorous assessment through technical evaluation, simulator validation testing, and biennial review. Key safety properties, however, escape formal verification. No mathematical proofs confirm that procedures cover all possible plant states. No proofs verify that required actions complete within available timeframes. No proofs ensure that transitions between procedure sets maintain safety invariants.
\textbf{LIMITATION:} \textit{Procedures lack formal verification of correctness and completeness.} Current procedure development relies on expert judgment and
@@ -25,9 +25,9 @@ This division between automated and human-controlled functions reveals the funda
\subsection{Human Factors in Nuclear Accidents}
The previous subsection established that procedures lack formal verification despite rigorous development. This represents only half the reliability challenge. Even perfect procedures cannot guarantee safe operation when humans execute them imperfectly. Human operators—the second pillar of current practice—introduce reliability limitations independent of procedure quality.
Procedures lack formal verification despite rigorous development, as the previous subsection established. Yet this represents only half the reliability challenge. Perfect procedures cannot guarantee safe operation when humans execute them imperfectly. Human operators—the second pillar of current practice—introduce reliability limitations independent of procedure quality.
Procedures define what to do; human operators determine when and how. This human determination introduces persistent failure modes that training alone cannot eliminate.
Procedures define what to do. Human operators determine when and how. This determination introduces persistent failure modes. Training alone cannot eliminate them.
Current-generation nuclear power plants employ over 3,600 active NRC-licensed reactor operators in the United States~\cite{operator_statistics}. These
@@ -58,9 +58,11 @@ limitations are fundamental to human-driven control, not remediable defects.
\subsection{Formal Methods}
The previous two subsections revealed two critical limitations of current practice: procedures lack formal verification, and human operators introduce persistent reliability issues that four decades of training improvements have failed to eliminate. Training and procedural improvements cannot solve these problems. Formal methods might offer mathematical guarantees of correctness that eliminate both human error and procedural ambiguity—but can they?
The previous two subsections revealed two critical limitations of current practice. First, procedures lack formal verification despite rigorous development processes. Second, human operators introduce persistent reliability issues that four decades of training improvements have failed to eliminate.
Even the most advanced formal methods applications in nuclear control leave a critical verification gap for autonomous hybrid systems. This subsection examines two approaches illustrating this gap. HARDENS verified discrete logic without continuous dynamics. Differential dynamic logic handles hybrid verification only post-hoc. Each demonstrates the current state of formal methods while revealing the verification gap this research addresses.
Training and procedural improvements cannot solve these problems. They are fundamental limitations, not remediable defects. Formal methods might offer a solution. They could provide mathematical guarantees of correctness that eliminate both human error and procedural ambiguity. But can they deliver on this promise for autonomous hybrid control systems?
Even the most advanced formal methods applications in nuclear control leave a critical verification gap. This subsection examines two approaches that illustrate this gap. HARDENS verified discrete logic without continuous dynamics. Differential dynamic logic handles hybrid verification only post-hoc. Each demonstrates the current state of formal methods while revealing the verification gap this research addresses.
\subsubsection{HARDENS: The State of Formal Methods in Nuclear Control}
@@ -153,10 +155,12 @@ design loop for complex systems like nuclear reactor startup procedures.
\subsection{Summary: The Verification Gap}
This section answered two Heilmeier questions:
This section answered two Heilmeier questions assigned to the State of the Art analysis.
\textbf{What has been done?} Human operators provide operational flexibility but introduce persistent reliability limitations that four decades of training improvements have failed to eliminate. Formal methods provide correctness guarantees but have not scaled to complete hybrid control design. HARDENS verified discrete logic without continuous dynamics. Differential dynamic logic expresses hybrid properties but requires post-design expert analysis and fails to scale to system synthesis. No existing approach addresses both discrete and continuous verification compositionally.
\textbf{What has been done?} Three approaches currently exist, each with fundamental limitations. Human operators provide operational flexibility but introduce persistent reliability limitations. Four decades of training improvements have failed to eliminate these limitations. Formal methods provide correctness guarantees but have not scaled to complete hybrid control design. HARDENS verified discrete logic without continuous dynamics. Differential dynamic logic expresses hybrid properties but requires post-design expert analysis and fails to scale to system synthesis. No existing approach addresses both discrete and continuous verification compositionally.
\textbf{What are the limits of current practice?} No existing methodology synthesizes provably correct hybrid controllers from operational procedures with verification integrated into the design process. Current approaches verify either discrete logic or continuous dynamics—never both compositionally. This verification gap prevents autonomous nuclear control with end-to-end correctness guarantees. Training improvements cannot overcome human reliability limits. Post-hoc verification cannot scale to system design.
\textbf{What are the limits of current practice?} The answer is clear: no existing methodology synthesizes provably correct hybrid controllers from operational procedures with verification integrated into the design process. Current approaches verify either discrete logic or continuous dynamics—never both compositionally. This verification gap prevents autonomous nuclear control with end-to-end correctness guarantees. Training improvements cannot overcome human reliability limits. Post-hoc verification cannot scale to system design.
Two imperatives converge to make this gap urgent. First, economic necessity: small modular reactors cannot compete with per-megawatt staffing costs matching large conventional plants. Second, technical opportunity: formal methods tools have matured sufficiently to enable compositional hybrid verification. Section 3 addresses this verification gap by establishing what makes the proposed approach new and why it will succeed.
Two imperatives converge to make closing this gap urgent. First, economic necessity: small modular reactors cannot compete economically with per-megawatt staffing costs matching large conventional plants. Second, technical opportunity: formal methods tools have matured sufficiently to enable compositional hybrid verification.
Section 3 addresses this verification gap. It establishes what makes the proposed approach new and why it will succeed.
@@ -15,9 +15,9 @@
% ----------------------------------------------------------------------------
% 1. INTRODUCTION AND HYBRID SYSTEMS DEFINITION
% ----------------------------------------------------------------------------
Previous approaches verified either discrete switching logic or continuous control behavior—never both simultaneously. Engineers validate continuous controllers through extensive simulation trials. They test discrete switching logic through simulated control room testing and human factors research. Neither method provides rigorous guarantees despite consuming enormous resources.
Previous approaches verified either discrete switching logic or continuous control behavior—never both simultaneously. Engineers validate continuous controllers through extensive simulation trials. They test discrete switching logic through simulated control room testing and human factors research. Neither method provides rigorous guarantees. Both consume enormous resources.
This work bridges that gap by composing formal methods from computer science with control-theoretic verification, formalizing reactor operations as hybrid automata.
This work bridges that gap. It composes formal methods from computer science with control-theoretic verification. It formalizes reactor operations as hybrid automata.
Hybrid system verification faces a fundamental challenge: discrete transitions change the governing vector field, creating discontinuities in system behavior through the interaction between discrete and continuous dynamics. Traditional verification techniques cannot handle this interaction directly.
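This interaction can be stated compactly in standard hybrid-systems notation (an illustrative form following common formulations, not the proposal's exact definition): the active discrete mode selects the governing vector field, so a discrete jump changes the right-hand side of the differential equation mid-trajectory and may also reset the continuous state.

```latex
% Switched dynamics: the active mode q(t) selects the vector field f_q; a
% discrete jump at time t_k may additionally reset the continuous state:
\[
  \dot{x}(t) = f_{q(t)}\bigl(x(t)\bigr), \qquad
  x(t_k^{+}) = R_{q \to q'}\bigl(x(t_k)\bigr)
\]
```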
@@ -48,7 +48,7 @@ where:
A HAHACS requires this tuple together with proof artifacts demonstrating that the control system's actual implementation satisfies its intended behavior.
\textbf{What is new in this research?} Section 2 established that existing approaches verify either discrete logic or continuous dynamics—never both compositionally. Reactive synthesis, reachability analysis, and barrier certificates each exist independently. Prior work has not integrated them into a systematic design methodology spanning procedures to verified implementation. Three innovations enable this integration:
\textbf{What is new in this research?} Existing approaches verify either discrete logic or continuous dynamics—never both compositionally. Section 2 established this limitation. Reactive synthesis, reachability analysis, and barrier certificates each exist independently. Prior work has not integrated them into a systematic design methodology spanning procedures to verified implementation. Three innovations enable this integration:
\begin{enumerate}
\item \textbf{Contract-based decomposition:} This approach inverts the traditional structure. Instead of attempting global hybrid system verification, discrete synthesis defines entry/exit/safety contracts that bound continuous verification, transforming an intractable global problem into tractable local problems.
@@ -58,11 +58,11 @@ A HAHACS requires this tuple together with proof artifacts demonstrating that th
\textbf{Why will it succeed?} Three factors ensure practical feasibility where prior work has failed.
\textbf{First, leverage existing structure:} Nuclear procedures already decompose operations into discrete phases with explicit transition criteria. This work formalizes existing structure rather than imposing artificial abstractions, making adoption tractable for domain experts without formal methods training.
\textbf{First, existing structure:} Nuclear procedures already decompose operations into discrete phases with explicit transition criteria. This work formalizes existing structure. It does not impose artificial abstractions. Domain experts without formal methods training can adopt the approach.
\textbf{Second, avoid state explosion:} Mode-level verification checks each mode against local contracts rather than attempting global hybrid system analysis. This decomposition bounds computational complexity, transforming an intractable global problem into tractable local verifications.
\textbf{Second, bounded complexity:} Mode-level verification checks each mode against local contracts. It avoids attempting global hybrid system analysis. This decomposition bounds computational complexity. It transforms an intractable global problem into tractable local verifications.
\textbf{Third, industrial validation:} The Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility, ensuring solutions address real deployment constraints, not just theoretical correctness.
\textbf{Third, industrial validation:} The Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility. Solutions address real deployment constraints, not just theoretical correctness.
These factors combine to demonstrate feasibility on production control systems with realistic reactor models—not merely in principle. Figure~\ref{fig:hybrid_automaton} illustrates the hybrid structure for a simplified reactor startup sequence.
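The bounded, per-mode check described above can be sketched on a toy example. This is a deliberately simplified illustration under stated assumptions, not the proposal's verification tooling: for a one-dimensional linear mode with dynamics x' = -k (x - x_ref), the closed-loop system is monotone in its initial condition, so simulating only the two endpoints of the entry interval bounds every trajectory in between. The contract here (entry set, exit set, safe set, time budget) is invented for the sketch.

```python
# Hedged sketch: checking one mode against a local contract -- every trajectory
# from the entry interval must stay in the safe set and end inside the exit set
# within the time budget T. Valid endpoint-only check because this 1-D linear
# mode x' = -k (x - x_ref) is monotone in its initial condition.

def meets_contract(entry, exit_set, safe, k=0.5, x_ref=1.0, dt=0.01, T=10.0):
    for x in entry:                       # endpoints of the entry interval
        t = 0.0
        while t < T:
            if not (safe[0] <= x <= safe[1]):
                return False              # safety invariant violated mid-flight
            x += dt * (-k * (x - x_ref))  # explicit Euler integration step
            t += dt
        if not (exit_set[0] <= x <= exit_set[1]):
            return False                  # time budget exhausted outside exit set
    return True

print(meets_contract(entry=(0.0, 0.2), exit_set=(0.9, 1.1), safe=(-0.5, 1.5)))
# True
```

Each such check is local to one mode, which is what keeps the overall verification burden a sum of small problems rather than one global hybrid analysis.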
@@ -130,7 +130,11 @@ These factors combine to demonstrate feasibility on production control systems w
\subsection{System Requirements, Specifications, and Discrete Controllers}
The previous subsection established the hybrid automaton formalism—a mathematical framework for describing discrete modes, continuous dynamics, guards, and invariants. But where do these formal descriptions originate? This subsection demonstrates that nuclear operations already possess a natural hybrid structure that maps directly to the automaton formalism through three control scopes: strategic, operational, and tactical. Rather than imposing artificial abstractions, this approach constructs formal hybrid systems from existing operational knowledge—leveraging decades of domain expertise encoded in operating procedures.
The previous subsection established the hybrid automaton formalism. This mathematical framework describes discrete modes, continuous dynamics, guards, and invariants.
A critical question remains: where do these formal descriptions originate? This subsection answers it by demonstrating that nuclear operations already possess a natural hybrid structure. This structure maps directly to the automaton formalism through three control scopes: strategic, operational, and tactical.
The approach does not impose artificial abstractions. Instead, it constructs formal hybrid systems from existing operational knowledge. It leverages decades of domain expertise encoded in operating procedures.
Human control of nuclear power divides into three scopes: strategic, operational, and tactical. Strategic control represents high-level, long-term decision making spanning months or years. It manages labor needs and supply chains to optimize scheduled maintenance and downtime.
@@ -253,9 +257,9 @@ Reactive synthesis automates the creation of reactive programs from temporal logic specifications.
Reactive synthesis solves a fundamental problem: given an LTL formula $\varphi$ specifying desired system behavior, automatically construct a finite-state machine (strategy) that produces outputs in response to environment inputs such that all resulting execution traces satisfy $\varphi$. If such a strategy exists, the specification is \emph{realizable}. The synthesis algorithm either produces a correct-by-construction controller or reports that no such controller exists. Unrealizable specifications indicate conflicting or impossible requirements in the original procedures—this realizability check catches errors before implementation.
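The realizability dichotomy can be illustrated on a toy safety game (a deliberately simplified sketch with a hand-built arena; real synthesis tools operate on temporal-logic specifications, not explicit state graphs). A greatest-fixpoint iteration computes the winning region: the states from which the controller can keep the play safe forever. An initial state outside this region means no controller exists, i.e. the specification is unrealizable.

```python
# Toy illustration (hypothetical arena, not the proposal's actual tooling):
# realizability as a safety game solved by greatest-fixpoint iteration.

def winning_region(safe, moves):
    """moves[s] = successor states the controller may choose from state s."""
    win = set(safe)
    while True:
        # keep only states from which some controller choice stays in `win`
        nxt = {s for s in win if any(t in win for t in moves.get(s, ()))}
        if nxt == win:
            return win
        win = nxt

# Arena: from 0 the controller may stay at 0 or go to 1; from 1 it is forced
# into the unsafe state 2, which is absorbing.
moves = {0: {0, 1}, 1: {2}, 2: {2}}
win = winning_region(safe={0, 1}, moves=moves)
print(sorted(win))   # [0] -- a spec whose initial state is 1 is unrealizable
```

The fixpoint either yields a region containing the initial state (from which a strategy can be extracted) or proves that no strategy exists, mirroring the produce-or-report behavior described above.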
Reactive synthesis offers a decisive advantage: the discrete automaton requires no human engineering of its implementation. The resultant automaton is correct by construction, eliminating human error at the implementation stage entirely. Human designers focus their effort where it belongs—on specifying system behavior, not implementing switching logic.
Reactive synthesis offers a decisive advantage: the discrete automaton requires no human engineering of its implementation. The resultant automaton is correct by construction. This eliminates human error at the implementation stage entirely. Human designers focus their effort where it belongs—on specifying system behavior, not implementing switching logic.
This shift carries two critical implications. First, complete traceability: the reasons the controller changes between modes trace back through specifications to requirements, establishing clear liability and justification for system behavior. Second, deterministic guarantees replace probabilistic human judgment. Human operators cannot eliminate error from discrete control decisions; humans are intrinsically fallible. Defining system behavior using temporal logics and synthesizing the controller using deterministic algorithms ensures strategic decisions always follow operating procedures exactly—no exceptions, no deviations, no human factors.
This shift carries two critical implications. First, complete traceability: the controller changes between modes for reasons that trace back through specifications to requirements. This establishes clear liability and justification for system behavior. Second, deterministic guarantees replace probabilistic human judgment. Human operators cannot eliminate error from discrete control decisions. Humans are intrinsically fallible. Temporal logics define system behavior. Deterministic algorithms synthesize the controller. Strategic decisions always follow operating procedures exactly—no exceptions, no deviations, no human factors.
The synthesized automaton translates directly to executable code through standard compilation techniques. Each discrete state maps to a control mode, guard conditions map to conditional statements, and the transition function defines the control flow. This compilation process preserves the formal guarantees by ensuring the implemented code is correct by construction—the automaton from which it derives was synthesized to satisfy the temporal logic specifications.
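The compilation step described above can be sketched in a few lines. This is a minimal illustration, not the proposal's actual controller: the mode names, guard thresholds, and plant-state fields are hypothetical placeholders chosen only to show how states map to modes, guards map to conditionals, and the transition function defines control flow.

```python
# Sketch: a synthesized automaton compiled to executable switching logic.
# Discrete states map to control modes; guard conditions map to conditional
# statements; the transition function defines the control flow. All names
# and thresholds here are illustrative, not from the proposal.

STARTUP, POWER_OP, SHUTDOWN = "startup", "power_op", "shutdown"

def guards(mode, plant):
    """Evaluate the current mode's guard conditions against plant state."""
    if mode == STARTUP and plant["power"] >= 0.95:
        return POWER_OP      # exit condition reached: switch modes
    if mode == POWER_OP and plant["fault"]:
        return SHUTDOWN      # safety guard fires: expulsory mode takes over
    return mode              # no guard fired: remain in the current mode

def run(trace, mode=STARTUP):
    """Execute the compiled automaton over a trace of plant observations."""
    modes = []
    for plant in trace:
        mode = guards(mode, plant)
        modes.append(mode)
    return modes
```

Because the switching logic is generated from the synthesized automaton rather than hand-written, the conditional structure inherits the automaton's correctness guarantees.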
@@ -275,9 +279,11 @@ Having established how discrete mode-switching logic synthesizes from procedures
\subsection{Continuous Control Modes}
The previous subsection established that reactive synthesis produces a provably correct discrete controller from operating procedures—an automaton that determines when to switch between modes. Hybrid control, however, requires more than correct mode switching: the continuous dynamics executing within each discrete mode must also be verified to ensure correct system behavior.
The previous subsection established that reactive synthesis produces a provably correct discrete controller. This controller is an automaton synthesized from operating procedures. It determines when to switch between modes.
This subsection describes the continuous control modes executing within each discrete state and explains how they verify against requirements imposed by the discrete layer. Control objectives determine the verification approach: modes classify into three types—transitory, stabilizing, and expulsory—each requiring different verification tools matched to their distinct purposes.
Hybrid control, however, requires more than correct mode switching. The continuous dynamics executing within each discrete mode must also verify against requirements. Without this continuous verification, the discrete controller cannot guarantee correct system behavior.
This subsection describes the continuous control modes executing within each discrete state. It explains how they verify against requirements imposed by the discrete layer. Control objectives determine the verification approach. Modes classify into three types—transitory, stabilizing, and expulsory. Each type requires different verification tools matched to its distinct purpose.
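The classification above pairs each mode type with a verification tool. A minimal sketch of that dispatch follows; the structure is illustrative, and the expulsory entry is an assumption on my part, since this excerpt names the tools only for transitory (reachability analysis) and stabilizing (barrier certificates) modes.

```python
# Sketch: selecting a verification tool by mode type, per the classification
# described above. The expulsory entry is an assumed placeholder; the excerpt
# specifies only robustness as its priority, not a particular tool.
from enum import Enum

class ModeType(Enum):
    TRANSITORY = "transitory"    # drives the plant toward an exit condition
    STABILIZING = "stabilizing"  # holds the plant inside an operating region
    EXPULSORY = "expulsory"      # forces the plant to a safe shutdown state

VERIFIER = {
    ModeType.TRANSITORY: "reachability analysis",  # "can we reach the target?"
    ModeType.STABILIZING: "barrier certificate",   # "will we stay within bounds?"
    ModeType.EXPULSORY: "robust worst-case analysis",  # assumption, not stated here
}

def verification_tool(mode_type: ModeType) -> str:
    """Return the verification tool matched to a mode's control objective."""
    return VERIFIER[mode_type]
```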
This methodology's scope requires clarification: this work verifies continuous controllers but does not synthesize them. The distinction parallels model checking in software verification. Model checking confirms whether a given implementation satisfies its specification without prescribing how to write the software. Engineers design continuous controllers using standard control theory techniques—this work assumes that design capability exists. The contribution lies in the verification framework that confirms candidate controllers compose correctly with the discrete layer to produce a safe hybrid system.
@@ -380,9 +386,11 @@ appropriate to the fidelity of the reactor models available.
\subsubsection{Stabilizing Modes}
Transitory modes drive the system toward exit conditions—answering "can we reach the target?" Stabilizing modes address a fundamentally different question: "will we stay within bounds?" These modes maintain the system within a desired operating region indefinitely—examples include steady-state power operation, hot standby, and load-following at constant power level. This different control objective requires a different verification approach.
Transitory modes drive the system toward exit conditions. They answer the question: "can we reach the target?"
Where reachability analysis answers "can the system reach a target?", stabilizing modes ask "does the system stay within bounds?" Barrier certificates provide the appropriate tool.
Stabilizing modes address a fundamentally different question: "will we stay within bounds?" These modes maintain the system within a desired operating region indefinitely. Examples include steady-state power operation, hot standby, and load-following at constant power level. This different control objective requires a different verification approach.
Reachability analysis answers "can the system reach a target?" Stabilizing modes instead ask "does the system stay within bounds?" Barrier certificates provide the appropriate verification tool for this question.
Barrier certificates analyze the dynamics of the system to determine whether
flux across a given boundary exists. They evaluate whether any trajectory leaves
a given boundary. This definition exactly matches what defines the validity of a
@@ -434,7 +442,9 @@ controller.
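The barrier certificate condition discussed above can be made concrete with a toy example. This is a hypothetical one-dimensional mode chosen purely for illustration: dynamics, certificate, and the grid check are all assumptions, and a real verification would use symbolic methods such as sum-of-squares programming rather than sampling.

```python
# Sketch: checking a candidate barrier certificate numerically.
# Illustrative system: x' = -x, a stabilizing mode keeping |x| <= 1, with
# candidate certificate B(x) = x^2 - 1. If B <= 0 throughout the operating
# region and dB/dt <= 0 wherever B = 0, no trajectory crosses the boundary,
# so the region is invariant. Grid sampling stands in for a symbolic proof.

def f(x):
    return -x               # continuous dynamics of the mode

def B(x):
    return x * x - 1.0      # candidate barrier: B <= 0 inside the region

def dB(x):
    return 2.0 * x * f(x)   # Lie derivative of B along f: dB/dt = 2x * x'

def check_barrier(samples):
    """Grid check: dB/dt must be non-positive on the boundary B(x) = 0."""
    boundary = [x for x in samples if abs(B(x)) < 1e-9]
    return all(dB(x) <= 0.0 for x in boundary)
```

Here dB/dt = -2x^2 is non-positive everywhere, so the certificate validates: no trajectory starting inside |x| <= 1 can leave it.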
\subsubsection{Expulsory Modes}
The first two mode types—transitory modes that move the plant between conditions and stabilizing modes that maintain the plant within regions—handle nominal operations under the assumption that plant dynamics match the design model. Both assume the plant behaves as expected. Expulsory modes address a fundamentally different scenario: situations where the plant deviates from expected behavior due to component failures, sensor degradation, or unanticipated disturbances. Here, robustness matters more than optimality.
The first two mode types handle nominal operations. Transitory modes move the plant between conditions. Stabilizing modes maintain the plant within regions. Both assume that plant dynamics match the design model. Both assume the plant behaves as expected.
Expulsory modes address a fundamentally different scenario. They handle situations where the plant deviates from expected behavior. This deviation may result from component failures, sensor degradation, or unanticipated disturbances. Here, robustness matters more than optimality.
Expulsory controllers prioritize robustness over optimality. The control objective shifts from reaching targets or maintaining regions to driving the plant to a safe shutdown state from potentially anywhere in the state space, under degraded or uncertain dynamics. Examples include emergency core cooling, reactor SCRAM sequences, and controlled depressurization procedures.
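The robustness-over-optimality priority can be illustrated with a toy worst-case check. Everything here is a hypothetical sketch: the dynamics, uncertainty interval, and shutdown drive are invented for illustration and are not the proposal's models.

```python
# Sketch: robustness check for a hypothetical expulsory (shutdown) controller.
# Illustrative system: x' = a*x - u with uncertain coefficient a in
# [a_lo, a_hi], constant shutdown drive u > 0, and x = normalized power.
# The controller is robust if power strictly decreases for every admissible
# a whenever x > 0, i.e. the worst-case rate is negative, so shutdown
# completes under degraded or uncertain dynamics.

def worst_case_rate(x, a_lo, a_hi, u):
    """Largest possible x' over the admissible uncertainty interval."""
    return max(a_lo * x - u, a_hi * x - u)

def robust_shutdown(x_max, a_lo, a_hi, u, steps=100):
    """Check worst-case decrease at sampled power levels up to x_max."""
    xs = [x_max * i / steps for i in range(1, steps + 1)]
    return all(worst_case_rate(x, a_lo, a_hi, u) < 0.0 for x in xs)
```

The check asks nothing about how quickly shutdown completes, only whether it completes for every admissible plant: robustness, not optimality.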
@@ -513,13 +523,21 @@ of transferring technology directly to industry with a direct collaboration in
this research, while getting an excellent perspective of how our research
outcomes can align best with customer needs.
This section answered two critical Heilmeier questions:
This section answered two critical Heilmeier questions.
\textbf{What is new in this research?} This work integrates reactive synthesis, reachability analysis, and barrier certificates into a compositional methodology for hybrid control synthesis. Three innovations enable this integration. First, contract-based decomposition uses discrete synthesis to define verification contracts that bound continuous verification, inverting traditional global analysis. Second, mode classification classifies continuous modes by objective to select appropriate verification tools, enabling mode-local analysis with provable composition guarantees. Third, procedure-driven structure leverages existing procedural structure to avoid intractable state explosion. Section 2 established that prior work verified either discrete logic or continuous dynamics—never both compositionally. Compositional verification enables what global analysis cannot achieve.
\textbf{What is new in this research?} This work integrates reactive synthesis, reachability analysis, and barrier certificates into a compositional methodology for hybrid control synthesis. Three innovations enable this integration.
\textbf{Why will this approach succeed?} Three factors ensure practical feasibility. First, nuclear procedures already decompose operations into discrete phases with explicit transition criteria, allowing the approach to formalize existing structure rather than impose artificial abstractions. Second, mode-level verification bounds each verification problem locally, avoiding the state explosion that makes global hybrid system analysis intractable. Third, the Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility, ensuring solutions address real deployment constraints, not just theoretical correctness.
First, contract-based decomposition uses discrete synthesis to define verification contracts that bound continuous verification. This inverts traditional global analysis. Second, mode classification classifies continuous modes by control objective to select appropriate verification tools. This enables mode-local analysis with provable composition guarantees. Third, procedure-driven structure leverages existing procedural structure to avoid intractable state explosion.
The methodology is now complete: procedure formalization, discrete synthesis, continuous verification across three mode types, and hardware implementation. What remains are operational questions about executing this research plan. Section 4 addresses \textit{How will success be measured?} through Technology Readiness Level advancement. Section 5 addresses \textit{What could prevent success?} through risk analysis and contingency planning. Section 6 addresses \textit{Who cares? Why now? What difference will it make?} through economic and societal impact analysis.
Section 2 established that prior work verified either discrete logic or continuous dynamics—never both compositionally. Compositional verification enables what global analysis cannot achieve.
\textbf{Why will this approach succeed?} Three factors ensure practical feasibility.
First, nuclear procedures already decompose operations into discrete phases with explicit transition criteria. The approach formalizes existing structure. It does not impose artificial abstractions. Second, mode-level verification bounds each verification problem locally. This avoids the state explosion that makes global hybrid system analysis intractable. Third, the Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility. Solutions address real deployment constraints, not just theoretical correctness.
The methodology is now complete. It encompasses procedure formalization, discrete synthesis, continuous verification across three mode types, and hardware implementation.
Operational questions about executing this research plan remain. Section 4 addresses \textit{How will success be measured?} through Technology Readiness Level advancement. Section 5 addresses \textit{What could prevent success?} through risk analysis and contingency planning. Section 6 addresses \textit{Who cares? Why now? What difference will it make?} through economic and societal impact analysis.
%%% NOTES (Section 5):
% - Get specific details on ARCADE interface from Emerson collaboration
@@ -4,9 +4,11 @@
This work begins at TRL 2--3 and aims to reach TRL 5, where system components operate successfully in a relevant laboratory environment. TRL advancement provides the most appropriate success metric: it explicitly measures the gap between academic proof-of-concept and practical deployment. This section explains why TRLs are the right metric, then defines specific criteria for each level from TRL 3 through TRL 5.
Technology Readiness Levels provide the ideal success metric by explicitly measuring the gap between academic proof-of-concept and practical deployment—precisely what this work bridges. Academic metrics like papers published or theorems proved fail to capture practical feasibility. Empirical metrics like simulation accuracy or computational speed fail to demonstrate theoretical rigor. TRLs measure both simultaneously.
Technology Readiness Levels provide the ideal success metric. They explicitly measure the gap between academic proof-of-concept and practical deployment. This is precisely what this work bridges.
Advancing from TRL 3 to TRL 5 requires maintaining theoretical rigor while progressively demonstrating practical feasibility. Formal verification must remain valid as the system moves from individual components to integrated hardware testing—the proofs must compose as the system scales.
Academic metrics like papers published or theorems proved fail to capture practical feasibility. Empirical metrics like simulation accuracy or computational speed fail to demonstrate theoretical rigor. TRLs measure both simultaneously.
Advancing from TRL 3 to TRL 5 requires maintaining theoretical rigor while progressively demonstrating practical feasibility. The system moves from individual components to integrated hardware testing. Formal verification must remain valid throughout this progression. The proofs must compose as the system scales.
The nuclear industry requires extremely high assurance before deploying new
control technologies. Demonstrating theoretical correctness alone proves
@@ -77,6 +79,14 @@ a relevant laboratory environment. This establishes both theoretical validity
and practical feasibility, proving the methodology produces verified
controllers implementable with current technology.
This section answered the Heilmeier question \textbf{How do we measure success?} Technology Readiness Level advancement from 2--3 to 5 provides the answer. Success means demonstrating both theoretical correctness and practical feasibility through progressively integrated validation. TRL 3 proves component-level correctness. TRL 4 demonstrates system-level integration in simulation. TRL 5 validates hardware implementation in a relevant environment. Achieving TRL 5 proves the methodology produces verified controllers implementable with current technology.
This section answered the Heilmeier question assigned to success metrics.
Reaching TRL 5 depends on several critical assumptions. If these assumptions prove false, the research could stall at lower readiness levels despite sound methodology. Section 5 addresses the complementary Heilmeier question \textbf{What could prevent success?} It identifies primary risks, establishes early warning indicators, and defines contingency plans that preserve research value even when core assumptions fail.
\textbf{How do we measure success?} Technology Readiness Level advancement from 2--3 to 5 provides the answer. Success means demonstrating both theoretical correctness and practical feasibility through progressively integrated validation.
TRL 3 proves component-level correctness—each part works independently. TRL 4 demonstrates system-level integration in simulation—the parts compose correctly. TRL 5 validates hardware implementation in a relevant environment—the complete system works on real control hardware.
Achieving TRL 5 proves the methodology produces verified controllers implementable with current technology. This is the definition of success for this research.
Reaching TRL 5, however, depends on several critical assumptions. If these assumptions prove false, the research could stall at lower readiness levels despite sound methodology.
Section 5 addresses the complementary Heilmeier question \textbf{What could prevent success?} It identifies primary risks. It establishes early warning indicators. It defines contingency plans that preserve research value even when core assumptions fail.
@@ -1,6 +1,10 @@
\section{Risks and Contingencies}
\section{Risks and Contingencies}
\textbf{What could prevent success?} Section 4 defined success as reaching TRL 5 through component validation, system integration, and hardware demonstration. But every research plan rests on assumptions that might prove false. This section identifies four primary risks that could prevent successful completion: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, completeness of procedure formalization, and hardware-in-the-loop integration. Each risk carries associated early warning indicators and contingency plans that preserve research value even when core assumptions fail. The staged project structure ensures that partial success yields publishable results while clearly identifying remaining barriers to deployment.
\textbf{What could prevent success?} Section 4 defined success as reaching TRL 5. This requires component validation, system integration, and hardware demonstration.
Every research plan, however, rests on assumptions that might prove false. This section identifies four primary risks that could prevent successful completion. These risks are: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, completeness of procedure formalization, and hardware-in-the-loop integration.
Each risk carries associated early warning indicators. Each has contingency plans that preserve research value even when core assumptions fail. The staged project structure ensures that partial success yields publishable results. It clearly identifies remaining barriers to deployment even when full success proves elusive.
\subsection{Computational Tractability of Synthesis}
@@ -124,6 +128,16 @@ extensions, ensuring they address industry-wide practices rather than specific
quirks.
This section answered the Heilmeier question \textbf{What could prevent success?} Four primary risks threaten project completion: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, completeness of procedure formalization, and hardware-in-the-loop integration challenges. Each risk has identifiable early warning indicators and viable mitigation strategies that preserve research value even when core assumptions fail. The staged project structure ensures that partial success yields publishable results while clearly identifying remaining barriers to deployment. This critical design feature maintains contribution to the field regardless of which technical obstacles prove insurmountable—even "failure" advances the field by documenting precisely which barriers remain.
This section answered the Heilmeier question assigned to risk analysis.
The technical research plan is now complete. Section 3 established what will be done and why it will succeed. Section 4 established how success will be measured. This section established what might prevent it and how to mitigate those risks. One critical Heilmeier question remains: \textbf{Who cares? Why now? What difference will it make?} Section 6 answers by connecting this technical methodology to urgent economic and infrastructure challenges facing the nuclear industry and broader energy sector.
\textbf{What could prevent success?} Four primary risks threaten project completion: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, completeness of procedure formalization, and hardware-in-the-loop integration challenges.
Each risk has identifiable early warning indicators. These indicators enable early detection before failure becomes inevitable. Each risk has viable mitigation strategies that preserve research value even when core assumptions fail.
The staged project structure ensures that partial success yields publishable results. It clearly identifies remaining barriers to deployment. This critical design feature maintains contribution to the field regardless of which technical obstacles prove insurmountable. Even "failure" advances the field by documenting precisely which barriers remain.
The technical research plan is now complete. Section 3 established what will be done and why it will succeed. Section 4 established how success will be measured through TRL advancement. This section established what might prevent success and how to mitigate those risks.
One critical Heilmeier question remains: \textbf{Who cares? Why now? What difference will it make?}
Section 6 answers this question. It connects technical methodology to urgent economic and infrastructure challenges facing the nuclear industry and broader energy sector.
@@ -1,8 +1,12 @@
\section{Broader Impacts}
\section{Broader Impacts}
\textbf{Who cares? Why now? What difference will it make?} Sections 2--5 established the complete technical research plan: what has been done and its limits (Section 2), what is new and why it will succeed (Section 3), how success will be measured (Section 4), and what could prevent success (Section 5). This section addresses the remaining Heilmeier questions, connecting technical methodology to economic and societal impact. The answer: this research directly addresses a \$21--28 billion annual cost barrier while establishing a generalizable framework for safety-critical autonomous systems.
\textbf{Who cares? Why now? What difference will it make?} Sections 2--5 established the complete technical research plan. Section 2 examined what has been done and its limits. Section 3 detailed what is new and why it will succeed. Section 4 explained how success will be measured. Section 5 identified what could prevent success and how to mitigate those risks.
The technical approach enables compositional hybrid verification—a capability that did not exist before. But why does this matter beyond academic contribution? Three stakeholder groups converge on the same economic constraint: high operating costs driven by staffing requirements.
This section addresses the remaining Heilmeier questions. It connects technical methodology to economic and societal impact. The answer: this research directly addresses a \$21--28 billion annual cost barrier. It establishes a generalizable framework for safety-critical autonomous systems.
The technical approach enables compositional hybrid verification. This capability did not exist before. But why does this matter beyond academic contribution?
Three stakeholder groups converge on the same economic constraint: high operating costs driven by staffing requirements.
The nuclear industry faces uncompetitive per-megawatt costs for small modular reactors. Datacenter operators need hundreds of megawatts of continuous clean power for AI infrastructure. Clean energy advocates need nuclear power to be economically viable.
@@ -58,12 +62,14 @@ establish both the technical feasibility and regulatory pathway for broader
adoption across critical infrastructure.
This section answered three critical Heilmeier questions:
This section answered three critical Heilmeier questions assigned to broader impacts analysis.
\textbf{Who cares?} Three stakeholder groups face the same constraint. The nuclear industry faces an economic crisis for small modular reactors due to per-megawatt staffing costs. Datacenter operators need hundreds of megawatts of continuous clean power for AI infrastructure. Clean energy advocates need nuclear power to be economically competitive. All three need autonomous control with safety guarantees.
\textbf{Who cares?} Three stakeholder groups face the same constraint and converge on this solution. The nuclear industry faces an economic crisis for small modular reactors due to per-megawatt staffing costs. Datacenter operators need hundreds of megawatts of continuous clean power for AI infrastructure. Clean energy advocates need nuclear power to be economically competitive. All three stakeholder groups need autonomous control with safety guarantees. This research provides it.
\textbf{Why now?} Two forces converge. First, exponentially growing AI infrastructure demands have created immediate need for economical nuclear power at datacenter scale—projections show datacenter electricity demand reaching 1,050 terawatt-hours annually by 2030. Second, formal methods tools have matured, making compositional hybrid verification computationally achievable. What was theoretically possible but practically intractable a decade ago is now feasible.
\textbf{Why now?} Two forces converge to create urgency. First, exponentially growing AI infrastructure demands have created immediate need for economical nuclear power at datacenter scale. Projections show datacenter electricity demand reaching 1,050 terawatt-hours annually by 2030. Second, formal methods tools have matured sufficiently to make compositional hybrid verification computationally achievable. What was theoretically possible but practically intractable a decade ago is now feasible. The problem is urgent and the tools exist to solve it.
\textbf{What difference will it make?} This research addresses a \$21--28 billion annual cost barrier by enabling autonomous control with mathematical safety guarantees. Beyond immediate economic impact, the methodology establishes a generalizable framework for safety-critical autonomous systems across critical infrastructure.
\textbf{What difference will it make?} This research addresses a \$21--28 billion annual cost barrier. It enables autonomous control with mathematical safety guarantees. Beyond immediate economic impact, the methodology establishes a generalizable framework for safety-critical autonomous systems across critical infrastructure. The impact extends beyond nuclear power to any safety-critical system requiring provable correctness.
The complete research plan now spans technical approach, success metrics, risk mitigation, and broader impact. One final Heilmeier question remains: \textbf{How long will it take?} Section 8 answers with a structured 24-month research plan progressing through milestones tied to Technology Readiness Level advancement, demonstrating the proposed work is achievable within a doctoral timeline.
The complete research plan now spans technical approach, success metrics, risk mitigation, and broader impact. One final Heilmeier question remains: \textbf{How long will it take?}
Section 8 answers this question. It provides a structured 24-month research plan progressing through milestones tied to Technology Readiness Level advancement. This demonstrates the proposed work is achievable within a doctoral timeline.