Editorial pass: tactical (sentence-level), operational (paragraph-level), and strategic (document-level) improvements

TACTICAL (Gopen's Sense of Structure):
- Strengthened weak verb constructions
- Improved issue-point positioning and stress placement
- Converted passive to active voice where appropriate
- Tightened choppy sentences and improved flow
- Enhanced topic strings and parallel structure

OPERATIONAL (paragraph/section flow):
- Improved transitions between subsections
- Reduced repetition and redundancy
- Enhanced coherence within sections
- Streamlined paragraph openings for better flow

STRATEGIC (Heilmeier alignment):
- Strengthened section-to-section linkages
- Improved parallel structure in Heilmeier question answers
- Enhanced consistency in how sections reference each other
- Tightened transitions to next sections
This commit is contained in:
Split 2026-03-09 14:48:16 -04:00
parent ddeafd7fd8
commit faea7e6292
7 changed files with 46 additions and 49 deletions


@@ -7,7 +7,7 @@ Extensively trained operators run nuclear reactors today. They follow detailed w
Small modular reactors face a fundamental economic challenge: per-megawatt staffing costs significantly exceed those of conventional plants. This cost gap threatens economic viability. Autonomous control systems could manage complex operational sequences without constant supervision—but only if they provide assurance equal to or exceeding human-operated systems.
% APPROACH PARAGRAPH Solution
-This research combines formal methods from computer science with control theory. The result: hybrid control systems correct by construction.
+This research combines formal methods from computer science with control theory to produce hybrid control systems correct by construction.
% Rationale
Operators already work this way. Discrete logic switches between continuous control modes. Existing formal methods generate provably correct switching logic—but they fail when continuous dynamics govern transitions. Control theory verifies continuous behavior—but it cannot prove discrete switching correctness. End-to-end correctness requires both approaches together.
% Hypothesis and Technical Approach


@@ -11,7 +11,7 @@ Extensively trained human operators run nuclear plants today. They follow detail
This reliance on human operators prevents autonomous control. It creates a fundamental economic challenge for next-generation reactor designs. Per-megawatt staffing costs for small modular reactors far exceed those of conventional plants. This gap threatens economic viability. Autonomous control systems could manage complex operational sequences without constant human supervision—but only if they provide assurance equal to or exceeding human operators.
% APPROACH PARAGRAPH Solution
-This research combines formal methods with control theory. The result: hybrid control systems correct by construction.
+This research combines formal methods with control theory to produce hybrid control systems correct by construction.
% Rationale
Operators already work this way. Discrete logic switches between continuous control modes. Existing formal methods generate provably correct switching logic from written requirements—but they fail when continuous dynamics govern transitions. Control theory verifies continuous behavior—but it cannot prove discrete switching correctness. End-to-end correctness requires both approaches together.
% Hypothesis
@@ -65,7 +65,7 @@ This approach produces three concrete outcomes:
% IMPACT PARAGRAPH Innovation
These three outcomes—procedure translation, continuous verification, and hardware demonstration—establish a complete methodology from regulatory documents to deployed systems.
-\textbf{What makes this research new?} This work unifies discrete synthesis with continuous verification to enable end-to-end correctness guarantees for hybrid systems. Formal methods verify discrete logic. Control theory verifies continuous dynamics. No existing methodology bridges both with compositional guarantees. The bridge emerges by treating discrete specifications as contracts that continuous controllers must satisfy. Each layer verifies independently while guaranteeing correct composition. Section 2 (State of the Art) examines why prior work has not achieved this integration. Section 3 (Research Approach) details how this integration will be accomplished.
+\textbf{What makes this research new?} This work unifies discrete synthesis with continuous verification to enable end-to-end correctness guarantees for hybrid systems. Formal methods verify discrete logic; control theory verifies continuous dynamics. No existing methodology bridges both with compositional guarantees. The bridge emerges by treating discrete specifications as contracts that continuous controllers must satisfy, allowing each layer to verify independently while guaranteeing correct composition. Section 2 (State of the Art) examines why prior work has not achieved this integration. Section 3 (Research Approach) details how this integration will be accomplished.
% Outcome Impact
If successful, control engineers create autonomous controllers from


@@ -8,7 +8,7 @@ Current practice must be understood before its limits can be identified. This su
Nuclear plant procedures form a hierarchy. Normal operating procedures govern routine operations. Abnormal operating procedures handle off-normal conditions. Emergency Operating Procedures (EOPs) manage design-basis accidents. Severe Accident Management Guidelines (SAMGs) address beyond-design-basis events. Extensive Damage Mitigation Guidelines (EDMGs) cover catastrophic damage. These procedures must comply with 10 CFR 50.34(b)(6)(ii); NUREG-0899 provides development guidance~\cite{NUREG-0899, 10CFR50.34}.
-Procedure development relies on expert judgment and simulator validation—not formal verification. Technical evaluation, simulator validation testing, and biennial review under 10 CFR 55.59~\cite{10CFR55.59} assess procedures rigorously. Yet key safety properties escape formal verification. Mathematical proof does not confirm that procedures cover all possible plant states. No proof exists that required actions complete within available timeframes. No proof exists that safety invariants hold across procedure-set transitions.
+Procedure development relies on expert judgment and simulator validation—not formal verification. Technical evaluation, simulator validation testing, and biennial review under 10 CFR 55.59~\cite{10CFR55.59} assess procedures rigorously. Yet key safety properties escape formal verification. No mathematical proof confirms that procedures cover all possible plant states. No proof exists that required actions complete within available timeframes or that safety invariants hold across procedure-set transitions.
\textbf{LIMITATION:} \textit{Procedures lack formal verification of correctness
and completeness.} Current procedure development relies on expert judgment and
@@ -25,7 +25,7 @@ The division between automated and human-controlled functions reveals the fundam
\subsection{Human Factors in Nuclear Accidents}
-The previous subsection established that procedures lack formal verification despite rigorous development. This represents only half the reliability challenge. The other half emerges from procedure execution. Even perfect procedures cannot guarantee safe operation when humans execute them imperfectly. Human operators—the second pillar of current practice—introduce reliability limitations independent of procedure quality. Procedures define what to do. Human operators determine when and how. Perfect procedures cannot eliminate human error.
+The previous subsection established that procedures lack formal verification despite rigorous development. This represents only half the reliability challenge. Even perfect procedures cannot guarantee safe operation when humans execute them imperfectly. Human operators—the second pillar of current practice—introduce reliability limitations independent of procedure quality. Procedures define what to do; human operators determine when and how.
Current-generation nuclear power plants employ over 3,600 active NRC-licensed
reactor operators in the United States~\cite{operator_statistics}. These
@@ -35,7 +35,7 @@ shift supervisors~\cite{10CFR55}. Staffing typically requires at least two ROs
and one SRO for current-generation units~\cite{10CFR50.54}. Becoming a reactor
operator requires several years of training.
-Human error persistently contributes to nuclear safety incidents despite decades of improvements in training and procedures. This persistence motivates formal automated control with mathematical safety guarantees. Under 10 CFR Part 55, operators hold legal authority to make critical decisions, including authority to depart from normal regulations during emergencies. The Three Mile Island (TMI) accident demonstrated how personnel error, design deficiencies, and component failures combine to cause disaster. Operators misread confusing and contradictory indications. They then shut off the emergency water system~\cite{Kemeny1979}. The President's Commission on TMI identified a
+Despite decades of improvements in training and procedures, human error persistently contributes to nuclear safety incidents. This persistence motivates formal automated control with mathematical safety guarantees. Under 10 CFR Part 55, operators hold legal authority to make critical decisions, including authority to depart from normal regulations during emergencies. The Three Mile Island (TMI) accident demonstrated how personnel error, design deficiencies, and component failures combine to cause disaster. Operators misread confusing and contradictory indications. They then shut off the emergency water system~\cite{Kemeny1979}. The President's Commission on TMI identified a
fundamental ambiguity: placing responsibility for safe power plant operations on
the licensee without formally verifying that operators can fulfill this
responsibility does not guarantee safety. This tension between operational
@@ -56,9 +56,9 @@ limitations are fundamental to human-driven control, not remediable defects.
\subsection{Formal Methods}
-The previous two subsections revealed two critical limitations of current practice. First, procedures lack formal verification. Second, human operators introduce persistent reliability issues that four decades of training improvements have failed to eliminate. Training and procedural improvements cannot solve these problems—but formal methods might. Mathematical guarantees of correctness could eliminate both human error and procedural ambiguity.
+The previous two subsections revealed two critical limitations of current practice: procedures lack formal verification, and human operators introduce persistent reliability issues that four decades of training improvements have failed to eliminate. Training and procedural improvements cannot solve these problems—but formal methods might, offering mathematical guarantees of correctness that could eliminate both human error and procedural ambiguity.
-Yet even the most advanced formal methods applications in nuclear control leave a critical verification gap for autonomous hybrid systems. This subsection examines two approaches illustrating this gap: HARDENS, which verified discrete logic without continuous dynamics, and differential dynamic logic, which handles hybrid verification only post-hoc. Each demonstrates the current state of formal methods while revealing the verification gap this research addresses.
+Even the most advanced formal methods applications in nuclear control, however, leave a critical verification gap for autonomous hybrid systems. This subsection examines two approaches illustrating this gap: HARDENS, which verified discrete logic without continuous dynamics, and differential dynamic logic, which handles hybrid verification only post-hoc. Each demonstrates the current state of formal methods while revealing the verification gap this research addresses.
\subsubsection{HARDENS: The State of Formal Methods in Nuclear Control}
@@ -155,4 +155,4 @@ This section answered two Heilmeier questions about current practice:
\textbf{What are the limits of current practice?} No existing methodology synthesizes provably correct hybrid controllers from operational procedures with verification integrated into the design process. Current approaches verify either discrete logic or continuous dynamics—never both compositionally. This gap prevents autonomous nuclear control with end-to-end correctness guarantees.
-Two imperatives converge. Economic: small modular reactors cannot compete with per-megawatt staffing costs matching large conventional plants. Technical: current approaches lack compositional verification for hybrid systems. Section 3 addresses this gap. It establishes what makes the proposed approach new and why it will succeed where prior work has failed.
+Two imperatives converge: economic (small modular reactors cannot compete with per-megawatt staffing costs matching large conventional plants) and technical (current approaches lack compositional verification for hybrid systems). Section 3 addresses this gap by establishing what makes the proposed approach new and why it will succeed where prior work has failed.


@@ -15,13 +15,13 @@
% ----------------------------------------------------------------------------
% 1. INTRODUCTION AND HYBRID SYSTEMS DEFINITION
% ----------------------------------------------------------------------------
-Previous approaches verified either discrete switching logic or continuous control behavior—never both simultaneously. Continuous controllers rely on extensive simulation trials for validation. Discrete switching logic undergoes simulated control room testing and human factors research. Neither method provides rigorous guarantees—despite consuming enormous resources.
+Previous approaches verified either discrete switching logic or continuous control behavior—never both simultaneously. Continuous controllers rely on extensive simulation trials for validation. Discrete switching logic undergoes simulated control room testing and human factors research. Despite consuming enormous resources, neither method provides rigorous guarantees.
-This work bridges the gap. It composes formal methods from computer science with control-theoretic verification. Reactor operations formalize as hybrid automata.
+This work bridges the gap by composing formal methods from computer science with control-theoretic verification. Reactor operations formalize as hybrid automata.
Hybrid system verification faces a fundamental challenge. Discrete transitions change the governing vector field. This creates discontinuities in system behavior through the interaction between discrete and continuous dynamics. Traditional verification techniques fail to handle this interaction directly.
-Our methodology decomposes the problem. It verifies discrete switching logic and continuous mode behavior separately. Then it composes them to establish guarantees for the complete hybrid system. This two-layer approach mirrors reactor operations. Discrete supervisory logic determines which control mode is active. Continuous controllers govern plant behavior within each mode.
+Our methodology decomposes the problem by verifying discrete switching logic and continuous mode behavior separately, then composing them to establish guarantees for the complete hybrid system. This two-layer approach mirrors reactor operations: discrete supervisory logic determines which control mode is active, while continuous controllers govern plant behavior within each mode.
Building a high-assurance hybrid autonomous control system requires
a mathematical description of the system. This work draws on
@@ -54,7 +54,7 @@ where:
\item $Inv$: safety invariants on the continuous dynamics
\end{itemize}
-Creating a HAHACS requires constructing this tuple together with proof artifacts that demonstrate the control system's actual implementation satisfies its intended behavior.
+Creating a HAHACS requires constructing this tuple together with proof artifacts demonstrating that the control system's actual implementation satisfies its intended behavior.
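The tuple itself can be pictured as a plain data structure holding modes, flows, guards, and invariants. A minimal sketch, assuming the usual hybrid-automaton components and using a toy two-mode thermostat-style example rather than any real reactor model:

```python
# Illustrative encoding of a hybrid automaton as a plain data structure.
# Component names follow the usual (modes, flows, guards, invariants)
# convention; all dynamics and thresholds are toy values.

from dataclasses import dataclass


@dataclass
class HybridAutomaton:
    modes: list        # discrete modes Q
    flows: dict        # mode -> continuous dynamics, dx/dt = f_q(x)
    guards: dict       # (mode, mode) -> transition condition on x
    invariants: dict   # mode -> safety invariant Inv_q(x)


ha = HybridAutomaton(
    modes=["heat_up", "hold"],
    flows={
        "heat_up": lambda x: 2.0,                 # constant ramp while heating
        "hold":    lambda x: -0.5 * (x - 300.0),  # relax toward setpoint
    },
    guards={("heat_up", "hold"): lambda x: x >= 300.0},
    invariants={
        "heat_up": lambda x: x <= 320.0,
        "hold":    lambda x: 280.0 <= x <= 320.0,
    },
)

assert ha.invariants["heat_up"](295.0)
assert ha.guards[("heat_up", "hold")](301.0)
```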
\textbf{What is new in this research?} Section 2 established that existing approaches verify either discrete logic or continuous dynamics—never both compositionally. Reactive synthesis, reachability analysis, and barrier certificates each exist independently. Three key innovations compose them into a complete methodology for hybrid control synthesis:
@@ -140,7 +140,7 @@ Feasibility demonstrates on production control systems with realistic reactor mo
\subsection{System Requirements, Specifications, and Discrete Controllers}
-The previous subsection established the hybrid automaton formalism—a mathematical framework for describing discrete modes, continuous dynamics, guards, and invariants. But where do these formal descriptions come from? This subsection shows how to construct such systems from existing operational knowledge rather than imposing artificial abstractions. Nuclear operations already possess a natural hybrid structure. This structure maps directly to the automaton formalism through three control scopes: strategic, operational, and tactical.
+The previous subsection established the hybrid automaton formalism—a mathematical framework for describing discrete modes, continuous dynamics, guards, and invariants. But where do these formal descriptions come from? Nuclear operations already possess a natural hybrid structure that maps directly to the automaton formalism through three control scopes: strategic, operational, and tactical. This subsection shows how to construct formal hybrid systems from existing operational knowledge rather than imposing artificial abstractions.
Human control of nuclear power divides into three scopes: strategic, operational, and tactical. Strategic control represents high-level, long-term decision making spanning months or years: managing labor needs and supply chains to optimize scheduled maintenance and downtime.
@@ -150,7 +150,7 @@ The operational scope links these extremes, representing the primary responsibil
An example clarifies this three-level structure. Consider a strategic goal to perform refueling at a certain time. The tactical level currently maintains core temperature. The operational level issues the shutdown procedure, using several smaller tactical goals along the way to achieve this objective.
-This structure reveals why the operational and tactical levels fundamentally form a hybrid controller. The tactical level represents continuous plant evolution according to control input and control law. The operational level represents discrete state evolution that determines which tactical control law applies. This operational level becomes the target for autonomous control.
+This structure reveals why the operational and tactical levels fundamentally form a hybrid controller: the tactical level represents continuous plant evolution according to control input and control law, while the operational level represents discrete state evolution determining which tactical control law applies. This operational level becomes the target for autonomous control.
\begin{figure}
@@ -211,10 +211,9 @@ continuous state space: $p_i: \mathcal{X} \rightarrow \{\text{true},
temperature exceeds 315°C'' or ``pressurizer level is between 30\% and 60\%.''
We do not impose this discrete abstraction artificially. Operating
procedures for nuclear systems already define go/no-go conditions as discrete
-predicates. Design basis safety analysis determined these thresholds. Decades of operational experience have
-validated them. Our methodology assumes
-this domain knowledge exists and provides a framework to formalize it. The approach is feasible for nuclear applications because generations
-of nuclear engineers have already completed the hard
+predicates. Design basis safety analysis determined these thresholds; decades of operational experience validated them. Our methodology assumes
+this domain knowledge exists and provides a framework to formalize it. The approach proves feasible for nuclear applications because generations
+of nuclear engineers already completed the hard
work of defining safe operating boundaries.
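Predicates of this kind are straightforward to encode once the thresholds are fixed. A minimal sketch, with hypothetical variable names and the example thresholds quoted above:

```python
# Discrete atomic predicates p_i over the continuous plant state, mirroring
# the go/no-go conditions that operating procedures already define.
# Variable names and state layout are illustrative, not from any real I&C system.

plant_state = {"coolant_temp_C": 318.0, "pressurizer_level_pct": 45.0}


def coolant_temp_high(x):
    """True when coolant temperature exceeds 315 degrees C."""
    return x["coolant_temp_C"] > 315.0


def pressurizer_level_normal(x):
    """True when pressurizer level is between 30% and 60%."""
    return 30.0 <= x["pressurizer_level_pct"] <= 60.0


print(coolant_temp_high(plant_state))         # -> True
print(pressurizer_level_normal(plant_state))  # -> True
```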
Linear temporal logic (LTL) is particularly well-suited for
@@ -236,13 +235,13 @@ coolant pressure drops, the system initiates shutdown within bounded time'').
This work uses FRET (Formal Requirements Elicitation Tool) to build these temporal logic statements. NASA developed FRET for high-assurance timed systems. FRET provides an intermediate language between temporal logic and natural language. It enables rigid definitions of temporal behavior through syntax accessible to engineers without formal methods expertise. This accessibility proves crucial for industrial feasibility. Reducing required expert knowledge makes these tools adoptable by the current nuclear workforce.
FRET's key feature is its ability to start with logically imprecise
-statements and refine them consecutively into well-posed specifications. We can
+statements and refine them consecutively into well-posed specifications. We
leverage this by directly importing operating procedures and design
requirements into FRET in natural language, then iteratively refining them into
-specifications for a HAHACS. This approach provides two distinct benefits. First, it draws a direct link from design documentation to digital system
-implementation. Second, it clearly demonstrates where natural language documents
+specifications for a HAHACS. This approach provides two distinct benefits: first, it draws a direct link from design documentation to digital system
+implementation; second, it clearly demonstrates where natural language documents
fall short. Human operators may still use these procedures, making any
-room for interpretation a weakness that must be addressed.
+room for interpretation a weakness requiring correction.
FRET has been successfully applied to spacecraft control systems, autonomous vehicle requirements, and medical device specifications. NASA used FRET for the Lunar Gateway project, formalizing flight software requirements that were previously specified only in natural language. The Defense Advanced Research Projects Agency (DARPA) employed FRET in the Assured Autonomy program to verify autonomous systems requirements. These applications demonstrate FRET's maturity for safety-critical domains. Nuclear control procedures present an ideal use case: they are already structured, detailed, and written to minimize ambiguity—precisely the characteristics that enable successful formalization.
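For concreteness, a bounded-response requirement of the kind quoted earlier ("when coolant pressure drops, the system initiates shutdown within bounded time") takes this familiar shape in a time-bounded variant of LTL, with hypothetical predicate names:

```latex
\[
\mathbf{G}\bigl(\mathit{pressure\_low} \;\rightarrow\; \mathbf{F}_{\le T}\,\mathit{shutdown\_initiated}\bigr)
\]
```

Here $\mathbf{G}$ reads "always," $\mathbf{F}_{\le T}$ reads "eventually within $T$ time units," and $T$ would be derived from the available response timeframe in the procedure.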
%%% NOTES (Section 2):
@@ -264,9 +263,9 @@ Reactive synthesis automates the creation of reactive programs from temporal log
Reactive synthesis solves a fundamental problem: given an LTL formula $\varphi$ specifying desired system behavior, automatically construct a finite-state machine (strategy) that produces outputs in response to environment inputs such that all resulting execution traces satisfy $\varphi$. If such a strategy exists, the specification is \emph{realizable}. The synthesis algorithm either produces a correct-by-construction controller or reports that no such controller exists. Unrealizable specifications indicate conflicting or impossible requirements in
the original procedures—this realizability check catches errors before implementation.
-Reactive synthesis offers a decisive advantage. The discrete automaton requires no human engineering of its implementation. The resultant automaton is correct by construction. This eliminates human error at the implementation stage—entirely. Human designers can focus their effort where it belongs: on specifying system behavior, not implementing switching logic.
+Reactive synthesis offers a decisive advantage: the discrete automaton requires no human engineering of its implementation. The resultant automaton is correct by construction, eliminating human error at the implementation stage entirely. Human designers focus their effort where it belongs—on specifying system behavior, not implementing switching logic.
-This shift has two critical implications. First, complete traceability. The reasons the controller changes between modes trace back through specifications to requirements. This establishes clear liability and justification for system behavior. Second, deterministic guarantees replace probabilistic human judgment. Human operators cannot eliminate error from discrete control decisions. Humans are intrinsically fallible. Defining system behavior using temporal logics and synthesizing the controller using deterministic algorithms ensures strategic decisions always follow operating procedures exactly—no exceptions, no deviations, no human factors.
+This shift carries two critical implications. First, complete traceability: the reasons the controller changes between modes trace back through specifications to requirements, establishing clear liability and justification for system behavior. Second, deterministic guarantees replace probabilistic human judgment. Human operators cannot eliminate error from discrete control decisions; humans are intrinsically fallible. Defining system behavior using temporal logics and synthesizing the controller using deterministic algorithms ensures strategic decisions always follow operating procedures exactly—no exceptions, no deviations, no human factors.
The synthesized automaton translates directly to executable code through standard compilation techniques. Each discrete state maps to a control mode, guard conditions map to conditional statements, and the transition function defines the control flow. This compilation process preserves the formal guarantees: the implemented code is correct by construction because the automaton it derives from was synthesized to satisfy the temporal logic specifications.
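This compilation pattern can be pictured as a small interpreter over the synthesized transition function. A minimal sketch, with invented mode and guard names rather than output from any real synthesis tool:

```python
# Sketch of executing a synthesized discrete automaton: each state is a
# control mode, guards become conditionals, and the transition function
# defines the control flow. Mode and guard names are invented for
# illustration; a real system would also enforce mutually exclusive guards.

TRANSITIONS = {
    # (current mode, guard that fired) -> successor mode
    ("startup", "criticality_reached"): "power_ascension",
    ("power_ascension", "rated_power_reached"): "steady_state",
    ("steady_state", "coolant_pressure_low"): "shutdown",
}


def step(mode, active_guards):
    """Return the successor mode; remain in place if no guard fires."""
    for guard in active_guards:
        successor = TRANSITIONS.get((mode, guard))
        if successor is not None:
            return successor
    return mode


mode = "startup"
mode = step(mode, {"criticality_reached"})
mode = step(mode, {"rated_power_reached"})
mode = step(mode, {"coolant_pressure_low"})
assert mode == "shutdown"
```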
@@ -288,7 +287,7 @@ The previous subsection established that reactive synthesis produces a provably
This subsection describes the continuous control modes executing within each discrete state and explains how they verify against requirements imposed by the discrete layer. Control objectives determine the verification approach. Modes classify into three types—transitory, stabilizing, and expulsory—each requiring different verification tools matched to their distinct purposes.
-This methodology's scope requires clarification: this work verifies continuous controllers but does not synthesize them. The distinction parallels model checking in software verification, where verification confirms whether a given implementation satisfies its specification without prescribing how to write the software. Engineers design continuous controllers using standard control theory techniques—this work assumes that design capability exists. The contribution provides the verification framework confirming that candidate controllers compose correctly with the discrete layer to produce a safe hybrid system.
+This methodology's scope requires clarification: this work verifies continuous controllers but does not synthesize them. The distinction parallels model checking in software verification, where verification confirms whether a given implementation satisfies its specification without prescribing how to write the software. Engineers design continuous controllers using standard control theory techniques—this work assumes that design capability exists. The contribution is the verification framework confirming that candidate controllers compose correctly with the discrete layer to produce a safe hybrid system.
The operational control scope defines go/no-go decisions that determine what
kind of continuous control to implement. The entry or exit conditions of a
@@ -341,9 +340,7 @@ tools matched to its control objective. Transitory modes drive the plant between
\subsubsection{Transitory Modes}
-Transitory modes execute transitions between operating conditions—the first of three continuous controller types.
-Transitory modes move the plant from one discrete operating condition to another. Their purpose is to execute transitions: start from entry conditions, reach exit conditions, and maintain safety invariants throughout. Examples include power ramp-up sequences, cooldown procedures, and load-following maneuvers.
+Transitory modes—the first of three continuous controller types—execute transitions between operating conditions. Their purpose is to move the plant from one discrete operating condition to another: start from entry conditions, reach exit conditions, and maintain safety invariants throughout. Examples include power ramp-up sequences, cooldown procedures, and load-following maneuvers.
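A transitory mode's reach-avoid objective (start in the entry set, reach the exit set, never leave the safe set) can be sanity-checked in simulation before formal reachability tools replace sampled trajectories with guaranteed enclosures. A minimal sketch with toy scalar cooldown dynamics; every numerical value is illustrative:

```python
# Simulation-based sanity check of a reach-avoid objective for a
# transitory control mode. Toy scalar cooldown dynamics dx/dt = -k(x - x_ref);
# a formal reachability tool would replace sampling with guaranteed enclosures.

def simulate(x0, k=0.05, x_ref=140.0, dt=0.1, t_max=200.0):
    """Euler-integrate the closed-loop dynamics; return the trajectory."""
    traj = [x0]
    for _ in range(int(t_max / dt)):
        x = traj[-1]
        traj.append(x + dt * (-k * (x - x_ref)))
    return traj


def reach_avoid(traj, exit_max=160.0, safe_lo=100.0, safe_hi=320.0):
    """True if the trajectory enters the exit set while staying in the safe set."""
    for x in traj:
        if not (safe_lo <= x <= safe_hi):
            return False   # safety invariant violated
        if x <= exit_max:
            return True    # exit conditions reached
    return False           # never reached exit conditions in time

# Sample the (hypothetical) entry set [290, 300] degrees C.
entry_samples = [290.0 + i for i in range(11)]
assert all(reach_avoid(simulate(x0)) for x0 in entry_samples)
print("reach-avoid check passed for all sampled entry states")
```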
We can state the control objective for a transitory mode formally. Given entry conditions $\mathcal{X}_{entry}$, exit conditions $\mathcal{X}_{exit}$, safety invariant $\mathcal{X}_{safe}$, and closed-loop dynamics $\dot{x} = f(x, u(x))$, the controller must satisfy:
\[
@@ -394,9 +391,9 @@ appropriate to the fidelity of the reactor models available.
\subsubsection{Stabilizing Modes}
-The previous subsection addressed transitory modes—modes that drive the system toward exit conditions. Stabilizing modes do the opposite. They maintain the system within a desired operating region indefinitely. Examples include steady-state power operation, hot standby, and load-following at constant power level. This different control objective requires a different verification approach.
+The previous subsection addressed transitory modes—modes that drive the system toward exit conditions. Stabilizing modes do the opposite: they maintain the system within a desired operating region indefinitely. Examples include steady-state power operation, hot standby, and load-following at constant power level. This different control objective requires a different verification approach.
-Reachability analysis answers "can the system reach a target?" Stabilizing modes ask instead "does the system stay within bounds?" Barrier certificates provide the appropriate tool.
+Where reachability analysis answers "can the system reach a target?", stabilizing modes ask "does the system stay within bounds?" Barrier certificates provide the appropriate tool.
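For reference, one standard formulation defines a barrier certificate as a differentiable function $B$ whose zero level set separates reachable states from unsafe ones; the notation here is illustrative, with $\mathcal{X}_0$ the initial set and $f$ the closed-loop dynamics:

```latex
\[
B(x) \le 0 \;\; \forall x \in \mathcal{X}_0, \qquad
B(x) > 0 \;\; \forall x \in \mathcal{X}_{\mathit{unsafe}}, \qquad
\nabla B(x) \cdot f(x) \le 0 \;\; \text{whenever } B(x) = 0.
\]
\]
```

The third condition forbids outward flux across the zero level set, so no trajectory starting in $\mathcal{X}_0$ can cross into the unsafe region.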
Barrier certificates analyze the dynamics of the system to determine whether
flux across a given boundary exists. They evaluate whether any trajectory leaves
a given boundary. This definition exactly matches what defines the validity of a
@@ -448,7 +445,7 @@ controller.
\subsubsection{Expulsory Modes}
-The first two mode types handle nominal operations. Transitory modes move the plant between conditions. Stabilizing modes maintain the plant within regions. Both assume plant dynamics match the design model. Expulsory modes handle the opposite case: when the plant deviates from expected behavior. Component failures, sensor degradation, or unanticipated disturbances cause this deviation.
+The first two mode types handle nominal operations: transitory modes move the plant between conditions, while stabilizing modes maintain the plant within regions. Both assume plant dynamics match the design model. Expulsory modes handle the opposite case—when the plant deviates from expected behavior due to component failures, sensor degradation, or unanticipated disturbances.
Expulsory controllers prioritize robustness over optimality. The control objective shifts from reaching targets or maintaining regions to driving the plant to a safe shutdown state from potentially anywhere in the state space, under degraded or uncertain dynamics. Examples include emergency core cooling, reactor SCRAM sequences, and controlled depressurization procedures.
@@ -529,11 +526,11 @@ outcomes can align best with customer needs.
This section answered two critical Heilmeier questions about the research approach:
\textbf{What is new in this research?} This work integrates reactive synthesis, reachability analysis, and barrier certificates into a compositional methodology for hybrid control synthesis. Three innovations distinguish this approach. First, using discrete synthesis to define verification contracts—inverting traditional global analysis. Second, classifying continuous modes by objective to select appropriate verification tools. Third, leveraging existing procedural structure to avoid intractable state explosion. Section 2 established that prior work verified either discrete logic or continuous dynamics—never both compositionally. Compositional verification enables what global analysis cannot achieve.
\textbf{What is new in this research?} This work integrates reactive synthesis, reachability analysis, and barrier certificates into a compositional methodology for hybrid control synthesis through three innovations: first, using discrete synthesis to define verification contracts—inverting traditional global analysis; second, classifying continuous modes by objective to select appropriate verification tools; third, leveraging existing procedural structure to avoid intractable state explosion. Section 2 established that prior work verified either discrete logic or continuous dynamics—never both compositionally. Compositional verification enables what global analysis cannot achieve.
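The first innovation, using discrete synthesis to define verification contracts, can be illustrated with a toy one-dimensional sketch: each mode carries an assumed entry set and a guaranteed exit set, and a discrete transition is verified compositionally by checking containment. The interval model and the startup mode names below are hypothetical simplifications; real contracts would be sets over the full plant state space.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModeContract:
    """Assume-guarantee contract for one continuous mode (1-D toy model)."""
    name: str
    entry: tuple[float, float]  # assumed set of states on mode entry
    exit: tuple[float, float]   # guaranteed set of states on mode exit

def transition_ok(src: ModeContract, dst: ModeContract) -> bool:
    """A discrete transition src -> dst is verified if every state the
    source mode can exit in is a state the destination mode accepts."""
    lo, hi = src.exit
    return dst.entry[0] <= lo and hi <= dst.entry[1]

# Hypothetical startup fragment: a heat-up mode hands off to a
# power-ascension mode only if its exit set fits the next entry set.
heat_up = ModeContract("heat_up", entry=(20.0, 50.0), exit=(280.0, 300.0))
ascend = ModeContract("power_ascension", entry=(275.0, 305.0), exit=(300.0, 320.0))

assert transition_ok(heat_up, ascend)  # contract containment holds
```

Because each check involves only one pair of contracts, verification effort grows with the number of transitions rather than with the product of all mode state spaces, which is the compositional advantage claimed above.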
\textbf{Why will this approach succeed?} Three factors ensure practical feasibility: first, nuclear procedures already decompose operations into discrete phases with explicit transition criteria, allowing the approach to formalize existing structure rather than impose artificial abstractions; second, mode-level verification bounds each verification problem locally, avoiding the state explosion that makes global hybrid system analysis intractable; third, the Emerson collaboration provides both domain expertise to validate procedure formalization and industrial hardware to demonstrate implementation feasibility.
The methodology is now complete—procedure formalization, discrete synthesis, continuous verification across three mode types, and hardware implementation. What remains are operational questions about executing this research plan: Section 4 addresses \textit{How will success be measured?} through Technology Readiness Level advancement; Section 5 addresses \textit{What could prevent success?} through risk analysis and contingency planning; Section 6 addresses \textit{Who cares? Why now? What difference will it make?} through economic and societal impact analysis.
%%% NOTES (Section 5):
% - Get specific details on ARCADE interface from Emerson collaboration

\textbf{How do we measure success?} This research advances through Technology Readiness Levels, progressing from fundamental concepts (TRL 2--3) to validated prototype demonstration (TRL 5).
This work begins at TRL 2--3 and aims to reach TRL 5, where system components operate successfully in a relevant laboratory environment. TRL advancement provides the most appropriate success metric by bridging the gap between academic proof-of-concept and practical deployment. This section first explains why, then defines specific criteria for each level from TRL 3 through TRL 5.
Technology Readiness Levels provide the ideal success metric by explicitly measuring the gap between academic proof-of-concept and practical deployment—precisely what this work bridges. Academic metrics like papers published or theorems proved fail to capture practical feasibility. Empirical metrics like simulation accuracy or computational speed fail to demonstrate theoretical rigor. TRLs measure both simultaneously.
This section answered the Heilmeier question \textbf{How do we measure success?} Technology Readiness Level advancement from 2--3 to 5 provides the answer: success means demonstrating both theoretical correctness and practical feasibility through progressively integrated validation. TRL 3 proves component-level correctness; TRL 4 demonstrates system-level integration in simulation; TRL 5 validates hardware implementation in a relevant environment. Achieving TRL 5 proves the methodology produces verified controllers implementable with current technology.
Reaching TRL 5 depends on several critical assumptions. If these assumptions prove false, the research could stall at lower readiness levels despite sound methodology. Section 5 addresses the complementary Heilmeier question \textbf{What could prevent success?} by identifying primary risks, establishing early warning indicators, and defining contingency plans that preserve research value even when core assumptions fail.

\subsection{Computational Tractability of Synthesis}
The first major assumption: formalized startup procedures will yield automata small enough for efficient synthesis and verification. Reactive synthesis scales exponentially with specification complexity. Temporal logic specifications derived from complete startup procedures may produce automata with thousands of states, requiring synthesis times exceeding days or weeks—preventing completion of the methodology within project timelines. Reachability analysis for continuous modes with high-dimensional state spaces may similarly prove computationally intractable. Either barrier would constitute a fundamental obstacle to achieving research objectives.
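The scaling concern can be made concrete with a back-of-envelope calculation. Translating a temporal logic formula into an automaton is exponential in formula size in the worst case (on the order of $2^{|\varphi|}$), so modest growth in a formalized procedure's specification can push worst-case automaton sizes into the thousands of states. These are illustrative worst-case figures, not measurements from any synthesis run.

```python
def worst_case_states(formula_size: int) -> int:
    """Worst-case automaton size for temporal-logic-to-automaton
    translation, which is exponential in formula size."""
    return 2 ** formula_size

# A specification with a handful of temporal operators stays small,
# but one covering a full startup procedure can explode.
for size in (5, 10, 15, 20):
    print(size, worst_case_states(size))  # 32, 1024, 32768, 1048576
```

Even a size-10 specification already admits automata with over a thousand states in the worst case, which is why tractability is treated as a first-order risk rather than an implementation detail.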
This section answered the Heilmeier question \textbf{What could prevent success?} Four primary risks threaten project completion: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, completeness of procedure formalization, and hardware-in-the-loop integration challenges. Each risk has identifiable early warning indicators and viable mitigation strategies. The staged project structure ensures that partial success yields publishable results while clearly identifying remaining barriers to deployment. Even when core assumptions prove invalid, the research produces valuable contributions advancing the field.
The technical research plan is now complete: Section 3 established what will be done; Section 4 established how success will be measured; this section established what might prevent it. One critical Heilmeier question remains—\textbf{Who cares? Why now? What difference will it make?}—which Section 6 answers by connecting this technical methodology to urgent economic and infrastructure challenges facing the nuclear industry and broader energy sector.

Three stakeholder groups face the same economic constraint: the nuclear industry, datacenter operators, and clean energy advocates. All confront high operating costs driven by staffing requirements. AI infrastructure demands, growing exponentially, have made this constraint urgent.
Nuclear power presents both a compelling application domain and an urgent economic challenge. Recent interest in powering artificial intelligence infrastructure has renewed focus on small modular reactors (SMRs), particularly for hyperscale datacenters requiring hundreds of megawatts of continuous power. SMRs deployed at datacenter sites minimize transmission losses and eliminate emissions, but nuclear power economics at this scale demand careful attention to operating costs.
The U.S. Energy Information Administration's Annual Energy Outlook 2022 projects advanced nuclear power entering service in 2027 will cost \$88.24 per megawatt-hour~\cite{eia_lcoe_2022}. Datacenter electricity demand is projected to reach 1,050 terawatt-hours annually by 2030~\cite{eesi_datacenter_2024}. Nuclear power supplying this demand would generate total annual costs exceeding \$92 billion. Operations and maintenance represents a substantial component: the EIA estimates that fixed O\&M costs alone account for \$16.15 per megawatt-hour, with additional variable O\&M costs embedded in fuel and operating expenses~\cite{eia_lcoe_2022}. Combined, O\&M-related costs represent approximately 23--30\% of total levelized cost, translating to \$21--28 billion annually for projected datacenter demand.
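The cost figures quoted above can be reproduced with straightforward arithmetic from the cited EIA and EESI inputs; a minimal check:

```python
# Cited inputs: EIA AEO 2022 LCOE and projected datacenter demand.
lcoe_per_mwh = 88.24      # $/MWh, advanced nuclear entering service 2027
fixed_om_per_mwh = 16.15  # $/MWh, fixed O&M component
demand_twh = 1050         # TWh/year projected datacenter demand by 2030

demand_mwh = demand_twh * 1_000_000        # 1 TWh = 1e6 MWh
total_cost = lcoe_per_mwh * demand_mwh     # total annual cost, $
fixed_om_cost = fixed_om_per_mwh * demand_mwh

print(f"total annual cost: ${total_cost / 1e9:.1f}B")    # ~ $92.7B
print(f"fixed O&M alone:   ${fixed_om_cost / 1e9:.1f}B") # ~ $17.0B

# O&M-related share of 23--30% of total levelized cost:
low, high = 0.23 * total_cost, 0.30 * total_cost
print(f"O&M-related range: ${low / 1e9:.0f}--{high / 1e9:.0f}B")  # ~ $21--28B
```

The fixed O&M line alone accounts for roughly \$17 billion of the total, which is why the 23--30\% O\&M-related share lands in the \$21--28 billion range cited above.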
Removing the need for constant human oversight enables a fundamental shift from direct operator control to supervisory monitoring, where operators oversee multiple autonomous reactors rather than manually controlling individual units.

The correct-by-construction methodology proves critical for this transition. Traditional automation approaches cannot provide sufficient safety guarantees for nuclear applications, where regulatory requirements and public safety concerns demand the highest levels of assurance. By formally verifying both the discrete mode-switching logic and the continuous control behavior, this research produces controllers with mathematical proofs of correctness. These guarantees enable automation to safely handle routine operations---startup sequences, power level changes, and normal operational transitions---that currently require human operators to follow written procedures.
This section answered three critical Heilmeier questions about impact:
\textbf{Who cares?} Three stakeholder groups face the same constraint: the nuclear industry faces an economic crisis for small modular reactors due to per-megawatt staffing costs; datacenter operators need hundreds of megawatts of continuous clean power for AI infrastructure; clean energy advocates need nuclear power to be economically competitive. All three need autonomous control with safety guarantees.
\textbf{Why now?} Two forces converge: first, exponentially growing AI infrastructure demands have created immediate need for economical nuclear power at datacenter scale, with projections showing datacenter electricity demand reaching 1,050 terawatt-hours annually by 2030; second, formal methods tools have matured, making compositional hybrid verification computationally achievable—what was theoretically possible but practically intractable a decade ago is now feasible.
\textbf{What difference will it make?} This research addresses a \$21--28 billion annual cost barrier by enabling autonomous control with mathematical safety guarantees. Beyond immediate economic impact, the methodology establishes a generalizable framework for safety-critical autonomous systems across critical infrastructure.
The complete research plan now spans technical approach, success metrics, risk mitigation, and broader impact. One final Heilmeier question remains—\textbf{How long will it take?}—which Section 8 answers with a structured 24-month research plan progressing through milestones tied to Technology Readiness Level advancement, demonstrating the proposed work is achievable within a doctoral timeline.