\section{State of the Art and Limits of Current Practice}

The principal aim of this research is to create autonomous reactor control
systems that are tractably safe. Understanding what is being automated requires
understanding how nuclear reactors are operated today. This section examines
reactor operating procedures, investigates limitations of human-based operation,
and reviews current formal methods approaches to reactor control systems.

\subsection{Current Reactor Procedures and Operation}

Nuclear plant operations are governed by a hierarchy of written procedures,
ranging from normal operating procedures for routine operations to Emergency
Operating Procedures (EOPs) for design-basis accidents and Severe Accident
Management Guidelines (SAMGs) for beyond-design-basis events. These procedures
exist because reactor operation requires deterministic responses to a wide range
of plant conditions, from routine power changes to catastrophic failures. These
procedures must comply with 10 CFR 50.34(b)(6)(ii) and are developed using
guidance from NUREG-0899~\cite{NUREG-0899, 10CFR50.34}. Procedures undergo
technical evaluation, simulator validation testing, and biennial review as part
of operator requalification under 10 CFR 55.59~\cite{10CFR55.59}. Despite this
rigor, procedures fundamentally lack exhaustive verification of key safety
properties.


\textbf{LIMITATION:} \textit{Procedures lack exhaustive verification of
correctness and completeness.} No mathematical proof exists that procedures
cover all possible plant states, that required actions can be completed within
available timeframes, or that transitions between procedure sets maintain safety
invariants. Paper-based procedures cannot ensure correct application, and even
computer-based procedure systems lack the formal guarantees that automated
reasoning could provide.


Nuclear plants operate with multiple control modes: automatic control, where
the reactor control system maintains target parameters through continuous
reactivity adjustment; manual control, where operators directly manipulate
the reactor; and various intermediate modes. In typical pressurized water
reactor operation, the reactor control system automatically maintains a
floating average temperature and compensates for power demand changes through
reactivity feedback loops alone. Safety systems, by contrast, operate with
full automation: Reactor Protection Systems trip automatically on safety
signals with millisecond response times, and engineered safety features
actuate automatically on accident signals without operator action.


This division between automated and human-controlled functions is itself the
hybrid control problem. Automated systems handle reactor protection: trips on
safety parameters, emergency core cooling actuation, containment isolation,
and basic process control~\cite{WRPS.Description, gentillon_westinghouse_1999}.
Human operators retain control of everything else: power level changes,
startup/shutdown sequences, mode transitions, and procedure implementation.


The persistent role of human error in nuclear safety incidents---despite decades
of improvements in training and procedures---provides the most compelling
motivation for formal automated control with mathematical safety guarantees.
Operators hold legal authority under 10 CFR Part 55 to make critical
decisions~\cite{10CFR55}, including departing from normal regulations during
emergencies. This authority itself introduces risk. The Three Mile Island (TMI)
accident demonstrated how a combination of personnel error, design deficiencies,
and component failures led to partial meltdown when operators misread confusing
and contradictory readings and shut off the emergency water
system~\cite{Kemeny1979}. The President's Commission on TMI identified a
fundamental ambiguity: placing responsibility for safe power plant operations on
the licensee without formal verification that operators can fulfill this
responsibility does not guarantee safety. This tension between operational
flexibility and safety assurance remains unresolved: the person responsible for
reactor safety is often the root cause of failures.\splitnote{``the person
responsible for reactor safety is often the root cause of failures'' —
devastating summary. Very effective.}


Multiple independent analyses converge on a striking statistic: 70--80\% of
nuclear power plant events are attributed to human error, versus
approximately 20\% to equipment failures~\cite{WNA2020}. More significantly,
the root cause of all severe accidents at nuclear power plants---Three Mile
Island, Chernobyl, and Fukushima Daiichi---has been identified as primarily
human factors~\cite{hogberg_root_2013}. A detailed analysis of 190 events at
Chinese nuclear power plants from 2007--2020~\cite{zhang_analysis_2025} found
that 53\% of events involved active errors, while 92\% were associated with
latent errors---organizational and systemic weaknesses that create conditions
for failure.\splitnote{Strong empirical grounding. The Chinese plant data is a
nice addition — shows this isn't just a Western regulatory perspective.}


\textbf{LIMITATION:} \textit{Human factors impose fundamental reliability
limits that cannot be overcome through training alone.} The persistent human
error contribution despite four decades of improvements demonstrates that
these limitations are fundamental rather than remediable features of
human-driven control.\splitnote{Well-stated. The ``four decades'' point
drives it home.}

\subsection{Formal Methods}
\subsubsection{HARDENS}

The High Assurance Rigorous Digital Engineering for Nuclear Safety (HARDENS)
project represents the most advanced application of formal methods to nuclear
reactor control systems to date~\cite{Kiniry2024}. HARDENS aimed to address a
fundamental dilemma: existing U.S. nuclear control rooms rely on analog
technologies from the 1950s--60s. This technology is obsolete compared to modern
control systems and incurs significant risk and cost. A U.S. Nuclear Regulatory
Commission report demonstrated that model-based systems engineering and formal
methods could design, verify, and implement a complex protection system meeting
regulatory criteria. The project delivered a Reactor Trip System (RTS)
implementation with traceability from regulatory requirements to verified
software through formal architecture specifications.


HARDENS employed formal methods tools at every level of system development,
from high-level requirements through executable models to generated code.
High-level specifications used Lando, SysMLv2, and FRET (NASA Formal
Requirements Elicitation Tool) to capture stakeholder requirements, domain
engineering, certification requirements, and safety requirements. Requirements
were analyzed for consistency, completeness, and realizability using SAT and
SMT solvers. Executable formal models used Cryptol to create a behavioral model
of the entire RTS, including all subsystems, components, and limited digital
twin models of sensors, actuators, and compute infrastructure. Automatic code
synthesis generated verifiable C implementations and SystemVerilog hardware
implementations directly from Cryptol models---eliminating the traditional
gap between specification and implementation where errors commonly arise.


Despite these accomplishments, HARDENS addressed only discrete digital
control logic. The Reactor Trip System verification covered discrete state
transitions, digital sensor processing, and discrete actuation outputs. It
did not model or verify continuous reactor dynamics. Real reactor safety
depends on the interaction between continuous processes---temperature,
pressure, neutron flux---evolving in response to discrete control decisions.
HARDENS verified the discrete controller in isolation but not closed-loop
hybrid system behavior.


\textbf{LIMITATION:} \textit{HARDENS addressed discrete control logic
without continuous dynamics or hybrid system verification.} Verifying
discrete control logic alone provides no guarantee that the closed-loop
system exhibits desired continuous behavior such as stability, convergence to
setpoints, or maintained safety margins.


HARDENS produced a demonstrator system at Technology Readiness Level 2--3
(analytical proof of concept with laboratory breadboard validation) rather than
a deployment-ready system validated through extended operational testing. The
NRC Final Report explicitly notes~\cite{Kiniry2024} that all material is
considered in development, not a finalized product, and that ``The demonstration
of its technical soundness was to be at a level consistent with satisfaction of
the current regulatory criteria, although with no explicit demonstration of how
regulatory requirements are met.''
%DAS 3/16/26. I double checked this quote. It's on page 4 of the HARDENS report
The project did not include deployment in actual nuclear facilities, testing
with real reactor systems under operational conditions, side-by-side validation
with operational analog RTS systems, systematic failure mode testing, NRC
licensing review, or human factors validation with licensed operators in
realistic control room scenarios.


\textbf{LIMITATION:} \textit{HARDENS achieved TRL 2--3 without experimental
validation.} While formal verification provides mathematical correctness
guarantees for the implemented discrete logic, the gap between formal
verification and actual system deployment involves myriad practical
considerations: integration with legacy systems, human-system interaction in
realistic operational contexts, and regulatory acceptance of formal methods as
primary assurance evidence all remain significant challenges.

\subsubsection{Temporal Logic and Formal Specification}

Formal verification of any system requires a precise language for stating
what the system must do. Temporal logic provides this language by extending
classical propositional logic with operators that express properties over
time. Where propositional logic can state that a condition is true or false,
temporal logic can state that a condition must always hold, must eventually
hold, or must hold until some other condition is met. This expressiveness
makes temporal logic the foundation for specifying reactive systems---systems
that continuously interact with their environment and must satisfy ongoing
behavioral requirements.
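A minimal sketch of these modalities in standard linear temporal logic
notation (the symbols below follow common textbook usage and are not tied to
any particular tool):

```latex
% Standard LTL modalities (illustrative):
%   \Box\phi                 -- "always":     \phi holds in every state
%   \Diamond\phi             -- "eventually": \phi holds in some state
%   \phi\,\mathcal{U}\,\psi  -- "until": \phi holds at least until \psi does
\[
  \Box\,\phi \;\equiv\; \neg\Diamond\neg\phi ,
  \qquad
  \phi\,\mathcal{U}\,\psi \;\rightarrow\; \Diamond\,\psi
\]
```

The duality on the left and the entailment on the right are standard
consequences of the operator semantics and illustrate how the modalities
interrelate.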
For nuclear control applications, temporal logic captures exactly the kinds
of properties that matter: safety properties (``the reactor never enters an
unsafe configuration''), liveness properties (``the system eventually reaches
operating temperature''), and response properties (``if coolant pressure
drops, shutdown initiates within bounded time''). These properties can be
derived directly from operating procedures and regulatory requirements,
creating a formal link between existing documentation and verifiable system
behavior.
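The three property classes above can be sketched as temporal formulas. The
predicate names and the time bound \(T\) below are hypothetical placeholders
for illustration, not terms drawn from an actual procedure:

```latex
% Illustrative formalizations; predicate names are hypothetical.
% The bounded "eventually within T" operator follows the style of
% metric temporal logic.
\begin{align*}
  &\Box\,\neg\mathit{unsafe}
    && \text{safety: never enter an unsafe configuration}\\
  &\Diamond\,\mathit{at\_temp}
    && \text{liveness: eventually reach operating temperature}\\
  &\Box\,(\mathit{p\_low} \rightarrow \Diamond_{\le T}\,\mathit{shutdown})
    && \text{response: shutdown within bounded time}
\end{align*}
```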
NASA's Formal Requirements Elicitation Tool (FRET) bridges the gap between
natural language requirements and temporal logic specifications. FRET
provides a structured intermediate language that allows engineers to define
temporal behavior without writing raw logic formulas. The HARDENS project
used FRET for requirement capture, and it has been validated on aerospace
systems including a Lift+Cruise aircraft with 53 formalized
requirements~\cite{Pressburger2023}. FRET's ability to start with imprecise
natural language and iteratively refine it into well-posed specifications
makes it particularly suited to formalizing nuclear operating procedures.
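As a rough illustration of this workflow, a hypothetical requirement in
FRET's structured condition/component/timing/response style might read as
follows; the signal names and numeric bound are invented, and the exact
FRETish phrasing may differ from this sketch:

```latex
% Hypothetical FRETish-style requirement (names and bound invented):
%   when coolant_pressure < P_min, the RTS shall within 2 seconds
%   satisfy shutdown_initiated
% FRET can translate such structured requirements into temporal logic,
% e.g. a bounded-response formula of the shape:
\[
  \Box\big(\mathit{coolant\_pressure} < P_{\min}
    \;\rightarrow\;
    \Diamond_{\le 2\,\mathrm{s}}\,\mathit{shutdown\_initiated}\big)
\]
```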
\subsubsection{Differential Dynamic Logic}

A separate line of work extends temporal logics to verify hybrid systems
directly, resulting in the field of differential dynamic logic (dL). dL
introduces two additional operators: the box operator and the diamond
operator. The box operator \([\alpha]\phi\) states that every run of the
hybrid system \(\alpha\) ends in a state satisfying \(\phi\); applied to a
repeating control loop, this enforces \(\phi\) as a safety invariant of the
system. The diamond operator \(\langle\alpha\rangle\phi\) states that at
least one run of \(\alpha\) reaches a state satisfying \(\phi\). This is a
declaration of a liveness property.
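The general shape of a dL safety assertion can be sketched as follows; the
control program, dynamics, and bounds here are schematic placeholders for
exposition, not a real reactor model:

```latex
% Schematic dL safety assertion: from any state satisfying the initial
% condition, every run of the repeated control/plant loop keeps the
% temperature T within bounds. ctrl stands for a discrete control
% program and {T' = f(T,u)} for the continuous plant dynamics.
\[
  T_{\min} \le T \le T_{\max}
  \;\rightarrow\;
  \big[\,\big(\mathit{ctrl};\ \{T' = f(T,u)\}\big)^{*}\,\big]\,
  \big(T_{\min} \le T \le T_{\max}\big)
\]
```

Read: if the system starts within its temperature bounds, then after every
iteration of choosing a control action and following the plant dynamics, it
remains within those bounds.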
While dL allows for the specification of these safety and liveness properties,
actually proving them for a given hybrid system is quite difficult. Automated
proof assistants such as KeYmaera X exist to help develop dL proofs of
systems, but have so far been insufficient for reasonably complex hybrid
systems; a central obstacle is proof search that fails to terminate.
Contract- and decomposition-based approaches have been developed to alleviate
these issues in nuclear power contexts, but they do not yet constitute a
complete design methodology~\cite{kapuria_using_2025}. Instead, these
approaches have been applied to systems designed a priori, and require expert
knowledge to construct the system proofs.

\textbf{LIMITATION:} \textit{Differential dynamic logic has been used for
post-hoc analysis of existing systems, not for the constructive design of
autonomous controllers.} Current dL-based approaches verify systems that
have already been designed, requiring expert knowledge to construct proofs.
They have not been applied to the synthesis of new controllers from
operational requirements.
|