\section{Risks and Contingencies}

This research relies on several critical assumptions that, if invalidated,
would require scope adjustment or methodological revision. The primary risks to
successful completion fall into four categories: computational tractability of
synthesis and verification, complexity of the discrete-continuous interface,
completeness of procedure formalization, and hardware-in-the-loop integration
challenges. Each risk has associated indicators for early detection and
contingency plans that preserve research value even if core assumptions prove
false. The staged project structure ensures that partial success yields
publishable results and clear identification of remaining barriers to
deployment.
\subsection{Computational Tractability of Synthesis}

The first major assumption is that formalized startup procedures will yield
automata small enough for efficient synthesis and verification. Reactive
synthesis from linear temporal logic is doubly exponential in specification
size in the worst case, creating a risk that specifications derived from
complete startup procedures produce automata with thousands of states. Such
large automata would require synthesis times of days or weeks, preventing
demonstration of the complete methodology within project timelines.
Reachability analysis for continuous modes with high-dimensional state spaces
may similarly prove computationally intractable. Either barrier would
constitute a fundamental obstacle to achieving the research objectives.

Several indicators would provide early warning of computational tractability
problems. Synthesis times exceeding 24 hours for simplified procedure subsets
would suggest that complete procedures are intractable. Generated automata
containing more than 1,000 discrete states would indicate that the discrete
state space is too large for efficient verification. Specifications flagged as
unrealizable by FRET or STRIX would reveal fundamental conflicts in the
formalized procedures. Reachability analysis failing to converge within
reasonable time bounds would show that continuous mode verification cannot be
completed with available computational resources.

The contingency plan for computational intractability is to reduce scope to a
minimal viable startup sequence. This reduced sequence would cover only the
progression from cold shutdown through criticality to a low-power hold,
omitting power ascension and other operational phases. The subset would still
demonstrate the complete methodology while reducing computational burden. The
research contribution would remain valid even with reduced scope, proving that
formal hybrid control synthesis is achievable for safety-critical nuclear
applications. The limitation to simplified operational sequences would be
explicitly documented as a constraint rather than a failure.

Reachability analysis specifically can exploit the time-scale separation
inherent in reactor dynamics. Fast prompt-neutron transients can be treated as
quasi-steady relative to the slower thermal dynamics, which enables
decomposition into smaller subsystems. Temperature dynamics operate on time
scales of seconds to minutes, while neutron kinetics respond in milliseconds
to seconds for prompt effects and hours for xenon poisoning. These distinct
time scales permit separate analysis with conservative coupling assumptions
between subsystems, dramatically reducing the dimensionality of reachability
computations.
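
The decomposition above can be sketched with a reduced-order model. The
following is an illustrative prompt-jump calculation, not the project's
reactor model: the one-group point-kinetics parameters are assumed placeholder
values, and the approximation replaces the fast neutron state with an
algebraic relation so that only the slow precursor state is integrated.

```python
# Prompt-jump sketch: the fast neutron level n equilibrates almost instantly,
# n ~= lam * LAMBDA * C / (beta - rho), so reachability analysis need only
# track the slow delayed-neutron precursor state C. All parameters below are
# illustrative placeholders, not SmAHTR data.

BETA = 0.0065     # delayed-neutron fraction (assumed typical value)
LAMBDA = 1e-4     # prompt-neutron generation time [s] (assumed)
LAM = 0.08        # effective precursor decay constant [1/s] (assumed)

def n_quasi_steady(rho, C):
    """Quasi-steady neutron level under the prompt-jump approximation (rho < beta)."""
    return LAM * LAMBDA * C / (BETA - rho)

def integrate_slow(rho, C0, dt, steps):
    """Forward-Euler integration of the slow precursor dynamics only:
    dC/dt = (beta / LAMBDA) * n - lam * C, with n eliminated algebraically."""
    C = C0
    for _ in range(steps):
        n = n_quasi_steady(rho, C)
        C += dt * (BETA / LAMBDA * n - LAM * C)
    return C, n_quasi_steady(rho, C)
```

At zero reactivity the reduced model sits at equilibrium, matching the
expectation that a critical reactor holds its precursor inventory constant;
the point is that eliminating the fast state shrinks the continuous dimension
that reachability analysis must cover.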

Mitigation strategies exist even before contingency plans become necessary.
Access to the University of Pittsburgh Center for Research Computing provides
high-performance computing resources if single-workstation computation proves
insufficient. Parallel synthesis algorithms and distributed reachability
analysis can leverage these resources to extend computational feasibility.
Compositional verification approaches using assume-guarantee reasoning can
decompose monolithic verification problems into tractable subproblems, each of
which can be solved independently before composition.
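
One standard form of the assume-guarantee rule alluded to above (symbols
illustrative): component $M_1$ discharges assumption $A$ into guarantee $G_1$,
which in turn serves as the assumption for $M_2$:
\[
\frac{\langle A \rangle\, M_1\, \langle G_1 \rangle \qquad
      \langle G_1 \rangle\, M_2\, \langle G \rangle}
     {\langle A \rangle\, M_1 \parallel M_2\, \langle G \rangle}
\]
Each premise is a smaller verification problem than the monolithic check of
$M_1 \parallel M_2$.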
\subsection{Discrete-Continuous Interface Formalization}

The second critical assumption concerns the mapping between boolean guard
conditions in temporal logic and the continuous state boundaries required for
mode transitions. This interface represents the fundamental challenge of
hybrid systems: relating discrete switching logic to continuous dynamics.
Temporal logic operates on boolean predicates, while continuous control
requires reasoning about differential equations and reachable sets. Guard
conditions that require complex nonlinear predicates may resist boolean
abstraction, making synthesis intractable. Continuous safety regions that
cannot be expressed as conjunctions of verifiable constraints would similarly
create insurmountable verification challenges. The risk extends beyond static
interface definition to dynamic behavior across transitions: barrier
certificates may fail to exist for proposed transitions, or continuous modes
may be unable to guarantee convergence to discrete transition boundaries.
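
As a purely illustrative example of this interface (names and thresholds
hypothetical), a mode-transition guard might abstract continuous state into a
single boolean proposition,
\[
g_{\mathrm{ascend}} \;\leftrightarrow\;
\bigl(T_{\mathrm{core}} \le T_{\max}\bigr) \wedge \bigl(P \ge P_{\min}\bigr),
\]
so that the synthesized automaton reasons only over $g_{\mathrm{ascend}}$
while reachability analysis must certify that the continuous dynamics actually
maintain, and can actually reach, the underlying inequalities.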

Early indicators of interface formalization problems would appear during both
synthesis and verification phases. Guard conditions requiring complex
nonlinear predicates that resist boolean abstraction would suggest fundamental
misalignment between discrete specifications and continuous realities.
Continuous safety regions that cannot be expressed as conjunctions of
half-spaces or polynomial inequalities would indicate that the interface
between discrete guards and continuous invariants is too complex. Failure to
construct barrier certificates proving safety across mode transitions would
reveal that the continuous dynamics cannot be formally related to discrete
switching logic. Reachability analysis showing that continuous modes cannot
reach intended transition boundaries from all possible initial conditions
would demonstrate that the synthesized discrete controller is incompatible
with achievable continuous behavior.

The primary contingency for interface complexity is to restrict continuous
modes to operate within polytopic invariants. Polytopes are state regions
defined as intersections of linear half-spaces, which map directly to boolean
predicates through linear inequality checks. This restriction ensures
tractable synthesis while maintaining theoretical rigor, though at the cost of
limiting expressiveness compared to arbitrary nonlinear regions. The
discrete-continuous interface remains well-defined and verifiable with
polytopic restrictions, providing a clear fallback position that preserves the
core methodology. Conservative over-approximations offer an alternative
approach: a nonlinear safe region can be inner-approximated by a polytope,
sacrificing operational flexibility to maintain formal guarantees. The
three-mode classification already structures the problem to minimize complex
transitions, with critical safety properties concentrated in expulsory modes
that can receive additional design attention.
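
The polytopic restriction admits a direct computational reading: membership in
an H-representation polytope $\{x : Ax \le b\}$ is exactly a conjunction of
linear inequality checks. A minimal sketch follows; the state layout and
bounds are assumed placeholders, not actual SmAHTR values.

```python
# A polytopic invariant {x : A x <= b} maps directly to a boolean guard via
# linear inequality checks. State is [T, P]; rows and bounds are illustrative.

POLYTOPE_A = [      # each row is one half-space normal
    [ 1.0,  0.0],   #  T <= 650   (core temperature upper bound, assumed)
    [-1.0,  0.0],   # -T <= -250  (i.e. T >= 250)
    [ 0.0,  1.0],   #  P <= 0.05  (power fraction upper bound, assumed)
    [ 0.0, -1.0],   # -P <= 0.0   (i.e. P >= 0)
]
POLYTOPE_B = [650.0, -250.0, 0.05, 0.0]

def in_polytope(x, A=POLYTOPE_A, b=POLYTOPE_B):
    """Boolean guard: True iff every half-space constraint A_i . x <= b_i holds."""
    return all(sum(ai * xi for ai, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))
```

The synthesized automaton can then treat `in_polytope` as a single atomic
proposition while verification reasons about the half-spaces individually.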

Mitigation strategies focus on designing continuous controllers with discrete
transitions as primary objectives from the outset. Rather than designing
continuous control laws independently and verifying transitions post-hoc, the
approach uses transition requirements as design constraints. Control barrier
functions provide a systematic method to synthesize controllers that guarantee
forward invariance of safe sets and convergence to transition boundaries. This
design-for-verification approach reduces the likelihood that interface
complexity becomes insurmountable. Focusing verification effort on expulsory
modes---where safety is most critical---allows more complex analysis to be
applied selectively rather than uniformly across all modes, concentrating
computational resources where they matter most for safety assurance.
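
The control barrier function condition invoked above has a standard form: for
a safe set $\mathcal{C} = \{x : h(x) \ge 0\}$ and control-affine dynamics
$\dot{x} = f(x) + g(x)u$, the function $h$ is a control barrier function if
\[
\sup_{u \in U} \bigl[ L_f h(x) + L_g h(x)\,u \bigr] \ge -\alpha\bigl(h(x)\bigr)
\]
for some extended class-$\mathcal{K}$ function $\alpha$; any controller
selecting $u$ that satisfies the bracketed inequality renders $\mathcal{C}$
forward invariant.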
\subsection{Procedure Formalization Completeness}

The third assumption is that existing SmAHTR startup procedures contain
sufficient detail and clarity for translation into temporal logic
specifications. Nuclear operating procedures, while extensively detailed, were
written for human operators who bring contextual understanding and adaptive
reasoning to their interpretation. Procedures may contain implicit knowledge,
ambiguous directives, or references to operator judgment that resist
formalization in current specification languages. Underspecified timing
constraints, ambiguous condition definitions, or gaps in operational coverage
would cause synthesis to fail or produce incorrect automata. The risk is not
merely that formalization is difficult, but that current procedures
fundamentally lack the precision required for autonomous control, revealing a
gap between human-oriented documentation and machine-executable
specifications.
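
To make the formalization gap concrete, a hypothetical procedural directive
such as ``do not withdraw control rods unless the source-range count rate is
stable'' might be rendered as the LTL safety property
\[
\mathbf{G}\,\bigl(\mathit{rod\_withdraw} \rightarrow
\mathit{count\_rate\_stable}\bigr),
\]
where the predicate $\mathit{count\_rate\_stable}$ is precisely the kind of
term whose threshold and observation window procedures typically leave to
operator judgment.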

Several indicators would reveal formalization completeness problems early in
the project. FRET realizability checks failing due to underspecified behaviors
or conflicting requirements would indicate that procedures do not form a
complete specification. Multiple valid interpretations of procedural steps
with no clear resolution would demonstrate that procedure language is
insufficiently precise for automated synthesis. Procedures referencing
``operator judgment,'' ``as appropriate,'' or similar discretionary language
for critical decisions would explicitly identify points where human reasoning
cannot be directly formalized. Domain experts unable to provide crisp answers
to specification questions about edge cases would suggest that the procedures
themselves do not fully define system behavior, relying instead on operator
training and experience to fill gaps.

The contingency plan treats inadequate specification as itself a research
contribution rather than a project failure. Documenting specific ambiguities
encountered would create a taxonomy of formalization barriers: timing
underspecification, missing preconditions, discretionary actions, and
undefined failure modes. Each category would be analyzed to understand why
current procedure-writing practices produce these gaps and what specification
languages would need to address them. Proposed extensions to FRETish or
similar specification languages would demonstrate how to bridge the gap
between current procedures and the precision needed for autonomous control.
The research output would shift from ``here is a complete autonomous
controller'' to ``here is what formal autonomous control requires that current
procedures do not provide, and here are language extensions to bridge that
gap.'' This contribution remains valuable to both the nuclear industry and
formal methods community, establishing clear requirements for next-generation
procedure development and autonomous control specification languages.

Early-stage procedure analysis with domain experts provides the primary
mitigation strategy. Collaboration through the University of Pittsburgh Cyber
Energy Center enables identification and resolution of ambiguities before
synthesis attempts, rather than discovering them during failed synthesis runs.
Iterative refinement with reactor operators and control engineers can clarify
procedural intent before formalization begins, reducing the risk of
discovering insurmountable specification gaps late in the project. Comparison
with procedures from multiple reactor designs---pressurized water reactors,
boiling water reactors, and advanced designs---may reveal common patterns and
standard ambiguities amenable to systematic resolution. This cross-design
analysis would strengthen the generalizability of any proposed specification
language extensions, ensuring they address industry-wide practices rather than
SmAHTR-specific quirks.
\subsection{Hardware-in-the-Loop Integration Complexity}

The fourth assumption is that the ARCADE interface can provide stable
real-time communication between Simulink simulation and Ovation control
hardware at the control rates required for reactor dynamics.
Hardware-in-the-loop testing introduces timing constraints, communication
latency, and platform compatibility challenges that are absent in pure
simulation. Control rates for reactor systems typically range from
10--100~Hz for continuous control to millisecond response times for
protection system actions. Control loop jitter, communication dropouts, or
computational limitations in the Ovation PLC could prevent successful HIL
validation even if the synthesized controller is theoretically correct.
Real-time operating system constraints, network latency, and hardware
execution speed may prove incompatible with verified timing assumptions
embedded in the controller design.

Early indicators would identify hardware integration problems before they
derail the entire validation effort. Communication dropouts or buffer overruns
between ARCADE and Ovation would indicate that the interface cannot maintain
stable real-time data exchange. The Ovation PLC proving unable to execute the
synthesized automaton at required speed would reveal fundamental computational
limitations of the target hardware platform. Timing analysis showing that
hardware cannot meet real-time deadlines assumed during verification would
demonstrate incompatibility between formal guarantees and physical
implementation constraints.

The contingency plan is to demonstrate the controller in software-in-the-loop
configuration with detailed timing analysis showing industrial hardware
feasibility. Software-in-the-loop testing executes the complete verified
controller in a real-time software environment that emulates hardware timing
constraints without requiring physical hardware. Combined with worst-case
execution time analysis of the synthesized automaton and continuous control
algorithms, software-in-the-loop validation can provide strong evidence of
implementability even without physical hardware demonstration. This approach
achieves TRL 4 rather than TRL 5, but still validates the synthesis
methodology and establishes a clear pathway to hardware deployment. The
research contribution remains intact: demonstrating that formal hybrid control
synthesis produces implementable controllers, with remaining barriers clearly
identified as hardware integration challenges rather than fundamental
methodological limitations.
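
The timing evidence described above can be prototyped before hardware is
available. A minimal software-in-the-loop timing harness follows; the
controller step is a placeholder, and real HIL timing would ultimately be
measured on the Ovation side.

```python
# SIL timing sketch: run a controller step at a fixed period, recording the
# worst observed execution time and any deadline overruns. This approximates
# the jitter and worst-case-execution-time evidence discussed in the text.
import time

def run_sil(controller_step, period_s=0.01, cycles=100):
    """Execute controller_step every period_s seconds; return (worst, misses)."""
    worst = 0.0
    misses = 0
    for _ in range(cycles):
        start = time.perf_counter()
        controller_step()
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
        if elapsed > period_s:
            misses += 1                      # deadline overrun
        else:
            time.sleep(period_s - elapsed)   # wait out the rest of the period
    return worst, misses
```

A recorded `worst` comfortably below the control period, over long runs and
across representative automaton states, is the kind of evidence the timing
analysis would present.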

Mitigation strategies leverage existing infrastructure and adopt early testing
practices. ARCADE has been successfully used for reactor simulation HIL
testing at the University of Pittsburgh, establishing feasibility in principle
and providing institutional knowledge about common integration challenges.
Conducting early integration testing during the synthesis phase, rather than
deferring HIL attempts until late in the project, identifies timing
constraints and communication requirements that can inform controller design.
Early testing ensures that synthesized controllers are compatible with
hardware limitations from the outset, rather than discovering
incompatibilities after synthesis is complete. The Ovation platform supports
multiple implementation approaches including function blocks, structured text,
and ladder logic, which provides flexibility in how synthesized automata are
realized and may enable workarounds if one implementation approach proves
problematic.