\section{Risks and Contingencies}
This research relies on several critical assumptions that, if invalidated,
would require scope adjustment or methodological revision. The primary risks to
successful completion fall into four categories: computational tractability of
synthesis and verification, complexity of the discrete-continuous interface,
completeness of procedure formalization, and hardware-in-the-loop integration
challenges. Each risk has associated indicators for early detection and
contingency plans that preserve research value even if core assumptions prove
false. The staged project structure ensures that partial success yields
publishable results and clear identification of remaining barriers to
deployment.

\subsection{Computational Tractability of Synthesis}
The first major assumption is that formalized startup procedures will yield
automata small enough for efficient synthesis and verification. Reactive
synthesis scales exponentially with specification complexity, creating the risk
that temporal logic specifications derived from complete startup procedures
produce automata with thousands of states. Such large automata would require
synthesis times exceeding days or weeks, preventing demonstration of the
complete methodology within project timelines. Reachability analysis for
continuous modes with high-dimensional state spaces may similarly prove
computationally intractable. Either barrier would constitute a fundamental
obstacle to achieving the research objectives.

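For a rough sense of scale (the figures here are illustrative rather than
measured), suppose the formalized procedure tracks $n$ boolean plant conditions
such as interlocks, permissives, and alarm states alongside $m$ procedural
steps. In the worst case the synthesized automaton distinguishes every
combination,
\[
|S| \le m \cdot 2^{n},
\]
so even a modest fragment with $m = 50$ steps and $n = 10$ tracked conditions
admits on the order of $5 \times 10^{4}$ discrete states before unreachable
combinations are pruned, which is why automaton size serves as an explicit risk
indicator below.
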
Several indicators would provide early warning of computational tractability
problems. Synthesis times exceeding 24 hours for simplified procedure subsets
would suggest that complete procedures are intractable. Generated automata
containing more than 1,000 discrete states would indicate that the discrete
state space is too large for efficient verification. Specifications flagged as
unrealizable by FRET or Strix would reveal fundamental conflicts in the
formalized procedures. Reachability analysis failing to converge within
reasonable time bounds would show that continuous mode verification cannot be
completed with available computational resources.

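To make the realizability indicator concrete, the specifications in question
are temporal logic formulas over plant and controller predicates. A single
withdrawal-blocking step might, with purely hypothetical predicate names, take
the form
\[
\mathbf{G}\bigl(\mathit{ApproachToCriticality} \wedge (\mathit{DoublingTime} <
\tau_{\min}) \rightarrow \mathbf{X}\,\mathit{WithdrawalBlocked}\bigr),
\]
and realizability checking asks whether any controller strategy can satisfy all
such requirements simultaneously against every admissible plant behavior; a
negative answer from FRET or Strix signals conflicting or underspecified
obligations in the formalized procedure.
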
The contingency plan for computational intractability is to reduce scope to a
minimal viable startup sequence. This reduced sequence would cover only the
progression from cold shutdown through criticality to a low-power hold,
omitting power ascension and other operational phases. The subset would still
demonstrate the complete methodology while reducing computational burden. The
research contribution would remain valid even with reduced scope, showing that
formal hybrid control synthesis is achievable for safety-critical nuclear
applications. The limitation to simplified operational sequences would be
explicitly documented as a constraint rather than a failure.

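With hypothetical mode names, the reduced scope can be pictured as the
three-phase sequence
\[
\mathit{ColdShutdown} \;\longrightarrow\; \mathit{ApproachToCriticality}
\;\longrightarrow\; \mathit{LowPowerHold},
\]
where each arrow still carries its full set of guard conditions, mode
invariants, and transition proof obligations. Every element of the methodology
is therefore exercised end to end; only the number of modes and specifications
is reduced.
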
\subsection{Discrete-Continuous Interface Formalization}
The second critical assumption concerns the mapping between boolean guard
conditions in temporal logic and continuous state boundaries required for mode
transitions. This interface represents the fundamental challenge of hybrid
systems: relating discrete switching logic to continuous dynamics. Temporal
logic operates on boolean predicates, while continuous control requires
reasoning about differential equations and reachable sets. Guard conditions
that require complex nonlinear predicates may resist boolean abstraction,
making synthesis intractable. Continuous safety regions that cannot be
expressed as conjunctions of verifiable constraints would similarly create
insurmountable verification challenges. The risk extends beyond static
interface definition to dynamic behavior across transitions: barrier
certificates may fail to exist for proposed transitions, or continuous modes
may be unable to guarantee convergence to discrete transition boundaries.

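As a small example of the interface in question, with variables chosen purely
for illustration, a discrete guard $g_{12}$ enabling the transition from a
heatup mode to a criticality-approach mode might abstract the continuous
condition
\[
g_{12} \;\Leftrightarrow\; \bigl(T_{\mathrm{avg}} \ge T^{\star}\bigr) \wedge
\bigl(P \ge P^{\star}\bigr) \wedge \bigl(|\dot{T}_{\mathrm{avg}}| \le
r_{\max}\bigr).
\]
The discrete layer reasons only about the truth value of $g_{12}$, while the
continuous layer must certify that its closed-loop trajectories actually reach
and remain in the region where $g_{12}$ holds. The risk described above is
that, for realistic procedures, the right-hand side is not a conjunction of
simple inequalities but a nonlinear or trajectory-dependent condition that
resists this abstraction.
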
Early indicators of interface formalization problems would appear during both
synthesis and verification phases. Guard conditions requiring complex nonlinear
predicates that resist boolean abstraction would suggest fundamental
misalignment between discrete specifications and continuous realities.
Continuous safety regions that cannot be expressed as conjunctions of
half-spaces or polynomial inequalities would indicate that the interface
between discrete guards and continuous invariants is too complex. Failure to
construct barrier certificates proving safety across mode transitions would
reveal that the continuous dynamics cannot be formally related to discrete
switching logic. Reachability analysis showing that continuous modes cannot
reach intended transition boundaries from all possible initial conditions would
demonstrate that the synthesized discrete controller is incompatible with
achievable continuous behavior.

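For reference, the barrier-certificate obligation referred to above can be
stated in one common form. For a mode with closed-loop dynamics $\dot{x} =
f(x)$, a set $\mathcal{X}_{0}$ of states from which the transition is taken,
and an unsafe set $\mathcal{X}_{u}$ that must be avoided while the transition
completes, the task is to find a differentiable function $B$ satisfying
\[
B(x) \le 0 \;\; \forall x \in \mathcal{X}_{0}, \qquad
B(x) > 0 \;\; \forall x \in \mathcal{X}_{u}, \qquad
\nabla B(x) \cdot f(x) \le 0 \;\; \forall x : B(x) = 0.
\]
The indicator above corresponds to the practical failure in which no such $B$
can be found within the chosen template class, for example low-degree
polynomials.
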
The primary contingency for interface complexity is to restrict continuous
modes to operate within polytopic invariants. Polytopes are state regions
defined as intersections of linear half-spaces, which map directly to boolean
predicates through linear inequality checks. This restriction ensures tractable
synthesis while maintaining theoretical rigor, though at the cost of limiting
expressiveness compared to arbitrary nonlinear regions. The discrete-continuous
interface remains well-defined and verifiable with polytopic restrictions,
providing a clear fallback position that preserves the core methodology.
Conservative under-approximation offers an alternative approach: a nonlinear
safe region can be inner-approximated by a polytope, sacrificing operational
flexibility to maintain formal guarantees. The three-mode classification
already structures the problem to minimize complex transitions, with critical
safety properties concentrated in expulsory modes that can receive additional
design attention.

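A minimal sketch of the fallback interface, where each $a_{i}^{\top} x \le
b_{i}$ is a linear constraint on the continuous state $x$ and the predicate
names $p_{i}$ are introduced only for illustration:
\[
\mathcal{P} = \{\, x : A x \le b \,\}
            = \{\, x : a_{i}^{\top} x \le b_{i},\; i = 1, \dots, m \,\},
\qquad
p_{i} \;\Leftrightarrow\; \bigl(a_{i}^{\top} x \le b_{i}\bigr).
\]
Membership in the invariant is then exactly the conjunction $p_{1} \wedge \dots
\wedge p_{m}$, a boolean formula the discrete synthesis layer can consume
directly, and each $p_{i}$ is evaluated at runtime by a single inner product
and comparison. The cost, as noted above, is that genuinely nonlinear safe
regions must be shrunk to fit inside such a polytope.
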
Mitigation strategies focus on designing continuous controllers with discrete
transitions as primary objectives from the outset. Rather than designing
continuous control laws independently and verifying transitions post hoc, the
approach uses transition requirements as design constraints. Control barrier
functions provide a systematic method to synthesize controllers that guarantee
forward invariance of safe sets and convergence to transition boundaries. This
design-for-verification approach reduces the likelihood that interface
complexity becomes insurmountable. Focusing verification effort on expulsory
modes, where safety is most critical, allows more complex analysis to be
applied selectively rather than uniformly across all modes, concentrating
computational resources where they matter most for safety assurance.

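As one illustration of this design philosophy, not a commitment to a particular
controller structure, consider control-affine mode dynamics $\dot{x} = f(x) +
g(x)u$ and a function $h$ whose superlevel set $\{x : h(x) \ge 0\}$ encodes the
safe operating region. The control barrier function condition requires
\[
\sup_{u \in U} \bigl[\, \nabla h(x) \cdot f(x) + \nabla h(x) \cdot g(x)\, u
\,\bigr] \;\ge\; -\alpha\bigl(h(x)\bigr)
\]
for some extended class-$\mathcal{K}$ function $\alpha$. Any admissible
controller chosen from this constraint set renders the safe set forward
invariant, which is exactly the property the transition design needs to
guarantee by construction rather than establish after the fact.
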
\subsection{Procedure Formalization Completeness}
The third assumption is that existing startup procedures contain sufficient
detail and clarity for translation into temporal logic specifications. Nuclear
operating procedures, while extensively detailed, were written for human
operators who bring contextual understanding and adaptive reasoning to their
interpretation. Procedures may contain implicit knowledge, ambiguous
directives, or references to operator judgment that resist formalization in
current specification languages. Underspecified timing constraints, ambiguous
condition definitions, or gaps in operational coverage would cause synthesis to
fail or produce incorrect automata. The risk is not merely that formalization
is difficult, but that current procedures fundamentally lack the precision
required for autonomous control, revealing a gap between human-oriented
documentation and machine-executable specifications.

Several indicators would reveal formalization completeness problems early in
the project. FRET realizability checks failing due to underspecified behaviors
or conflicting requirements would indicate that procedures do not form a
complete specification. Multiple valid interpretations of procedural steps with
no clear resolution would demonstrate that procedure language is insufficiently
precise for automated synthesis. Procedures referencing ``operator judgment,''
``as appropriate,'' or similar discretionary language for critical decisions
would explicitly identify points where human reasoning cannot be directly
formalized. Domain experts unable to provide crisp answers to specification
questions about edge cases would suggest that the procedures themselves do not
fully define system behavior, relying instead on operator training and
experience to fill gaps.

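To make this gap concrete: a step such as ``continue rod withdrawal at a
conservative rate as plant conditions allow'' admits no single formal reading,
whereas a machine-checkable rendering must commit to explicit predicates and
timing. One entirely hypothetical FRETish-style version might read:
\begin{quote}
\texttt{In approach\_to\_criticality mode,}\\
\texttt{when startup\_rate > max\_startup\_rate,}\\
\texttt{the rod\_control\_system shall immediately satisfy withdrawal\_blocked.}
\end{quote}
Every threshold, predicate name, and timing keyword in such a rendering is a
decision that the source procedure delegates to operator judgment; cataloging
those decisions is what the contingency plan below treats as a result in its
own right.
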
The contingency plan treats inadequate specification as itself a research
contribution rather than a project failure. Documenting the specific
ambiguities encountered would create a taxonomy of formalization barriers:
timing underspecification, missing preconditions, discretionary actions, and
undefined failure modes. Each category would be analyzed to understand why
current procedure-writing practices produce these gaps and what specification
languages would need to address them. Proposed extensions to FRETish or similar
specification languages would demonstrate how to bridge the gap between current
procedures and the precision needed for autonomous control. The research output
would shift from ``here is a complete autonomous controller'' to ``here is what
formal autonomous control requires that current procedures do not provide, and
here are language extensions to bridge that gap.'' This contribution remains
valuable to both the nuclear industry and the formal methods community,
establishing clear requirements for next-generation procedure development and
autonomous control specification languages.

Early-stage procedure analysis with domain experts provides the primary
mitigation strategy. Collaboration through the University of Pittsburgh Cyber
Energy Center enables ambiguities to be identified and resolved before
synthesis is attempted, rather than discovered during failed synthesis runs.
Iterative refinement with reactor operators and control engineers can clarify
procedural intent before formalization begins, reducing the risk of discovering
insurmountable specification gaps late in the project. Comparison with
procedures from multiple reactor designs (pressurized water reactors, boiling
water reactors, and advanced designs) may reveal common patterns and standard
ambiguities amenable to systematic resolution. This cross-design analysis would
strengthen the generalizability of any proposed specification language
extensions, ensuring they address industry-wide practices rather than the
quirks of a single design.