\section{Risks and Contingencies}

\textbf{Heilmeier Question: What could prevent success?} Section 4 defined success as reaching TRL 5 through component validation, system integration, and hardware demonstration. That definition assumes the critical technical challenges can be overcome, yet every research plan rests on assumptions that might prove false. Three primary risks could prevent reaching TRL 5: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, and completeness of procedure formalization. Each risk carries identifiable early warning indicators and contingency plans that preserve research value even if core assumptions fail. The staged project structure ensures that partial success yields publishable results and clearly identifies the remaining barriers to deployment, even when full success proves elusive.

\subsection{Computational Tractability of Synthesis}

Computational tractability is the first major risk. The core assumption is that formalized startup procedures will yield automata small enough for efficient synthesis and verification. This assumption may fail: reactive synthesis scales exponentially with specification complexity, so temporal logic specifications derived from complete startup procedures may produce automata with thousands of states and synthesis times of days or weeks, preventing completion within the project timeline. Reachability analysis for continuous modes with high-dimensional state spaces may similarly prove intractable. Either barrier would be a fundamental obstacle.

Several indicators would provide early warning of tractability problems. Synthesis times exceeding 24 hours for simplified procedure subsets would suggest that complete procedures are intractable.
Generated automata containing more than 1,000 discrete states would indicate that the discrete state space is too large for efficient verification. Specifications flagged as unrealizable by FRET or Strix would reveal fundamental conflicts in the formalized procedures. Reachability analysis failing to converge within reasonable time bounds would show that continuous-mode verification cannot be completed with the available computational resources.

If computational tractability becomes the limiting factor, we reduce scope to a minimal viable startup sequence covering only the progression from cold shutdown through criticality to a low-power hold, omitting power ascension and other operational phases. The reduced sequence still exercises the complete methodology (procedure formalization, discrete synthesis, continuous verification, and hardware implementation) while shrinking the computational burden. The research contribution remains valid: we demonstrate that formal hybrid control synthesis is achievable for safety-critical nuclear applications and identify precisely which operational complexities exceed current computational capabilities. We document the limitation as a scaling constraint requiring future work, not a methodological failure.

\subsection{Discrete-Continuous Interface Formalization}

The first risk addressed a practical constraint: whether synthesis can complete within reasonable time bounds. The second risk is more fundamental: whether Boolean guard conditions in temporal logic can map cleanly to the continuous state boundaries required for mode transitions. This interface embodies the central challenge of hybrid systems, relating discrete switching logic to continuous dynamics: temporal logic operates on Boolean predicates, while continuous control requires reasoning about differential equations and reachable sets. Guard conditions requiring complex nonlinear predicates may resist Boolean abstraction, making synthesis intractable.
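To make the interface concrete, the toy sketch below shows guards in the simple threshold form that discrete synthesis assumes: each guard is a Boolean predicate evaluated over the continuous state. The mode names, the flux variable, and the thresholds are invented for illustration; the risk is precisely that realistic procedures may not reduce to this clean form.

```python
# Toy sketch of the discrete-continuous interface for a reduced startup
# sequence. Mode names, the "flux" variable, and thresholds are invented
# for illustration, not taken from real procedures.

TRANSITIONS = {
    # mode -> list of (guard over continuous state, successor mode)
    "cold_shutdown":     [(lambda s: s["flux"] > 1e-6, "approach_critical")],
    "approach_critical": [(lambda s: s["flux"] >= 1.0, "low_power_hold")],
    "low_power_hold":    [],
}

def step(mode, state):
    """Advance the discrete mode when a guard (a Boolean predicate
    abstracting a continuous condition) evaluates true."""
    for guard, successor in TRANSITIONS[mode]:
        if guard(state):
            return successor
    return mode
```

When guards are simple thresholds like these, the abstraction is immediate; guards coupling several state variables nonlinearly are exactly the case where it breaks down.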
Continuous safety regions that cannot be expressed as conjunctions of verifiable constraints would similarly create insurmountable verification challenges. The risk extends beyond static interface definition to dynamic behavior across transitions: barrier certificates may fail to exist for proposed transitions, and continuous modes may be unable to guarantee convergence to discrete transition boundaries.

Early indicators of interface formalization problems would appear during both synthesis and verification. Guard conditions requiring complex nonlinear predicates that resist Boolean abstraction would suggest a fundamental misalignment between discrete specifications and continuous realities. Continuous safety regions that cannot be expressed as conjunctions of half-spaces or polynomial inequalities would indicate that the interface between discrete guards and continuous invariants is too complex. Failure to construct barrier certificates proving safety across mode transitions would reveal that the continuous dynamics cannot be formally related to the discrete switching logic. Reachability analysis showing that continuous modes cannot reach their intended transition boundaries from all possible initial conditions would demonstrate that the synthesized discrete controller is incompatible with achievable continuous behavior.

The primary contingency for interface complexity is restricting continuous modes to operate within polytopic invariants. Polytopes are state regions defined as intersections of linear half-spaces, which map directly to Boolean predicates through linear inequality checks. This restriction keeps synthesis tractable while maintaining theoretical rigor, at the cost of less expressiveness than arbitrary nonlinear regions. The discrete-continuous interface remains well-defined and verifiable under polytopic restrictions, providing a clear fallback position that preserves the core methodology.
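A minimal sketch shows why polytopes compose cleanly with discrete guards; the box below is illustrative only, not a certified reactor operating region.

```python
# Minimal sketch of a polytopic invariant {x : A x <= b} serving as a
# Boolean guard. The box below is an illustrative 2-D region, not a
# certified reactor operating region.

def in_polytope(A, b, x):
    # Each row of A paired with an entry of b is one half-space; the
    # guard is the conjunction of the linear inequality checks.
    return all(
        sum(a_j * x_j for a_j, x_j in zip(row, x)) <= b_i
        for row, b_i in zip(A, b)
    )

# The box 0 <= x0 <= 1, 0 <= x1 <= 2 written as four half-spaces.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 2, 0]
```

Membership is a conjunction of linear checks, which is exactly the Boolean form the discrete layer requires.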
Conservative inner approximations offer an alternative approach: a nonlinear safe region can be inner-approximated by a polytope, sacrificing operational flexibility to maintain formal guarantees. The three-mode classification already structures the problem to minimize complex transitions, concentrating the critical safety properties in expulsory modes that can receive additional design attention.

Mitigation strategies focus on designing continuous controllers with the discrete transitions as primary objectives from the outset. Rather than designing continuous control laws independently and verifying transitions post hoc, the approach treats transition requirements as design constraints. Control barrier functions provide a systematic method for synthesizing controllers that guarantee forward invariance of safe sets and convergence to transition boundaries. This design-for-verification approach reduces the likelihood that interface complexity becomes insurmountable. Focusing verification effort on the expulsory modes, where safety is most critical, allows more complex analysis to be applied selectively rather than uniformly across all modes, concentrating computational resources where they matter most for safety assurance.

\subsection{Procedure Formalization Completeness}

The previous two risks concerned the verification infrastructure: computational scaling and mathematical formalization. The third risk concerns the source material itself: whether existing startup procedures contain sufficient detail and clarity for translation into temporal logic specifications. Nuclear operating procedures, while extensively detailed, were written for human operators who bring contextual understanding and adaptive reasoning to their interpretation. Procedures may therefore contain implicit knowledge, ambiguous directives, or appeals to operator judgment that resist formalization in current specification languages.
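As a hedged illustration (the directive and the atomic propositions are invented for this example), a step such as ``withdraw control rods until the reactor is stable'' admits at least two inequivalent temporal-logic readings:
\[
\varphi_1 = \mathit{withdraw} \;\mathbf{U}\; \mathit{stable}
\qquad\text{versus}\qquad
\varphi_2 = \mathbf{F}\,\mathit{stable} \,\wedge\, \mathbf{G}\bigl(\mathit{stable} \rightarrow \neg\,\mathit{withdraw}\bigr).
\]
Under $\varphi_1$, withdrawal must hold at every step before stability is reached; under $\varphi_2$, it is merely forbidden afterward. A synthesizer given the unintended reading produces a controller that is formally correct against the wrong specification.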
Underspecified timing constraints, ambiguous condition definitions, or gaps in operational coverage would cause synthesis to fail or produce incorrect automata. The risk is not merely that formalization is difficult, but that current procedures may fundamentally lack the precision required for autonomous control, revealing a gap between human-oriented documentation and machine-executable specifications.

Several indicators would reveal formalization completeness problems early in the project. FRET realizability checks failing because of underspecified behaviors or conflicting requirements would indicate that the procedures do not form a complete specification. Multiple valid interpretations of a procedural step with no clear resolution would demonstrate that the procedure language is insufficiently precise for automated synthesis. Procedures invoking ``operator judgment,'' ``as appropriate,'' or similar discretionary language for critical decisions would explicitly mark points where human reasoning cannot be directly formalized. Domain experts unable to give crisp answers to specification questions about edge cases would suggest that the procedures themselves do not fully define system behavior, relying instead on operator training and experience to fill the gaps.

The contingency plan treats inadequate specification as a research contribution in its own right rather than a project failure. Documenting the specific ambiguities encountered would create a taxonomy of formalization barriers: timing underspecification, missing preconditions, discretionary actions, and undefined failure modes. Each category would be analyzed to understand why current procedure-writing practices produce these gaps and what specification languages would need in order to close them. Proposed extensions to FRETish or similar specification languages would demonstrate how to bridge the gap between current procedures and the precision needed for autonomous control.
The research output would shift from ``here is a complete autonomous controller'' to ``here is what formal autonomous control requires that current procedures do not provide, and here are language extensions to bridge that gap.'' This contribution remains valuable to both the nuclear industry and the formal methods community, establishing clear requirements for next-generation procedure development and for autonomous control specification languages.

Early-stage procedure analysis with domain experts provides the primary mitigation. Collaboration through the University of Pittsburgh Cyber Energy Center enables ambiguities to be identified and resolved before synthesis is attempted, rather than discovered during failed synthesis runs. Iterative refinement with reactor operators and control engineers can clarify procedural intent before formalization begins, reducing the risk of uncovering insurmountable specification gaps late in the project. Comparing procedures from multiple reactor designs (pressurized water reactors, boiling water reactors, and advanced designs) may reveal common patterns and standard ambiguities amenable to systematic resolution. This cross-design analysis would strengthen the generalizability of any proposed specification language extensions, ensuring they address industry-wide practices rather than the quirks of a single design.

This section answered the Heilmeier question: What could prevent success? \textbf{Answer:} Three primary risks threaten TRL 5 achievement: computational tractability of synthesis and verification, complexity of the discrete-continuous interface, and completeness of procedure formalization. Each risk has identifiable early warning indicators that enable detection before failure becomes inevitable, and each has viable mitigation strategies that preserve research value even when core assumptions fail. The staged project structure ensures that partial success yields publishable results identifying the remaining deployment barriers.
This design maintains a contribution regardless of which technical obstacles prove insurmountable; even ``failure'' advances the field by documenting precisely which barriers remain. The technical research plan is now complete: Section 3 established what will be done and why it will succeed, Section 4 defined how to measure success, and this section identified what might prevent success and how to respond. One critical question remains: Who cares? Why now? What difference will it make? Section 6 connects this technical methodology to urgent economic and societal challenges.