Ideas taken from https://services.anu.edu.au/files/development_opportunity/ResearchProposalTips_0.pdf
Title / Topic
Research Problem (Justification)
- Why does robust control exist?
- Air conditioning example: but what if the plant is different? What if a buddy leaves a window open?
- We can examine whether or not our controller (the AC unit) can handle the perturbed plant
- We can quantify how far open the window can be before we have problems
- We can guarantee this for a given controller design and its control laws
- So if we do this, can we be sure that this is how the unit will perform once we build it?
- Well if it's controlled with a microcontroller or other code based solution, no.
- The abstraction between the design and the finished controller destroys the guarantee
- Things can happen in implementation that make the controller built not true to design
- As a result, we need to reverify robustness on built controllers
- This exists for structured perturbations. We lack an equivalent for unstructured perturbations.
Gap In The Literature
Slide 1: Robust Control Foundations
Assertion: Robust control ensures stability despite system discrepancies.
Evidence:
- Controllers are based on physical models that differ from real systems.
- Robust control analyzes resilience to system perturbations.
- Evolved from single-input single-output to multi-input multi-output systems.
(Cite Doyle, Green, Brunton)
Slide 2: Structured vs. Unstructured Perturbations
Assertion: Robust control addresses structured and unstructured perturbations differently.
Evidence:
- Structured: Based on physical tolerances (e.g., spring rates).
- Unstructured: Accounts for unmodeled dynamics and broader uncertainties.
(Diagram comparing structured and unstructured perturbations)
(Cite Doyle, Green)
Slide 3: Disk-Based Unstructured Uncertainty
Assertion: Disk-based perturbation quantifies unstructured uncertainties.
Evidence:
- Key equation:
\tilde{P} = (1 + \Delta W_2) P, where P is the nominal plant, \Delta is the perturbation transfer function, and W_2 is the uncertainty envelope.
- Conditions on W_2 and \Delta:
\left| \frac{\tilde{P}(j\omega)}{P(j\omega)} - 1 \right| \leq \beta |W_2(j\omega)|, \qquad ||\Delta||_\infty \leq \beta
(Include a visual of how \Delta affects P)
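A minimal sketch of the disk condition above, assuming placeholder nominal/perturbed plants and an uncertainty envelope W_2 (none of these values come from the proposal):

```python
# Check whether an example perturbed plant stays inside the multiplicative
# uncertainty disk |P~(jw)/P(jw) - 1| <= beta*|W2(jw)|.
import numpy as np
from scipy import signal

P  = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])   # nominal plant (assumed)
Pt = signal.TransferFunction([1.1], [1.0, 1.8, 1.0])   # perturbed plant (assumed)
W2 = signal.TransferFunction([0.5, 0.1], [1.0, 1.0])   # uncertainty envelope (assumed)
beta = 1.0

w = np.logspace(-2, 2, 500)                            # frequency grid [rad/s]
_, Pj  = signal.freqresp(P,  w)
_, Ptj = signal.freqresp(Pt, w)
_, W2j = signal.freqresp(W2, w)

relative_error = np.abs(Ptj / Pj - 1.0)                # |P~/P - 1| at each frequency
inside_disk = np.all(relative_error <= beta * np.abs(W2j))
print("Perturbed plant lies inside the uncertainty disk:", inside_disk)
```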
Slide 4: Current Limitations in Robust Control
Assertion: Current methods lack discrete examples of unstructured perturbations.
Evidence:
- \Delta is undefined for experimental robustness verification.
- Structured uncertainties are used experimentally but neglect unmodeled dynamics.
(Cite Farzan, Hamilton)
Slide 5: Diffusion Models as a Solution
Assertion: Diffusion models can generate unstructured perturbations.
Evidence:
- Forward process transforms data to Gaussian distribution.
- Reverse process generates approximations of target data.
- Applications in protein folding, training data generation.
(Diagram of forward/reverse processes in diffusion models)
(Cite Sohl-Dickstein, Abramson)
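A minimal sketch of the forward (noising) process on a generic feature vector; the linear beta schedule and step count are assumptions for illustration:

```python
# Forward diffusion: x_t ~ N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # assumed noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative product of (1 - beta_t)

x0 = rng.standard_normal(64)              # stand-in for a frequency-response feature vector

def q_sample(x0, t):
    """Jump directly from clean data x_0 to the noised sample x_t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x_noisy = q_sample(x0, t=T - 1)           # after many steps x_t is nearly Gaussian
```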
Slide 6: Parallels Between Diffusion Models and This Project
Assertion: Diffusion models address sparse perturbation generation in engineering.
Evidence:
- Diffusion models create diverse training data from sparse sets.
- Proposed approach: Generate unstructured perturbations from structured sets.
(Illustration of sparse-to-diverse transformation concept)
Goals and Outcomes
Research Methodology
Slide 1: Research Motivation
Assertion: Current methods for generating unstructured perturbations are limited in flexibility and generalizability.
- Evidence:
- Unstructured perturbations lack adaptability to various scenarios.
- Proposed approach leverages diffusion generative models for flexible perturbation generation.
Visuals:
- A flowchart contrasting traditional perturbation methods vs. diffusion models.
Slide 2: Diffusion Model Features
Assertion: Frequency response data forms the foundation for feature creation in diffusion models.
- Evidence:
- Features discretize dynamics into a vector of magnitude and phase.
- Supports training without imparting unintended structure.
Visuals:
- Diagram from Figure 1 showing the discretization of frequency response.
Slide 3: Creating Frequency Features
Assertion: Discretizing the frequency response enables scalable feature sets.
- Evidence:
- Fine resolution for complex behavior or coarse for computational efficiency.
- Features provide physical context across frequency scales.
Visuals:
- Table comparing fine vs. coarse frequency sampling.
- Annotated example of magnitude/phase vector with scales labeled.
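A minimal sketch of the feature construction described above, assuming a placeholder plant and log-spaced sampling; the resolution argument illustrates the fine vs. coarse trade-off:

```python
# Discretize a plant's frequency response into a stacked magnitude/phase vector.
import numpy as np
from scipy import signal

P = signal.TransferFunction([1.0], [1.0, 0.4, 1.0])   # assumed example plant

def frequency_features(sys, w_min=1e-2, w_max=1e2, n_points=128):
    """Sample magnitude (dB) and unwrapped phase (rad) on a log-spaced grid."""
    w = np.logspace(np.log10(w_min), np.log10(w_max), n_points)
    _, H = signal.freqresp(sys, w)
    mag_db = 20.0 * np.log10(np.abs(H))
    phase = np.unwrap(np.angle(H))
    return np.concatenate([mag_db, phase])            # feature vector of length 2*n_points

features_fine   = frequency_features(P, n_points=512)   # finer resolution, more detail
features_coarse = frequency_features(P, n_points=32)    # coarser, cheaper to train on
```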
Slide 4: Training the Diffusion Model
Assertion: Diffusion models learn unstructured perturbations through iterative noise transformation.
- Evidence:
- Forward process adds noise; reverse process removes it.
- Training maximizes the log-likelihood of reconstructing the input data.
Visuals:
- Flowchart of the diffusion training process.
- Key equations (e.g., Eq. \ref{forward_kernel} and \ref{reverse_kernel}) simplified with annotations.
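A minimal sketch of one training step, assuming a small MLP denoiser and the commonly used simplified noise-prediction loss as a surrogate for the log-likelihood bound (the proposal's actual architecture and loss may differ):

```python
# One diffusion training step on batches of frequency-response feature vectors.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

feature_dim = 256                          # e.g., 128 magnitudes + 128 phases (assumed)
model = nn.Sequential(                     # stand-in denoiser; a U-Net is another option
    nn.Linear(feature_dim + 1, 512), nn.SiLU(),
    nn.Linear(512, 512), nn.SiLU(),
    nn.Linear(512, feature_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(x0):
    """x0: (batch, feature_dim) clean features."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * eps          # forward kernel
    t_feat = (t.float() / T).unsqueeze(1)                # crude timestep conditioning
    eps_hat = model(torch.cat([xt, t_feat], dim=1))      # predict the added noise
    loss = ((eps - eps_hat) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(training_step(torch.randn(16, feature_dim)))       # dummy batch for illustration
```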
Slide 5: Generating New Perturbations
Assertion: The trained diffusion model generates diverse and flexible perturbations.
- Evidence:
- Outputs are probabilistic, enabling variability.
- Perturbation level controlled by adjusting time steps.
Visuals:
- Illustration of forward/reverse process with arrows and annotations.
- Graph showing interpolation from partial time steps.
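A minimal sketch of controlling perturbation level via partial noising, reusing the assumed model, betas, alpha_bar, and T from the training sketch above:

```python
# Noise a nominal feature vector for only t_partial steps, then denoise it back;
# smaller t_partial keeps the sample closer to the nominal plant.
import torch

@torch.no_grad()
def perturb(x_nominal, t_partial=200):
    a = alpha_bar[t_partial - 1]
    x = a.sqrt() * x_nominal + (1.0 - a).sqrt() * torch.randn_like(x_nominal)
    for t in reversed(range(t_partial)):                    # reverse (denoising) steps
        t_feat = torch.full((x.shape[0], 1), t / T)
        eps_hat = model(torch.cat([x, t_feat], dim=1))
        a_t, a_bar = 1.0 - betas[t], alpha_bar[t]
        x = (x - betas[t] / (1.0 - a_bar).sqrt() * eps_hat) / a_t.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)   # stochastic step noise
    return x

perturbed = perturb(torch.zeros(4, 256), t_partial=200)     # 4 perturbed feature vectors
```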
Slide 6: Ensuring Valid Perturbations
Assertion: Generated perturbations must meet robust control requirements.
- Evidence:
- No additional right-half-plane poles.
- Supremum gain of Δ below threshold β.
Visuals:
- Diagram of pole-zero constraints.
- Workflow for verifying Δ and fitting transfer functions.
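A minimal sketch of the two validity checks, with assumed example plants, weight W_2, and threshold β; the H-infinity gain is only approximated on a dense frequency grid:

```python
# Verify (1) no additional right-half-plane poles and (2) ||Delta||_inf <= beta,
# where Delta(jw) = (P~(jw)/P(jw) - 1) / W2(jw) evaluated pointwise.
import numpy as np
from scipy import signal

P  = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])    # nominal plant (assumed)
Pt = signal.TransferFunction([1.2], [1.0, 1.7, 1.0])    # candidate perturbed plant (assumed)
W2 = signal.TransferFunction([0.5, 0.1], [1.0, 1.0])    # uncertainty envelope (assumed)
beta = 1.0

n_rhp_nominal   = np.sum(np.real(P.poles)  > 0)
n_rhp_perturbed = np.sum(np.real(Pt.poles) > 0)
poles_ok = n_rhp_perturbed <= n_rhp_nominal             # no new unstable poles

w = np.logspace(-3, 3, 2000)
_, Pj  = signal.freqresp(P,  w)
_, Ptj = signal.freqresp(Pt, w)
_, W2j = signal.freqresp(W2, w)
gain_ok = np.max(np.abs((Ptj / Pj - 1.0) / W2j)) <= beta  # grid estimate of ||Delta||_inf

print("valid perturbation:", poles_ok and gain_ok)
```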
Slide 7: Advantages of Diffusion Models (not yet included)
Assertion: Diffusion models provide a novel solution for generating unstructured perturbations.
- Evidence:
- Introduce non-deterministic variability into perturbations.
- Overcome the limitations of traditional structured approaches.
Visuals:
- Comparative chart: structured vs. unstructured methods.
- Examples of perturbed frequency responses generated by the model.
Research Tasks
Slide 1: Research Tasks Overview
Assertion: This research aims to address verification challenges through structured tasks.
- Evidence: Four key research tasks support the proposed outcomes:
- Mission-Beneficiary Fit
- Find Robust Systems
- Create Diffusion Model
- Analyze and Disseminate Results
Visuals:
- A process diagram summarizing the four tasks.
Slide 2: Mission-Beneficiary Fit
Assertion: Understanding beneficiaries ensures relevance and impact of this research.
- Evidence:
- Beneficiary Identification: Research how control engineers might use this work.
- Value Proposition: Define and align capabilities with verification needs.
Visuals:
- Chart or table identifying beneficiaries and their verification needs.
Slide 3: Find Robust Systems
Assertion: Identifying relevant plants ensures practical applicability of results.
- Evidence:
- Literature Review: Investigate industrial applications of robust control verification.
- Create Example Plants: Reconstruct models of prominent systems for demonstrations.
Visuals:
- Example of a controlled industrial process.
- Flowchart of the literature review and modeling process.
Slide 4: Create Diffusion Model
Assertion: A diffusion model is central to generating unstructured perturbations.
- Evidence:
- Identify Model Structure: Choose an architecture (e.g., U-Net).
- Train Model: Develop training data and optimize performance.
- Generate Perturbations: Apply the model to example plants.
Visuals:
- Diagram of a U-Net-based architecture.
- Example of generated unstructured perturbations.
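A minimal sketch of a U-Net-style 1-D denoiser over frequency features; channel sizes, depth, and the lack of timestep conditioning are simplifications, not the chosen architecture:

```python
# Tiny 1-D U-Net: downsample, process, upsample, with one skip connection.
import torch
import torch.nn as nn

class TinyUNet1D(nn.Module):
    def __init__(self, channels=1, base=32):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv1d(channels, base, 3, padding=1), nn.SiLU())
        self.down2 = nn.Sequential(nn.Conv1d(base, 2 * base, 3, stride=2, padding=1), nn.SiLU())
        self.mid   = nn.Sequential(nn.Conv1d(2 * base, 2 * base, 3, padding=1), nn.SiLU())
        self.up    = nn.ConvTranspose1d(2 * base, base, 4, stride=2, padding=1)
        self.out   = nn.Conv1d(2 * base, channels, 3, padding=1)   # after concatenating the skip

    def forward(self, x):
        d1 = self.down1(x)                           # (B, base, L)
        d2 = self.down2(d1)                          # (B, 2*base, L/2)
        u  = self.up(self.mid(d2))                   # back to (B, base, L)
        return self.out(torch.cat([u, d1], dim=1))   # skip connection from the encoder

x = torch.randn(8, 1, 256)                           # batch of magnitude/phase feature vectors
eps_hat = TinyUNet1D()(x)                            # predicted noise, same shape as x
```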
Slide 5: Analyze and Disseminate Results
Assertion: Communicating findings ensures broader adoption and advances the state of the art.
- Evidence:
- Publish results in academic journals.
- Demonstrate impact on robustness verification practices.
Visuals:
- Example journal or conference targets.
- Overview of the dissemination process.
Metrics of Success
Slide 1: Metrics of Success Overview
Assertion: Project success will be evaluated through milestone tracking and outcome-based metrics.
- Evidence:
- Goals and Outcomes: Milestones tied to the objectives of this research.
- Unstructured Perturbation Evaluation: Metrics to assess diffusion model output.
Visuals:
- High-level flowchart showing the two categories of success metrics.
Slide 2: Goals and Outcomes
Assertion: The research aims to deliver specific capabilities for creating unstructured perturbations.
- Evidence:
- Approximate unstructured sets through numerous perturbed plants.
- Perturb nominal plants using the diffusion model.
- Generate frequency-domain responses from training data.
Visuals:
- Table summarizing the three goals and their significance.
- Conceptual graphic of a nominal plant with perturbed versions around it.
Slide 3: Unstructured Perturbation Evaluation
Assertion: The diffusion model's success will be judged on distribution and diversity of perturbations.
- Evidence:
- Distribution: Verify uniform coverage of the multiplicative uncertainty disk.
- Diversity: Assess non-parametric, dissimilar perturbations among examples.
Visuals:
- Example complex plane with plotted perturbed plants.
- Graph comparing similarity metrics across perturbations.
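A minimal sketch of the two metrics, using random stand-in Δ samples in place of model output; the bin count and distance measure are arbitrary choices for illustration:

```python
# Coverage of the uncertainty disk (per-frequency radial histogram occupancy)
# and diversity (mean pairwise distance between generated perturbations).
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_freqs = 200, 128
delta = rng.uniform(0, 1, (n_samples, n_freqs)) * np.exp(
    1j * rng.uniform(-np.pi, np.pi, (n_samples, n_freqs)))   # stand-in Delta(jw) samples

radii = np.abs(delta)
coverage = np.mean([
    np.count_nonzero(np.histogram(radii[:, k], bins=10, range=(0, 1))[0]) / 10
    for k in range(n_freqs)])                                # fraction of occupied radial bins

dists = [np.linalg.norm(delta[i] - delta[j])
         for i in range(n_samples) for j in range(i + 1, n_samples)]
print(f"disk coverage: {coverage:.2f}, mean pairwise distance: {np.mean(dists):.2f}")
```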
Slide 4: Statistical Evaluation (Optional Deep Dive)
Assertion: Statistical analysis ensures robustness and diversity in generated perturbations.
- Evidence:
- Standard statistical tests applied to the perturbation set.
- Covariance vectors calculated for key frequency ranges.
Visuals:
- Example statistical output or covariance plot for one frequency band.
- Caption explaining its role in validating uniform coverage.
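A minimal sketch of the per-band covariance check, reusing the stand-in delta array from the previous sketch; the band indices are placeholders:

```python
# Covariance of the real/imaginary parts of Delta pooled over one frequency band.
import numpy as np

band = slice(32, 64)                          # assumed "key" frequency band indices
band_samples = delta[:, band].ravel()         # pool every sample in the band
cov = np.cov(np.stack([band_samples.real, band_samples.imag]))
print("per-band covariance of (Re, Im):\n", cov)
```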
Risks and Contingencies
Slide 1: Risks and Contingencies Overview
Assertion: This research has identified key risks and developed contingencies to address them.
- Evidence:
- Computational demands of diffusion models.
- Training data sufficiency.
- Generalization of interpolation methods to perturbations.
Visuals:
- A risk-contingency matrix outlining the key challenges and corresponding mitigations.
Slide 2: Risk 1 - Computational Demands
Assertion: Diffusion models may require significant computational resources during training and inference.
- Evidence:
- Reverse process inference is computationally intensive due to per-step calculations.
- Training complexity scales with model structure and feature count.
Contingencies:
- Utilize the University of Pittsburgh’s CRC supercomputing resources.
- Reduce data features while monitoring model performance.
Visuals:
- Diagram comparing computational cost across time steps.
- Icon of computational resources with CRC logo or similar.
Slide 3: Risk 2 - Insufficient Training Data
Assertion: Structured perturbations alone may not condition the model adequately.
- Evidence:
- Structured perturbations simplify training but may lack diversity.
Contingencies:
- Augment training with manually or algorithmically generated \Delta examples (e.g., bounded by supremum gain \beta), as sketched after this slide.
- Diversify training data sources to improve robustness.
Visuals:
- Example of structured vs. manual perturbation samples on the complex plane.
- Flowchart showing training data augmentation process.
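A minimal sketch of the augmentation contingency: random stable Δ transfer functions rescaled so their grid-estimated supremum gain stays below β; orders, pole ranges, and β are assumptions:

```python
# Generate extra Delta training examples bounded (approximately) by beta.
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
beta = 1.0
w = np.logspace(-3, 3, 1000)

def random_bounded_delta():
    poles = -rng.uniform(0.1, 10.0, size=2)              # stable (left-half-plane) poles
    zeros = -rng.uniform(0.1, 10.0, size=1)
    delta = signal.ZerosPolesGain(zeros, poles, 1.0).to_tf()
    _, H = signal.freqresp(delta, w)
    k = rng.uniform(0.1, 1.0) * beta / np.max(np.abs(H))  # rescale so peak gain < beta
    return signal.TransferFunction(k * delta.num, delta.den)

augmented = [random_bounded_delta() for _ in range(100)]  # extra training examples
```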
Slide 4: Risk 3 - Interpolation Limitations
Assertion: Interpolation methods may fail to regulate perturbations effectively.
- Evidence:
- Image-based interpolation success may not generalize to this domain.
Contingencies:
- Implement $r(\mathcal{P}_t)$-based reverse process steering for controlled perturbations (cite sources).
- Explore alternative interpolation techniques tailored to frequency domain applications.
Visuals:
- Conceptual illustration of the r(\mathcal{P}_t) steering function in the reverse process.
- Example showing failure of simple interpolation and its correction with r(\mathcal{P}_t).
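A minimal sketch of gradient-based steering of one reverse step with a regulation function r(·); the stand-in r, the guidance weight, and the reuse of the assumed betas/alpha_bar schedule from the earlier training sketch are all illustrative assumptions, not the proposed design:

```python
# Add the gradient of a regulation function r to the DDPM reverse-step mean.
import torch

def r(x):
    """Stand-in regulation function: penalize features drifting far from zero."""
    return -x.pow(2).sum()

def guided_reverse_step(x, eps_hat, t, guidance_weight=0.1):
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(r(x), x)[0]                 # direction that increases r
    a_t, a_bar = 1.0 - betas[t], alpha_bar[t]
    mean = (x - betas[t] / (1.0 - a_bar).sqrt() * eps_hat) / a_t.sqrt()
    return (mean + guidance_weight * grad).detach()        # steered reverse-step mean
```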
Slide 5: Risk Mitigation Framework (Optional Summary Slide)
Assertion: Addressing risks proactively ensures project success.
- Evidence:
- Computational strategies, diversified training, and alternative steering methods safeguard outcomes.
Visuals:
- Funnel graphic showing risks addressed through mitigations leading to project success.