diff --git a/4 Qualifying Exam/2 Writing/3. QE Research Approach.md b/4 Qualifying Exam/2 Writing/3. QE Research Approach.md
index 8bcf0dc0..85f72d87 100644
--- a/4 Qualifying Exam/2 Writing/3. QE Research Approach.md
+++ b/4 Qualifying Exam/2 Writing/3. QE Research Approach.md
@@ -29,10 +29,13 @@ Something to justify, why diffusion model as opposed to other generative AI
 6. Diffusion model uses two processes
 7. a forward process that introduces noise into an 'input'
 8. this forward process does this using several small steps of noise
-9. This noise is a gaussian noise
-10. over time this forward process degrades the input until it is unrecognizable
-11. This takes several several iterations, but is dependent on the amount of noise in each step
-12. The markov chain that this creates is also gaussian at every step, until the noise at the end of the day is some gaussian distribution. Usually mean 0 std. dev beta
-13. A reverse process that tries to remove the noise
-14. But if we destroy the input how can we do this?
-15. Well we train a neural network as a denoiser.
\ No newline at end of file
+9. These small steps are called timesteps
+10. The noise added at each timestep is Gaussian
+11. Over time the forward process degrades the input until it is unrecognizable
+12. This takes many iterations; how many depends on the amount of noise added in each step
+13. The Markov chain this creates is Gaussian at every step, and after enough steps it converges to (approximately) a standard Gaussian distribution: mean 0, unit variance (the per-step noise variance is the schedule parameter beta)
+14. A reverse process that tries to remove the noise
+15. But if the forward process destroys the input, how can we reverse it?
+16. We train a neural network as a denoiser
+17. Because the forward steps are small and Gaussian, each reverse step is also (approximately) a Gaussian distribution
+18. So what the neural network learns is the mean and standard deviation of the reverse step at a given timestep (see the notation sketch below)
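+19. A minimal sketch in standard DDPM-style notation (the symbols are assumptions of this sketch, not fixed anywhere in these notes: $x_0$ is the input, $x_t$ the noised sample at timestep $t$, $\beta_t$ the per-step noise variance, $T$ the final timestep, and $\mu_\theta$, $\Sigma_\theta$ the quantities the network predicts); the forward step, where it ends up, and the learned reverse step can be written as
+
+$$
+q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right),
+\qquad
+q(x_T \mid x_0) \approx \mathcal{N}\!\left(x_T;\ \mathbf{0},\ \mathbf{I}\right) \text{ for large } T,
+$$
+
+$$
+p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
+$$
+
+(In practice, e.g. Ho et al. 2020, the network is usually trained to predict the noise that was added rather than $\mu_\theta$ directly; the reverse-step mean is then computed from that prediction, and the variance is often fixed to a schedule value.)
\ No newline at end of file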