From fd5014eb15521753f0371cd0e8a81dfaabf4c1c3 Mon Sep 17 00:00:00 2001
From: Split
Date: Fri, 13 Feb 2026 11:52:58 -0500
Subject: [PATCH] Digitize BSP class notes (4 files)

---
 .../continuous-mixed-random-variables.md | 40 ++++++++++++++++
 .../joint-distributions.md               | 37 +++++++++++++++
 .../monte-carlo-integration.md           | 47 +++++++++++++++++++
 .../variance-and-convergence.md          | 26 ++++++++++
 4 files changed, 150 insertions(+)
 create mode 100644 Fleeting Notes/Class/Bayesian Signal Processing/continuous-mixed-random-variables.md
 create mode 100644 Fleeting Notes/Class/Bayesian Signal Processing/joint-distributions.md
 create mode 100644 Fleeting Notes/Class/Bayesian Signal Processing/monte-carlo-integration.md
 create mode 100644 Fleeting Notes/Class/Bayesian Signal Processing/variance-and-convergence.md

diff --git a/Fleeting Notes/Class/Bayesian Signal Processing/continuous-mixed-random-variables.md b/Fleeting Notes/Class/Bayesian Signal Processing/continuous-mixed-random-variables.md
new file mode 100644
index 0000000..4c07e8e
--- /dev/null
+++ b/Fleeting Notes/Class/Bayesian Signal Processing/continuous-mixed-random-variables.md
@@ -0,0 +1,40 @@
+# Continuous and Mixed Random Variables
+
+**Source:** probabilitycourse.com → Hossein Pishro-Nik
+**Date:** Monday January 26th
+
+---
+
+## What is a continuous random variable?
+
+> A random variable $X$ is **continuous** if its cumulative distribution function $F_X(x)$ is continuous for all $x \in \mathbb{R}$.
+
+---
+
+## Probability Density Functions
+
+Continuous random variables lend themselves to **Probability Density Functions** (PDFs).
A PDF is defined as:
+
+$$f_X(x) = \frac{dF_X(x)}{dx}$$
+
+---
+
+## Example: Uniform Distribution
+
+**Consider** a uniform distribution of $X$ on $[a,b]$:
+
+**CDF:**
+$$F_X(x) = \begin{cases} 0, & x < a \\ \frac{x-a}{b-a}, & a \leq x \leq b \\ 1, & x > b \end{cases}$$
+
+**PDF:**
+$$f_X(x) = \begin{cases} 0, & x < a \\ \frac{1}{b-a}, & a \leq x \leq b \\ 0, & x > b \end{cases}$$
+
+*Note: $F_X(x)$ is a continuous ramp rising linearly from 0 to 1 between $a$ and $b$; $f_X(x)$ is a rectangular function with height $\frac{1}{b-a}$ between $a$ and $b$.*
+
+---
+
+## Probability from PDF
+
+Now that we've got some machinery, we can compute probabilities directly from the PDF:
+
+$$P(X \in [a,b]) = \int_a^b f_X(x) \, dx$$
diff --git a/Fleeting Notes/Class/Bayesian Signal Processing/joint-distributions.md b/Fleeting Notes/Class/Bayesian Signal Processing/joint-distributions.md
new file mode 100644
index 0000000..720527a
--- /dev/null
+++ b/Fleeting Notes/Class/Bayesian Signal Processing/joint-distributions.md
@@ -0,0 +1,37 @@
+# Joint Distributions
+
+**Source:** probabilitycourse.com
+
+Joint distributions are **multivariate probability distributions**.
+
+---
+
+## Conditional Probability
+
+$$P(A|B) = \frac{P(A \cap B)}{P(B)}, \text{ when } P(B) > 0$$
+
+### Example: Fair Die
+
+Consider a fair six-sided die with outcome $X$. What is the PMF of $X$ given the event $A = \{X < 5\}$?
+
+$$P(A) = \frac{4}{6}$$
+
+$$P_{X|A}(1) = \frac{P(X = 1 \cap X < 5)}{P(X < 5)} = \frac{\frac{1}{6}}{\frac{4}{6}} = \frac{1}{4}$$
+
+$$P_{X|A}(2) = P_{X|A}(3) = P_{X|A}(4) = P_{X|A}(1) = \frac{1}{4}$$
+
+$$P_{X|A}(5) = \frac{P(X = 5 \cap X < 5)}{P(X < 5)} = \frac{0}{\frac{4}{6}} = 0$$
+
+$$P_{X|A}(6) = 0$$
+
+---
+
+## Two Random Variables
+
+When working with two random variables:
+
+$$P_{X|Y}(x_i | y_j) = P(X = x_i | Y = y_j)$$
+
+$$= \frac{P(X = x_i \cap Y = y_j)}{P_Y(y_j)}$$
+
+$$= \frac{P_{XY}(x_i, y_j)}{P_Y(y_j)}$$
diff --git a/Fleeting Notes/Class/Bayesian Signal Processing/monte-carlo-integration.md b/Fleeting Notes/Class/Bayesian Signal Processing/monte-carlo-integration.md
new file mode 100644
index 0000000..79c489f
--- /dev/null
+++ b/Fleeting Notes/Class/Bayesian Signal Processing/monte-carlo-integration.md
@@ -0,0 +1,47 @@
+# Monte Carlo Integration
+
+$$I = \int_a^b f(x) \, dx$$
+
+---
+
+## Riemann Integration
+
+With step size $h = \frac{b-a}{n}$, pick $x_i = a + ih$:
+
+$$I \approx \sum_{i=1}^{n} f(x_i) \frac{b-a}{n}$$
+
+This isn't the only way to integrate.
+
+---
+
+## Monte Carlo Integration
+
+Draw i.i.d. samples $x_i \sim U[a,b]$:
+
+$$\hat{I} = \frac{b-a}{n} \sum_{i=1}^{n} f(x_i)$$
+
+> *The estimator $\hat{I}$ is itself a random variable.*
+
+### Expectation of the Estimator
+
+Since the uniform density on $[a,b]$ is $\frac{1}{b-a}$:
+
+$$E\{f(X)\} = \int_a^b f(x) \, f_X(x) \, dx = \int_a^b f(x) \frac{1}{b-a} \, dx = \frac{1}{b-a} \int_a^b f(x) \, dx$$
+
+$$E\{\hat{I}\} = \frac{b-a}{n} \sum_{i=1}^{n} E\{f(X)\}$$
+
+$$= \frac{b-a}{n} \sum_{i=1}^{n} \left( \frac{1}{b-a} \int_a^b f(x) \, dx \right)$$
+
+$$= \frac{b-a}{n} \cdot \frac{n}{b-a} \int_a^b f(x) \, dx$$
+
+$$E\{\hat{I}\} = \int_a^b f(x) \, dx$$
+
+**Key insight:** The Monte Carlo estimator is unbiased: its expected value equals the true integral.
+
+---
+
+## Multi-Dimensional Case
+
+$$I = \iiint f(\vec{x}) \, d\vec{x}$$
+
+$$\hat{I} = \frac{V}{n} \sum_{i=1}^{n} f(\vec{x}_i)$$
+
+where $V$ is the volume of the integration region.
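The estimator above fits in a few lines of Python. This is a minimal sketch, not part of the course notes; the function name `mc_integrate` and the test function $f(x) = x^2$ are illustrative choices, not from the source.

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Monte Carlo estimate of I = integral of f over [a, b].

    Draws n i.i.d. samples x_i ~ U[a, b] and returns
    (b - a)/n * sum(f(x_i)), matching the estimator I-hat above.
    """
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) / n * total

# Example: integrate f(x) = x^2 over [0, 1]; the true value is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, n=100_000)
```

Because the estimator is unbiased, `estimate` lands near $\frac{1}{3}$, with random error that shrinks as $n$ grows.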
diff --git a/Fleeting Notes/Class/Bayesian Signal Processing/variance-and-convergence.md b/Fleeting Notes/Class/Bayesian Signal Processing/variance-and-convergence.md
new file mode 100644
index 0000000..17fe4eb
--- /dev/null
+++ b/Fleeting Notes/Class/Bayesian Signal Processing/variance-and-convergence.md
@@ -0,0 +1,26 @@
+# Variance and Convergence
+
+## Definition of Variance
+
+$$\text{Var}(X) = E\{(X - \mu)^2\}$$
+
+---
+
+## Variance of the Monte Carlo Estimator
+
+$$\text{Var}(\hat{I}) = \frac{(b-a)^2}{n} \text{Var}(f(X))$$
+
+### Convergence Rate
+
+Standard deviation scales as:
+$$\sigma \sim \frac{1}{\sqrt{n}}$$
+
+This rate is slow, so in a small number of dimensions deterministic (Riemann-style) rules converge faster.
+
+**However:** the cost of grid-based integration grows exponentially with dimension (the curse of dimensionality), while the Monte Carlo rate is dimension-independent. In high dimensions Monte Carlo becomes the practical choice, though $\text{Var}(f(\vec{X}))$ can still be large.
+
+---
+
+## Multi-Dimensional Case
+
+Combine Monte Carlo integration with variance-reduction techniques (importance sampling, stratified sampling, etc.) to manage the variance in high dimensions.
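The $\frac{1}{\sqrt{n}}$ scaling can be checked empirically: repeat the estimate many times at two sample sizes and compare the spread. Growing $n$ by a factor of 100 should shrink the standard deviation by about $\sqrt{100} = 10$. A rough sketch (the helper names and trial counts are illustrative assumptions, not from the notes):

```python
import random
import statistics

def mc_estimate(f, a, b, n, rng):
    # One Monte Carlo estimate: (b - a)/n * sum of f at uniform samples.
    return (b - a) / n * sum(f(rng.uniform(a, b)) for _ in range(n))

def estimator_std(f, a, b, n, trials=200, seed=1):
    # Empirical standard deviation of the estimator over repeated runs.
    rng = random.Random(seed)
    return statistics.stdev(mc_estimate(f, a, b, n, rng) for _ in range(trials))

f = lambda x: x * x
s_small = estimator_std(f, 0.0, 1.0, n=100)
s_large = estimator_std(f, 0.0, 1.0, n=10_000)
ratio = s_small / s_large  # roughly 10, since n grew by a factor of 100
```

The ratio hovers around 10 rather than hitting it exactly, since the empirical standard deviations are themselves noisy; more trials tighten it.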