Digitize BSP class notes (4 files)

This commit is contained in:
Split 2026-02-13 11:52:58 -05:00
parent 584c99ad10
commit fd5014eb15
4 changed files with 150 additions and 0 deletions

# Continuous and Mixed Random Variables
**Source:** probabilitycourse.com → Hossein Pishro-Nik
**Date:** Monday January 26th
---
## What is a continuous random variable?
> A random variable $X$ is **continuous** if its cumulative distribution function $F_X(x)$ is continuous for $x \in \mathbb{R}$.
---
## Probability Density Functions
Continuous variables lend themselves to **Probability Density Functions** (PDFs). A PDF is defined as:
$$f_X(x) = \frac{dF_X(x)}{dx}$$
---
## Example: Uniform Distribution
**Consider** a random variable $X$ uniformly distributed on $[a,b]$:
**CDF:**
$$F_X(x) = \begin{cases} 0, & x < a \\ \frac{x-a}{b-a}, & a \leq x \leq b \\ 1, & x > b \end{cases}$$
**PDF:**
$$f_X(x) = \begin{cases} 0, & x < a \\ \frac{1}{b-a}, & a \leq x \leq b \\ 0, & x > b \end{cases}$$
*Note: $F_X$ rises linearly from 0 to 1 between $a$ and $b$; $f_X$ is a rectangular function with height $\frac{1}{b-a}$ on $[a,b]$.*
---
## Probability from PDF
Now that we've got some machinery, we can compute the probability that $X$ lands in any interval $[c,d]$:
$$P(X \in [c,d]) = \int_c^d f_X(x) \, dx$$
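The uniform example can be checked numerically. A minimal Python sketch (function names are illustrative, not from the notes): integrate the PDF over an interval and compare against the difference of CDF values.

```python
# Sketch: uniform CDF/PDF on [a, b], and an interval probability
# obtained by integrating the PDF with a simple Riemann sum.
def uniform_cdf(x, a, b):
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def uniform_pdf(x, a, b):
    return 1.0 / (b - a) if a <= x <= b else 0.0

def prob_interval(c, d, a, b, n=100_000):
    # P(X in [c, d]) = integral of f_X over [c, d]
    h = (d - c) / n
    return sum(uniform_pdf(c + i * h, a, b) for i in range(n)) * h

# For X ~ U[0, 4], the interval probability over [2, 3]
# should match F_X(3) - F_X(2) = 1/4
p = prob_interval(2, 3, 0, 4)
```

The integration route and the CDF-difference route agree, which is exactly the fundamental-theorem relationship between $f_X$ and $F_X$.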

# Joint Distributions
**Source:** probabilitycourse.com
Joint distributions are **multivariate probability distributions**.
---
## Conditional Probability
$$P(A|B) = \frac{P(A \cap B)}{P(B)}, \text{ when } P(B) > 0$$
### Example: Fair Die
If this is a fair die, what's the PMF of the outcome $X$ given the event $A = \{X < 5\}$?
$$P(A) = \frac{4}{6}$$
$$P_{X|A}(1) = \frac{P(X = 1 \cap X < 5)}{P(X < 5)} = \frac{\frac{1}{6}}{\frac{4}{6}} = \frac{1}{4}$$
$$P_{X|A}(2) = P_{X|A}(3) = P_{X|A}(4) = P_{X|A}(1) = \frac{1}{4}$$
$$P_{X|A}(5) = \frac{P(X = 5 \cap X < 5)}{P(X < 5)} = \frac{0}{\frac{4}{6}} = 0$$
$$P_{X|A}(6) = 0$$
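The die example can be reproduced with exact fractions. A small sketch (names are illustrative, not from the notes):

```python
from fractions import Fraction

pmf = {k: Fraction(1, 6) for k in range(1, 7)}   # fair die
A = {k for k in pmf if k < 5}                    # conditioning event A = {X < 5}
P_A = sum(pmf[k] for k in A)                     # P(A) = 4/6

# P_{X|A}(k) = P(X = k and A) / P(A); zero outside A
cond_pmf = {k: (pmf[k] / P_A if k in A else Fraction(0))
            for k in pmf}
```

Using `Fraction` keeps the arithmetic exact, so the conditional PMF comes out as $\frac{1}{4}$ on $\{1,2,3,4\}$ and $0$ on $\{5,6\}$, and it sums to 1 as any PMF must.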
---
## Two Random Variables
When working with two random variables:
$$P_{X|Y}(x_i | y_j) = P(X = x_i | Y = y_j)$$
$$= \frac{P(X = x_i \cap Y = y_j)}{P_Y(y_j)}$$
$$= \frac{P_{XY}(x_i, y_j)}{P_Y(y_j)}$$
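The two-variable formula can be sketched on a joint PMF table. The table values below are an assumption chosen for illustration, not from the notes:

```python
from fractions import Fraction

# Hypothetical joint PMF P_{XY}(x, y) for X, Y in {0, 1}
joint = {
    (0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
    (1, 0): Fraction(1, 8), (1, 1): Fraction(3, 8),
}

def marginal_Y(y):
    # P_Y(y) = sum over x of P_{XY}(x, y)
    return sum(p for (x, yy), p in joint.items() if yy == y)

def cond_X_given_Y(x, y):
    # P_{X|Y}(x | y) = P_{XY}(x, y) / P_Y(y)
    return joint[(x, y)] / marginal_Y(y)
```

For each fixed $y_j$, the conditional values sum to 1, so conditioning really does produce a valid PMF in $x$.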

# Monte Carlo Integration
$$I = \int_a^b f(x) \, dx$$
---
## Riemann Integration
Pick equally spaced points $x_i = a + ih$ with step $h = \frac{b-a}{n}$:
$$I \approx \sum_{i=1}^{n} f(x_i) \frac{b-a}{n}$$
This isn't the only way to integrate.
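The Riemann sum above translates directly into code. A minimal sketch (the test integrand $\sin$ is an illustrative choice):

```python
import math

def riemann(f, a, b, n):
    # Equally spaced points x_i = a + i*h, i = 1..n, with h = (b - a)/n
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(1, n + 1)) * h

# integral of sin over [0, pi] is exactly 2
approx = riemann(math.sin, 0.0, math.pi, 10_000)
```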
---
## Monte Carlo Integration
Sample $x_1, \ldots, x_n$ independently from $U[a,b]$:
$$\hat{I} = \frac{b-a}{n} \sum_{i=1}^{n} f(x_i)$$
> *This is technically a random variable.*
### Expectation of the Estimator
$$E\{f(X)\} = \int_a^b f(x) \, f_X(x) \, dx = \int_a^b f(x) \frac{1}{b-a} \, dx = \frac{1}{b-a} \int_a^b f(x) \, dx$$
$$E\{\hat{I}\} = \frac{b-a}{n} \sum_{i=1}^{n} E\{f(x_i)\}$$
$$= \frac{b-a}{n} \sum_{i=1}^{n} \left( \frac{1}{b-a} \int_a^b f(x) \, dx \right)$$
$$= \frac{b-a}{n} \cdot \frac{n}{b-a} \int_a^b f(x) \, dx$$
$$E\{\hat{I}\} = \int_a^b f(x) \, dx$$
**Key insight:** The Monte Carlo estimator is unbiased — its expected value equals the true integral.
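The estimator is easy to implement with the standard library, and unbiasedness shows up as the estimate landing near the true integral for large $n$. A sketch (the integrand and seed are illustrative assumptions):

```python
import math
import random

def mc_integral(f, a, b, n, seed=0):
    # \hat{I} = (b - a)/n * sum of f at n uniform samples from [a, b]
    rng = random.Random(seed)
    return (b - a) / n * sum(f(rng.uniform(a, b)) for _ in range(n))

# integral of sin over [0, pi] is 2; for large n the estimate
# concentrates near the true value
est = mc_integral(math.sin, 0.0, math.pi, 200_000)
```

A fixed seed makes the run reproducible; different seeds give different estimates, which is the "technically a random variable" point above.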
---
## Multi-Dimensional Case
$$I = \iiint f(\vec{x}) \, d\vec{x}$$
$$\hat{I} = \frac{V}{n} \sum_{i=1}^{n} f(\vec{x}_i)$$
where $V$ is the volume of the integration region.
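The multi-dimensional estimator is the same recipe with a box volume in place of $b-a$. A sketch on a hypothetical 2-D integrand (all names are illustrative):

```python
import random

def mc_integral_nd(f, lo, hi, n, seed=0):
    # \hat{I} = V/n * sum of f at n uniform samples from the box [lo, hi]
    rng = random.Random(seed)
    V = 1.0
    for l, h in zip(lo, hi):
        V *= h - l                       # box volume
    total = 0.0
    for _ in range(n):
        x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        total += f(x)
    return V / n * total

# integral of x*y over the unit square [0,1]^2 is 1/4
est2d = mc_integral_nd(lambda v: v[0] * v[1], [0.0, 0.0], [1.0, 1.0], 100_000)
```

Note that the code changes only trivially with dimension; only the sampling region and $V$ grow.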

# Variance and Convergence
## Definition of Variance
$$\text{Var}(X) = E\{(X - \mu)^2\}, \quad \text{where } \mu = E\{X\}$$
---
## Variance of the Monte Carlo Estimator
$$\text{Var}(\hat{I}) = \frac{(b-a)^2}{n} \text{Var}(f(X))$$
### Convergence Rate
Standard deviation scales as:
$$\sigma \sim \frac{1}{\sqrt{n}}$$
This rate is slow for smooth one-dimensional integrands, where grid rules converge faster, but the $1/\sqrt{n}$ rate itself does not depend on dimension.
**However:** the variance constant $\text{Var}(f)$ can grow rapidly with dimensionality (curse of dimensionality), so naive uniform sampling becomes expensive.
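The $1/\sqrt{n}$ rate can be observed empirically by repeating the estimate at two sample sizes: growing $n$ by $10\times$ should shrink the spread by about $\sqrt{10} \approx 3.16$. A sketch (integrand, trial count, and seed are illustrative assumptions):

```python
import math
import random

def mc_estimate(f, a, b, n, rng):
    return (b - a) / n * sum(f(rng.uniform(a, b)) for _ in range(n))

def estimator_std(f, a, b, n, trials=200, seed=0):
    # Empirical standard deviation of the MC estimator over repeated runs
    rng = random.Random(seed)
    ests = [mc_estimate(f, a, b, n, rng) for _ in range(trials)]
    mean = sum(ests) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / trials)

s_small = estimator_std(math.sin, 0.0, math.pi, 100)
s_large = estimator_std(math.sin, 0.0, math.pi, 1000)
ratio = s_small / s_large   # should land near sqrt(10) ~ 3.16
```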
---
## Multi-Dimensional Case
Combine Monte Carlo integration with variance-reduction techniques (importance sampling, stratified sampling, etc.) to manage variance in high dimensions.