BSP - Random Variables

Discrete Random Variables

Imagine two sets, A and B.

There is A+B (the union, A \cup B), which is the set of elements in A or B, and AB (the intersection, A \cap B), which is the set of elements in both A and B.

Suppose our set S is {1, 2, 3, 4, 5, 6}, where A is the evens and B is the elements less than 5. Then AB = {2, 4}, while A+B = {1, 2, 3, 4, 6}.

Sometimes -A is known as NOT A, or \bar A: the elements not in A. Note that \bar S = {}, the empty set.
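A quick sanity check of these operations using Python's built-in sets on the die example above (the variable names are mine):

```python
S = {1, 2, 3, 4, 5, 6}   # the whole space
A = {2, 4, 6}            # the evens
B = {1, 2, 3, 4}         # less than 5

print(A | B)  # A + B, the union: {1, 2, 3, 4, 6}
print(A & B)  # AB, the intersection: {2, 4}
print(S - A)  # \bar A, NOT A: {1, 3, 5}
print(S - S)  # \bar S: set(), the empty set
```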

Probability and Random Variables

The set S (sometimes written \Omega) is the set of all possible outcomes of an experiment.

Events are subsets of the outcomes; for a die roll, 'odd' = {1, 3, 5} and 'even' = {2, 4, 6} are events.

A trial is a single performance of an experiment (single sample).

An experiment is to observe a single outcome \zeta_i.

An experiment E defines a set S of outcomes \zeta; certain subsets of S can be considered events.

The space S is the certain event. Basically, if it's in S, it happened. The null event \emptyset is the impossible event, e.g. you can't get a 7 on a six-sided die.

If an event consists of a single outcome \zeta_i then it is an elementary event.

Events A and B are mutually exclusive if they have no common elements (AB = \emptyset).

Defining exactly what the outcomes and events are is super important. It's very easy to get paradoxical results if they are not chosen carefully.

Axioms of Probability

Assign each event A a number, its probability: A \rightarrow P(A)

Axioms:

  1. P(A) \geq 0
  2. P(S) = 1
  3. if AB = \emptyset then P(A+B) = P(A) + P(B)

Corollaries:

  1. P(\emptyset) = 0
  2. P(A) = 1 - P(\bar A) \leq 1; fundamentally, the probability of an event is some number between 0 and 1
  3. If A and B are not mutually exclusive, then P(A+B) = P(A) + P(B) - P(AB) \leq P(A) + P(B)
  4. If B \subset A, then P(A) = P(B) + P(A \bar B) \geq P(B)
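As a sanity check, here is a minimal sketch that verifies the axioms and corollaries under an assumed equiprobable (fair-die) model; the function P and the event names are my own:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}            # the evens
B = {1, 2, 3, 4}         # less than 5

def P(event):
    # Equiprobable model: each of the 6 outcomes has probability 1/6.
    return Fraction(len(event), len(S))

assert P(A) >= 0 and P(S) == 1                   # axioms 1 and 2
assert P(A | {1, 3}) == P(A) + P({1, 3})         # axiom 3: A and {1, 3} are disjoint
assert P(set()) == 0                             # corollary 1
assert P(A) == 1 - P(S - A) and P(A) <= 1        # corollary 2
assert P(A | B) == P(A) + P(B) - P(A & B)        # corollary 3
C = {2, 4}                                       # C is a subset of A
assert P(A) == P(C) + P(A - C) and P(A) >= P(C)  # corollary 4
```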

Conditional Probability

Given an event B with P(B) > 0, we define the conditional probability as P(A|B) = \frac{P(AB)}{P(B)}.

Sometimes people write joint probability as P(A,B) which is exactly the same as P(AB).
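Continuing the fair-die sketch (P_given is a name of my own, not standard library code):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}            # the evens
B = {1, 2, 3, 4}         # less than 5

def P(event):
    return Fraction(len(event), len(S))

def P_given(a, b):
    # P(A|B) = P(AB) / P(B); only defined when P(B) > 0.
    return P(a & b) / P(b)

print(P_given(A, B))  # P(even | less than 5) = (2/6) / (4/6) = 1/2
```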

Given n mutually exclusive events A_1, ..., A_n where A_1 + A_2 + ... + A_n = S (they partition S), then for an arbitrary event B,

P(B) = \sum_i P(A_i, B) = \sum_i P(B | A_i) P(A_i)

This is the total probability theorem.

We know B = BS = B(A_1+...+A_n). Since the A's are mutually exclusive, the events BA_i and BA_j are also mutually exclusive for all i \neq j.

Thus, by axiom 3, P(B) = P(BA_1) + ... + P(BA_n); substituting P(BA_i) = P(B|A_i) P(A_i) from the definition of conditional probability gives the second form above.

Sanity checks: if B = S, then P(S|A_i) = 1 and the formula gives P(S) = \sum_i P(A_i) = 1. If B = A_j, then P(A_j|A_i) = 1 when i = j and 0 otherwise, so the sum collapses to P(A_j).
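A numerical check of the total probability theorem, again under the assumed fair-die model, using the elementary events as the partition:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
B = {2, 4, 6}  # event of interest: the roll is even

def P(event):
    return Fraction(len(event), len(S))

# The elementary events {1}, ..., {6} partition S:
# mutually exclusive, and their union is S.
partition = [{k} for k in S]

# P(B) = sum_i P(B | A_i) P(A_i); here P(B | A_i) is 1 if the
# outcome in A_i is even, and 0 otherwise.
total = sum((P(B & A_i) / P(A_i)) * P(A_i) for A_i in partition)
assert total == P(B) == Fraction(1, 2)
```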

[!important] Development of Bayes' Theorem

Bayesian statistics treat a probability as a degree of belief, not as an estimate of the probability from a number of repeated experiments.

P(AB) = P(A|B)P(B) = P(B|A)P(A)

Then Bayes' theorem becomes P(A|B) = \frac{P(B|A) P(A)}{P(B)}. Combined with the total probability theorem for a partition A_1, ..., A_n, this gives P(A_i|B) = \frac{P(B|A_i) P(A_i)}{\sum_j P(B|A_j) P(A_j)}.
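A small sketch of Bayes' theorem on the same assumed fair-die model (the helper bayes is my own name): the posterior belief in each elementary event after observing that the roll was even.

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
B = {2, 4, 6}  # observed: the roll was even

def P(event):
    return Fraction(len(event), len(S))

def bayes(a, b):
    # P(A|B) = P(B|A) P(A) / P(B)
    p_b_given_a = P(b & a) / P(a)
    return p_b_given_a * P(a) / P(b)

print(bayes({4}, B))  # posterior that the roll was 4: 1/3
print(bayes({3}, B))  # posterior that the roll was 3: 0
```

Each even outcome goes from a prior belief of 1/6 to a posterior of 1/3, while every odd outcome drops to 0, as expected.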