Haleshot committed on
Commit
9ae2eda
·
unverified ·
1 Parent(s): 57f80f7

refine text for clarity

probability/13_bernoulli_distribution.py CHANGED
@@ -10,7 +10,7 @@
10
 
11
  import marimo
12
 
13
- __generated_with = "0.11.22"
14
  app = marimo.App(width="medium", app_title="Bernoulli Distribution")
15
 
16
 
@@ -20,15 +20,15 @@ def _(mo):
20
  r"""
21
  # Bernoulli Distribution
22
 
23
- _This notebook is a computational companion to ["Probability for Computer Scientists"](https://chrispiech.github.io/probabilityForComputerScientists/en/part2/bernoulli/), by Stanford professor Chris Piech._
24
 
25
  ## Parametric Random Variables
26
 
27
- There are many classic and commonly-seen random variable abstractions that show up in the world of probability. At this point, we'll learn about several of the most significant parametric discrete distributions.
28
 
29
- When solving problems, if you can recognize that a random variable fits one of these formats, then you can use its pre-derived Probability Mass Function (PMF), expectation, variance, and other properties. Random variables of this sort are called **parametric random variables**. If you can argue that a random variable falls under one of the studied parametric types, you simply need to provide parameters.
30
 
31
- > A good analogy is a `class` in programming. Creating a parametric random variable is very similar to calling a constructor with input parameters.
32
  """
33
  )
34
  return
@@ -40,18 +40,16 @@ def _(mo):
40
  r"""
41
  ## Bernoulli Random Variables
42
 
43
- A **Bernoulli random variable** (also called a boolean or indicator random variable) is the simplest kind of parametric random variable. It can take on two values: 1 and 0.
44
 
45
- It takes on a 1 if an experiment with probability $p$ resulted in success and a 0 otherwise.
46
 
47
- Some example uses include:
 
 
 
48
 
49
- - A coin flip (heads = 1, tails = 0)
50
- - A random binary digit
51
- - Whether a disk drive crashed
52
- - Whether someone likes a Netflix movie
53
-
54
- Here $p$ is the parameter, but different instances of Bernoulli random variables might have different values of $p$.
55
  """
56
  )
57
  return
@@ -167,9 +165,11 @@ def _(expected_value, p_slider, plt, probabilities, values, variance):
167
  def _(mo):
168
  mo.md(
169
  r"""
170
- ## Proof: Expectation of a Bernoulli
 
 
171
 
172
- If $X$ is a Bernoulli with parameter $p$, $X \sim \text{Bern}(p)$:
173
 
174
  \begin{align}
175
E[X] &= \sum_x x \cdot P(X=x) && \text{Definition of expectation} \\
@@ -178,11 +178,7 @@ def _(mo):
178
  &= p && \text{Remove the 0 term}
179
  \end{align}
180
 
181
- ## Proof: Variance of a Bernoulli
182
-
183
- If $X$ is a Bernoulli with parameter $p$, $X \sim \text{Bern}(p)$:
184
-
185
- To compute variance, first compute $E[X^2]$:
186
 
187
  \begin{align}
188
  E[X^2]
@@ -206,18 +202,16 @@ def _(mo):
206
  def _(mo):
207
  mo.md(
208
  r"""
209
- ## Indicator Random Variable
210
-
211
- > **Definition**: An indicator variable is a Bernoulli random variable which takes on the value 1 if an **underlying event occurs**, and 0 _otherwise_.
212
 
213
- Indicator random variables are a convenient way to convert the "true/false" outcome of an event into a number. That number may be easier to incorporate into an equation.
214
 
215
- A random variable $I$ is an indicator variable for an event $A$ if $I = 1$ when $A$ occurs and $I = 0$ if $A$ does not occur. Indicator random variables are Bernoulli random variables, with $p = P(A)$. $I_A$ is a common choice of name for an indicator random variable.
216
 
217
- Here are some properties of indicator random variables:
218
 
219
- - $P(I=1)=P(A)$
220
- - $E[I]=P(A)$
221
  """
222
  )
223
  return
 
10
 
11
  import marimo
12
 
13
+ __generated_with = "0.12.6"
14
  app = marimo.App(width="medium", app_title="Bernoulli Distribution")
15
 
16
 
 
20
  r"""
21
  # Bernoulli Distribution
22
 
23
+ > _Note:_ This notebook builds on concepts from ["Probability for Computer Scientists"](https://chrispiech.github.io/probabilityForComputerScientists/en/part2/bernoulli/) by Chris Piech.
24
 
25
  ## Parametric Random Variables
26
 
27
+ Probability has a bunch of classic random variable patterns that show up over and over. Let's explore some of the most important parametric discrete distributions.
28
 
29
+ The Bernoulli is honestly the simplest distribution you'll ever see, but it's ridiculously powerful in practice. What makes it fascinating to me is how it captures any yes/no scenario: success/failure, heads/tails, 1/0.
30
 
31
+ I think of these distributions as the atoms of probability: they're the fundamental building blocks that everything else is made from.
32
  """
33
  )
34
  return
 
40
  r"""
41
  ## Bernoulli Random Variables
42
 
43
+ A Bernoulli random variable boils down to just two possible values: 1 (success) or 0 (failure). Dead simple, but incredibly useful.
44
 
45
+ Some everyday examples where I see these:
46
 
47
+ - Coin flip (heads=1, tails=0)
48
+ - Whether that sketchy email is spam
49
+ - If someone actually clicks my ad
50
+ - Whether my code compiles first try (almost always 0 for me)
51
 
52
+ All you need to specify is a single parameter $p$: the probability of success.
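To make this concrete, here's a minimal sketch of a Bernoulli PMF and sampler in plain Python (the helper names and the example value `p = 0.7` are my own illustrative choices, not from the notebook):

```python
import random

def bernoulli_pmf(k: int, p: float) -> float:
    # PMF of Bern(p): P(X = 1) = p, P(X = 0) = 1 - p, and 0 elsewhere
    if k == 1:
        return p
    if k == 0:
        return 1.0 - p
    return 0.0

def bernoulli_sample(p: float) -> int:
    # One trial: returns 1 with probability p, otherwise 0
    return 1 if random.random() < p else 0

p = 0.7  # illustrative parameter value
print(bernoulli_pmf(1, p), bernoulli_pmf(0, p))
```

Different instances of Bernoulli random variables are just different values of `p` passed to the same two functions, which is exactly the "constructor with input parameters" analogy.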
 
 
 
 
 
53
  """
54
  )
55
  return
 
165
  def _(mo):
166
  mo.md(
167
  r"""
168
+ ## Expectation and Variance of a Bernoulli
169
+
170
+ > _Note:_ The following derivations are included as reference material. The credit for these mathematical formulations belongs to ["Probability for Computer Scientists"](https://chrispiech.github.io/probabilityForComputerScientists/en/part2/bernoulli/) by Chris Piech.
171
 
172
+ Let's work through why $E[X] = p$ for a Bernoulli:
173
 
174
  \begin{align}
175
E[X] &= \sum_x x \cdot P(X=x) && \text{Definition of expectation} \\
 
178
  &= p && \text{Remove the 0 term}
179
  \end{align}
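As a quick sanity check on the algebra, a simulation (the seed, sample size, and `p = 0.3` are arbitrary choices of mine) should land the sample mean near $p$ and the sample variance near $p(1-p)$:

```python
import random

random.seed(0)          # fixed seed so the run is reproducible
p = 0.3                 # illustrative parameter
n = 100_000
draws = [1 if random.random() < p else 0 for _ in range(n)]

mean = sum(draws) / n                          # estimates E[X] = p
var = sum((x - mean) ** 2 for x in draws) / n  # estimates Var(X) = p(1 - p)
print(mean, var)  # should be close to 0.3 and 0.21
```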
180
 
181
+ And for variance, we first need $E[X^2]$:
 
 
 
 
182
 
183
  \begin{align}
184
  E[X^2]
 
202
  def _(mo):
203
  mo.md(
204
  r"""
205
+ ## Indicator Random Variables
 
 
206
 
207
+ Indicator variables are a clever trick I like to use: they turn events into numbers. Instead of dealing with "did the event happen?" (yes/no), we get "1" if it happened and "0" if it didn't.
208
 
209
+ Formally: an indicator variable $I$ for event $A$ equals 1 when $A$ occurs and 0 otherwise. These are just Bernoulli variables where $p = P(A)$. People often use notation like $I_A$ to name them.
210
 
211
+ Two key properties that make them super useful:
212
 
213
- $P(I=1)=P(A)$: the probability of getting a 1 is just the probability of the event
214
- $E[I]=P(A)$: the expected value equals the probability (this one's a game-changer!)
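A small sketch of an indicator in code (the die-roll event $A$ here is my own example): the empirical frequency of $I = 1$ estimates both $P(A)$ and $E[I]$ at once, since they're the same number.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def indicator_A() -> int:
    # Event A: a fair six-sided die shows 5 or 6, so P(A) = 1/3
    return 1 if random.randint(1, 6) >= 5 else 0

n = 60_000
samples = [indicator_A() for _ in range(n)]
estimate = sum(samples) / n  # estimates P(I = 1) = P(A), and equally E[I] = P(A)
print(estimate)  # should be close to 1/3
```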
215
  """
216
  )
217
  return