some more changes (apt)
Signed-off-by: Srihari Thyagarajan <[email protected]>
probability/21_logistic_regression.py
CHANGED
@@ -12,7 +12,7 @@
 
 import marimo
 
-__generated_with = "0.12.
 app = marimo.App(width="medium", app_title="Logistic Regression")
 
 
@@ -73,44 +73,106 @@ def _(mo):
         r"""
         ## Why NOT Linear Regression?
 
-        Can't we really use linear regression to address classification? The answer is NO!
 
-
 
-
-        $$p = \beta_0 + \beta_1 \cdot x_{tumor\_size}$$
-
-        This doesn't seem to be feasible as the right side, in principle, belongs to $\mathbb{R}$ (any real number) & the left side belongs to $(0,1)$ (a probability).
 
         Can we convert $(\beta_0 + \beta_1 \cdot x_{tumor\_size})$ to something belonging to $(0,1)$? That may work as an estimate of a probability! The answer is YES!
 
         We need a converter (a function), say, $g()$ that will connect $p \in (0,1)$ to $(\beta_0 + \beta_1 \cdot x_{tumor\_size}) \in \mathbb{R}$.
 
-
         """
     )
     return
 
 
 @app.cell(hide_code=True)
 def _(mo):
     mo.md(
         r"""
-
 
-
 
-        $$P(Y=1|\mathbf{X}=\mathbf{x}) = \sigma(z) \text{ where } z = \theta_0 + \sum_{i=1}^m \theta_i x_i$$
 
-
 
         $$P(Y=1|\mathbf{X}=\mathbf{x}) = \sigma(\mathbf{\theta}^T\mathbf{x}) \quad \text{where we always set $x_0$ to be 1}$$
 
         $$P(Y=0|\mathbf{X}=\mathbf{x}) = 1 - \sigma(\mathbf{\theta}^T\mathbf{x}) \quad \text{by the law of total probability}$$
 
-
 
-
         """
     )
     return
 
@@ -120,9 +182,9 @@ def _(mo):
 def _(mo):
     mo.md(
         r"""
-        ###
 
-
 
         Recall the prediction rule:
         $$\text{predicted class} =
@@ -131,9 +193,12 @@ def _(mo):
         0, & \text{otherwise}
         \end{cases}$$
 
-
 
-
         """
     )
     return
 
@@ -147,7 +212,7 @@ def _(mo):
 
 @app.cell(hide_code=True)
 def _(mo, np, plt):
-    # show relevant
 
     fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
 
@@ -213,172 +278,22 @@ def _(mo):
 def _(mo):
     mo.md(
         r"""
-
-
-        Before we get started I want to make sure that we are all on the same page with respect to notation. In logistic regression, $\theta$ is a vector of parameters of length $m$ and we are going to learn the values of those parameters based off of $n$ training examples. The number of parameters should be equal to the number of features of each datapoint.
-
-        Two pieces of notation that we use often in logistic regression that you may not be familiar with are:
-
-        $$\mathbf{\theta}^T\mathbf{x} = \sum_{i=1}^m \theta_i x_i = \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_m x_m \quad \text{dot product, aka weighted sum}$$
-
-        $$\sigma(z) = \frac{1}{1+ e^{-z}} \quad \text{sigmoid function}$$
-
-        The sigmoid function is a special function that maps any real number to a probability between 0 and 1. It has an S-shaped curve and is particularly useful for binary classification problems.
-        """
-    )
-    return
-
-
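To make these two pieces of notation concrete, here is a minimal standalone sketch (not part of the diff); the parameter vector `theta` and datapoint `x` are invented for illustration:

```python
import numpy as np

# hypothetical parameters and one datapoint; x[0] = 1 so theta[0] acts as the intercept
theta = np.array([-1.0, 0.8, 0.5])
x = np.array([1.0, 2.0, -1.0])

z = theta @ x                   # dot product: theta^T x = -1.0 + 1.6 - 0.5 = 0.1
sigma = 1 / (1 + np.exp(-z))    # sigmoid squashes z from R into (0, 1)
print(z, sigma)                 # 0.1 and ~0.525
```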
-@app.cell(hide_code=True)
-def _(mo, np, plt):
-    # Plot the sigmoid function
-
-    _fig, ax = plt.subplots(figsize=(10, 6))
-
-    # Generate x values
-    x = np.linspace(-10, 10, 1000)
-
-    # Compute sigmoid
-    def sigmoid(z):
-        return 1 / (1 + np.exp(-z))
-
-    y = sigmoid(x)
-
-    # Plot sigmoid function
-    ax.plot(x, y, 'b-', linewidth=2)
-
-    # Add horizontal lines at y=0, y=1, and y=0.5
-    ax.axhline(y=0, color='k', linestyle='-', alpha=0.3)
-    ax.axhline(y=1, color='k', linestyle='-', alpha=0.3)
-    ax.axhline(y=0.5, color='r', linestyle='--', alpha=0.5)
-
-    # Add vertical line at x=0
-    ax.axvline(x=0, color='k', linestyle='-', alpha=0.3)
-
-    # Add annotations
-    ax.text(1, 0.85, r'$\sigma(z) = \frac{1}{1 + e^{-z}}$', fontsize=14)
-    ax.text(-9, 0.1, 'As z → -∞, σ(z) → 0', fontsize=12)
-    ax.text(3, 0.9, 'As z → ∞, σ(z) → 1', fontsize=12)
-    ax.text(0.5, 0.4, 'σ(0) = 0.5', fontsize=12)
-
-    # Set labels and title
-    ax.set_xlabel('z', fontsize=14)
-    ax.set_ylabel('σ(z)', fontsize=14)
-    ax.set_title('Sigmoid Function', fontsize=16)
-
-    # Set axis limits
-    ax.set_xlim(-10, 10)
-    ax.set_ylim(-0.1, 1.1)
-
-    # Add grid
-    ax.grid(True, alpha=0.3)
-
-    mo.mpl.interactive(_fig)
-
-    mo.md(r"""
-    **Figure**: The sigmoid function maps any real number to a value between 0 and 1, making it perfect for representing probabilities.
-
-    /// note
-    For more information about the sigmoid function and its applications in deep learning, head over to [this detailed notebook](http://marimo.app/https://github.com/marimo-team/deepml-notebooks/blob/main/problems/problem-22/notebook.py) for more insights.
-    ///
-    """)
-
-    return ax, sigmoid, x, y
-
-
-@app.cell(hide_code=True)
-def _(mo):
-    mo.md(
-        r"""
-        ## Log Likelihood
-
-        In order to choose values for the parameters of logistic regression, we use Maximum Likelihood Estimation (MLE). As such, we are going to have two steps: (1) write the log-likelihood function and (2) find the values of $\theta$ that maximize the log-likelihood function.
-
-        The labels that we are predicting are binary, and the output of our logistic regression function is supposed to be the probability that the label is one. This means that we can (and should) interpret each label as a Bernoulli random variable: $Y \sim \text{Bern}(p)$ where $p = \sigma(\theta^T \textbf{x})$.
-
-        To start, here is a super slick way of writing the probability of one datapoint (recall this is the equation form of the probability mass function of a Bernoulli):
-
-        $$P(Y=y | X = \mathbf{x}) = \sigma({\mathbf{\theta}^T\mathbf{x}})^y \cdot \left[1 - \sigma({\mathbf{\theta}^T\mathbf{x}})\right]^{(1-y)}$$
-
-        Now that we know the probability mass function, we can write the likelihood of all the data:
-
-        $$L(\theta) = \prod_{i=1}^n P(Y=y^{(i)} | X = \mathbf{x}^{(i)}) \quad \text{The likelihood of independent training labels}$$
-
-        $$= \prod_{i=1}^n \sigma({\mathbf{\theta}^T\mathbf{x}^{(i)}})^{y^{(i)}} \cdot \left[1 - \sigma({\mathbf{\theta}^T\mathbf{x}^{(i)}})\right]^{(1-y^{(i)})} \quad \text{Substituting the likelihood of a Bernoulli}$$
-
-        And if you take the log of this function, you get the log likelihood for logistic regression:
-
-        $$LL(\theta) = \sum_{i=1}^n y^{(i)} \log \sigma(\mathbf{\theta}^T\mathbf{x}^{(i)}) + (1-y^{(i)}) \log [1 - \sigma(\mathbf{\theta}^T\mathbf{x}^{(i)})]$$
-
-        Recall that in MLE the only remaining step is to choose parameters ($\theta$) that maximize the log likelihood.
-        """
-    )
-    return
-
-
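A minimal NumPy sketch of this log-likelihood, assuming invented toy values for `X`, `y`, and `theta` (the first column of `X` is all ones, so `theta[0]` plays the role of $\theta_0$):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def log_likelihood(theta, X, y):
    # LL(theta) = sum_i [ y_i log sigma(theta^T x_i) + (1 - y_i) log(1 - sigma(theta^T x_i)) ]
    p = sigmoid(X @ theta)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# toy data, invented for illustration
X = np.array([[1.0, 0.5], [1.0, 2.0], [1.0, -1.0]])
y = np.array([0.0, 1.0, 0.0])
theta = np.array([-0.5, 1.0])

print(log_likelihood(theta, X, y))   # a single scalar; MLE seeks the theta that maximizes it
```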
-@app.cell(hide_code=True)
-def _(mo):
-    mo.md(
-        r"""
-        ## Gradient of Log Likelihood
 
-
 
         $$\frac{\partial LL(\theta)}{\partial \theta_j} = \sum_{i=1}^n \left[
-        y^{(i)} - \sigma(\mathbf{\theta}^T\mathbf{x}^{(i)})
         \right] x_j^{(i)}$$
 
-        This
-        """
-    )
-    return
-
-
-@app.cell(hide_code=True)
-def _(mo):
-    mo.md(
-        r"""
-        ## Gradient Descent Optimization
-
-        Our goal is to choose parameters ($\theta$) that maximize likelihood, and we know the partial derivative of log likelihood with respect to each parameter. We are ready for our optimization algorithm.
-
-        In the case of logistic regression, we can't solve for $\theta$ mathematically. Instead, we use a computer to choose $\theta$. To do so we employ an algorithm called gradient descent (a classic in optimization theory). The idea behind gradient descent is that if you continuously take small steps downhill (in the direction of your negative gradient), you will eventually make it to a local minimum. In our case we want to maximize our likelihood; minimizing the negative of our likelihood is equivalent to maximizing it.
-
-        The update to our parameters that results in each small step can be calculated as:
-
-        $$\theta_j^{\text{ new}} = \theta_j^{\text{ old}} + \eta \cdot \frac{\partial LL(\theta^{\text{ old}})}{\partial \theta_j^{\text{ old}}}$$
 
-        $$= \theta_j^{\text{ old}} + \eta \cdot \sum_{i=1}^n \left[
-        y^{(i)} - \sigma(\mathbf{\theta}^T\mathbf{x}^{(i)})
-        \right] x_j^{(i)}$$
-
-        Where $\eta$ is the magnitude of the step size that we take. If you keep updating $\theta$ using the equation above, you will converge on the best values of $\theta$. You now have an intelligent model. Here is the gradient ascent algorithm for logistic regression in pseudo-code:
         """
     )
     return
 
 
-@app.cell(hide_code=True)
-def _(mo):
-    # Create a stylized pseudocode display
-    mo.md(r"""
-    ```
-    Initialize: θⱼ = 0 for all 0 ≤ j ≤ m
-
-    Repeat many times:
-        gradient[j] = 0 for all 0 ≤ j ≤ m
-
-        For each training example (x, y):
-            For each parameter j:
-                gradient[j] += xⱼ(y - 1/(1+e^(-θᵀx)))
-
-        θⱼ += η * gradient[j] for all 0 ≤ j ≤ m
-    ```
-
-    **Pro-tip:** Don't forget that in order to learn the value of θ₀ you can simply define x₀ to always be 1.
-    """)
-    return
-
-
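The pseudocode above translates almost line-for-line into NumPy. A runnable sketch, with the inner loops vectorized and a toy dataset invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit_logistic(X, y, eta=0.1, n_iters=1000):
    """Gradient ascent on the log likelihood. X needs a leading column of ones."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        # gradient[j] = sum_i x_ij * (y_i - sigma(theta^T x_i)), vectorized over j
        gradient = X.T @ (y - sigmoid(X @ theta))
        theta += eta * gradient
    return theta

# toy, linearly separable data (invented for illustration)
X = np.array([[1.0, 0.2], [1.0, 0.9], [1.0, 2.5], [1.0, 3.1]])
y = np.array([0.0, 0.0, 1.0, 1.0])

theta = fit_logistic(X, y)
print(theta)               # learned [theta_0, theta_1]
print(sigmoid(X @ theta))  # predicted P(Y=1 | x) approaches the labels y
```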
 @app.cell(hide_code=True)
 def _(controls, mo, widget):
     # create the layout
@@ -506,111 +421,6 @@ def _(LogisticRegression, mo, np, plt, run_button, widget):
     )
 
 
-@app.cell(hide_code=True)
-def _(mo):
-    mo.md(
-        r"""
-        ## Derivations
-
-        In this section we provide the mathematical derivations for the gradient of log-likelihood. The derivations are worth knowing because these ideas are heavily used in Artificial Neural Networks.
-
-        Our goal is to calculate the derivative of the log likelihood with respect to each theta. To start, here is the definition of the derivative of the sigmoid function with respect to its input:
-
-        $$\frac{\partial}{\partial z} \sigma(z) = \sigma(z)[1 - \sigma(z)] \quad \text{to get the derivative with respect to $\theta$, use the chain rule}$$
-
-        Take a moment and appreciate the beauty of the derivative of the sigmoid function. The reason that sigmoid has such a simple derivative stems from the natural exponent in the sigmoid denominator.
-        """
-    )
-    return
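One quick way to convince yourself of this identity is a central-difference check (a standalone sketch; the evaluation point is chosen arbitrarily):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z, h = 0.7, 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)   # finite-difference derivative
analytic = sigmoid(z) * (1 - sigmoid(z))                # sigma(z)[1 - sigma(z)]
print(numeric, analytic)                                # the two values agree closely
```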
-
-
-@app.cell(hide_code=True)
-def _(mo):
-    mo.md(
-        r"""
-        ### Detailed Derivation
-
-        Since the likelihood function is a sum over all of the data, and in calculus the derivative of a sum is the sum of derivatives, we can focus on computing the derivative of one example. The gradient of theta is simply the sum of this term over every training datapoint.
-
-        First I am going to show you how to compute the derivative the hard way. Then we are going to look at an easier method. The derivative of the log likelihood for one datapoint $(\mathbf{x}, y)$:
-
-        $$\begin{align}
-        \frac{\partial LL(\theta)}{\partial \theta_j} &= \frac{\partial }{\partial \theta_j} y \log \sigma(\mathbf{\theta}^T\mathbf{x}) + \frac{\partial }{\partial \theta_j} (1-y) \log [1 - \sigma(\mathbf{\theta}^T\mathbf{x})] \quad \text{derivative of sum of terms}\\
-        &=\left[\frac{y}{\sigma(\theta^T\mathbf{x})} - \frac{1-y}{1-\sigma(\theta^T\mathbf{x})} \right] \frac{\partial}{\partial \theta_j} \sigma(\theta^T \mathbf{x}) \quad \text{derivative of log $f(x)$}\\
-        &=\left[\frac{y}{\sigma(\theta^T\mathbf{x})} - \frac{1-y}{1-\sigma(\theta^T\mathbf{x})} \right] \sigma(\theta^T \mathbf{x}) [1 - \sigma(\theta^T \mathbf{x})]\mathbf{x}_j \quad \text{chain rule + derivative of sigma}\\
-        &=\left[
-        \frac{y - \sigma(\theta^T\mathbf{x})}{\sigma(\theta^T \mathbf{x}) [1 - \sigma(\theta^T \mathbf{x})]}
-        \right] \sigma(\theta^T \mathbf{x}) [1 - \sigma(\theta^T \mathbf{x})]\mathbf{x}_j \quad \text{algebraic manipulation}\\
-        &= \left[y - \sigma(\theta^T\mathbf{x}) \right] \mathbf{x}_j \quad \text{cancelling terms}
-        \end{align}$$
-        """
-    )
-    return
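The final line can be sanity-checked numerically: the finite-difference gradient of the one-datapoint log likelihood should match $[y - \sigma(\theta^T\mathbf{x})]\,\mathbf{x}$. A sketch with invented values:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def ll_one(theta, x, y):
    # log likelihood of a single (x, y) datapoint
    p = sigmoid(theta @ x)
    return y * np.log(p) + (1 - y) * np.log(1 - p)

theta = np.array([0.3, -0.2, 0.5])   # invented parameter values
x = np.array([1.0, 1.5, -0.5])       # invented datapoint, x[0] = 1
y = 1.0

analytic = (y - sigmoid(theta @ x)) * x     # [y - sigma(theta^T x)] * x_j for every j
h = 1e-6
numeric = np.array([
    (ll_one(theta + h * e, x, y) - ll_one(theta - h * e, x, y)) / (2 * h)
    for e in np.eye(3)                      # perturb one theta_j at a time
])
print(analytic, numeric)                    # the two vectors agree
```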
-
-
-@app.cell(hide_code=True)
-def _(mo):
-    mo.md(
-        r"""
-        ### Derivatives Without Tears
-
-        That was the hard way. Logistic regression is the building block of [Artificial Neural Networks](https://en.wikipedia.org/wiki/Neural_network_(machine_learning)). If we want to scale up, we are going to have to get used to an easier way of calculating derivatives. For that we are going to have to welcome back our old friend the chain rule. By the chain rule:
-
-        $$\begin{align}
-        \frac{\partial LL(\theta)}{\partial \theta_j} &=
-        \frac{\partial LL(\theta)}{\partial p}
-        \cdot \frac{\partial p}{\partial \theta_j}
-        \quad \text{Where } p = \sigma(\theta^T\textbf{x})\\
-        &=
-        \frac{\partial LL(\theta)}{\partial p}
-        \cdot \frac{\partial p}{\partial z}
-        \cdot \frac{\partial z}{\partial \theta_j}
-        \quad \text{Where } z = \theta^T\textbf{x}
-        \end{align}$$
-
-        Chain rule is the decomposition mechanism of calculus. It allows us to calculate a complicated partial derivative $\frac{\partial LL(\theta)}{\partial \theta_j}$ by breaking it down into smaller pieces.
-
-        $$\begin{align}
-        LL(\theta) &= y \log p + (1-y) \log (1 - p) \quad \text{Where } p = \sigma(\theta^T\textbf{x}) \\
-        \frac{\partial LL(\theta)}{\partial p} &= \frac{y}{p} - \frac{1-y}{1-p} \quad \text{By taking the derivative}
-        \end{align}$$
-
-        $$\begin{align}
-        p &= \sigma(z) \quad \text{Where } z = \theta^T\textbf{x}\\
-        \frac{\partial p}{\partial z} &= \sigma(z)[1- \sigma(z)] \quad \text{By taking the derivative of the sigmoid}
-        \end{align}$$
-
-        $$\begin{align}
-        z &= \theta^T\textbf{x} \quad \text{As previously defined}\\
-        \frac{\partial z}{\partial \theta_j} &= \textbf{x}_j \quad \text{Only $\textbf{x}_j$ interacts with $\theta_j$}
-        \end{align}$$
-
-        Each of those derivatives was much easier to calculate. Now we simply multiply them together.
-
-        $$\begin{align}
-        \frac{\partial LL(\theta)}{\partial \theta_j} &=
-        \frac{\partial LL(\theta)}{\partial p}
-        \cdot \frac{\partial p}{\partial z}
-        \cdot \frac{\partial z}{\partial \theta_j} \\
-        &=
-        \Big[\frac{y}{p} - \frac{1-y}{1-p}\Big]
-        \cdot \sigma(z)[1- \sigma(z)]
-        \cdot \textbf{x}_j \quad \text{By substituting in for each term} \\
-        &=
-        \Big[\frac{y}{p} - \frac{1-y}{1-p}\Big]
-        \cdot p[1- p]
-        \cdot \textbf{x}_j \quad \text{Since } p = \sigma(z)\\
-        &=
-        [y(1-p) - p(1-y)]
-        \cdot \textbf{x}_j \quad \text{Multiplying in} \\
-        &= [y - p]\textbf{x}_j \quad \text{Expanding} \\
-        &= [y - \sigma(\theta^T\textbf{x})]\textbf{x}_j \quad \text{Since } p = \sigma(\theta^T\textbf{x})
-        \end{align}$$
-        """
-    )
-    return
-
-
 @app.cell(hide_code=True)
 def _(mo):
     mo.md(
 
 
 import marimo
 
+__generated_with = "0.12.5"
 app = marimo.App(width="medium", app_title="Logistic Regression")
 
 
         r"""
         ## Why NOT Linear Regression?
 
+        Can't we simply use linear regression to address classification? The answer is NO! The key issue is that probabilities must lie between 0 and 1, while linear regression can output any real number.
 
+        If we tried using linear regression directly:
+        $$p = \beta_0 + \beta_1 \cdot x_{feature}$$
 
+        This creates a problem: the right side can produce any value in $\mathbb{R}$ (all real numbers), but a probability $p$ must be confined to the range $(0,1)$.
 
         Can we convert $(\beta_0 + \beta_1 \cdot x_{tumor\_size})$ to something belonging to $(0,1)$? That may work as an estimate of a probability! The answer is YES!
 
         We need a converter (a function), say, $g()$ that will connect $p \in (0,1)$ to $(\beta_0 + \beta_1 \cdot x_{tumor\_size}) \in \mathbb{R}$.
 
+        The solution is to use a "link function" that maps any real number to a valid probability. This is where the sigmoid function comes in.
         """
     )
     return
 
 
+@app.cell(hide_code=True)
+def _(mo, np, plt):
+    # plot the sigmoid to illustrate the statements above
+    _fig, ax = plt.subplots(figsize=(10, 6))
+
+    # x values
+    x = np.linspace(-10, 10, 1000)
+
+    # sigmoid formula
+    def sigmoid(z):
+        return 1 / (1 + np.exp(-z))
+
+    y = sigmoid(x)
+
+    # plot
+    ax.plot(x, y, 'b-', linewidth=2)
+
+    ax.axhline(y=0, color='k', linestyle='-', alpha=0.3)
+    ax.axhline(y=1, color='k', linestyle='-', alpha=0.3)
+    ax.axhline(y=0.5, color='r', linestyle='--', alpha=0.5)
+
+    # vertical line at x=0
+    ax.axvline(x=0, color='k', linestyle='-', alpha=0.3)
+
+    # annotations
+    ax.text(1, 0.85, r'$\sigma(z) = \frac{1}{1 + e^{-z}}$', fontsize=14)
+    ax.text(-9, 0.1, 'As z → -∞, σ(z) → 0', fontsize=12)
+    ax.text(3, 0.9, 'As z → ∞, σ(z) → 1', fontsize=12)
+    ax.text(0.5, 0.4, 'σ(0) = 0.5', fontsize=12)
+
+    # labels and title
+    ax.set_xlabel('z', fontsize=14)
+    ax.set_ylabel('σ(z)', fontsize=14)
+    ax.set_title('Sigmoid Function', fontsize=16)
+
+    # set axis limits
+    ax.set_xlim(-10, 10)
+    ax.set_ylim(-0.1, 1.1)
+
+    # grid
+    ax.grid(True, alpha=0.3)
+
+    mo.mpl.interactive(_fig)
+    return ax, sigmoid, x, y
+
+
 @app.cell(hide_code=True)
 def _(mo):
     mo.md(
         r"""
+        **Figure**: The sigmoid function maps any real number to a value between 0 and 1, making it perfect for representing probabilities.
 
+        /// note
+        For more information about the sigmoid function, see [this detailed notebook](http://marimo.app/https://github.com/marimo-team/deepml-notebooks/blob/main/problems/problem-22/notebook.py).
+        ///
+        """
+    )
+    return
 
 
+@app.cell(hide_code=True)
+def _(mo):
+    mo.md(
+        r"""
+        ## The Core Concept (math)
+
+        Logistic regression models the probability of class 1 using the sigmoid function:
+
+        $$P(Y=1|X=x) = \sigma(z) \text{ where } z = \theta_0 + \sum_{i=1}^m \theta_i x_i$$
+
+        The sigmoid function $\sigma(z)$ transforms any real number into a probability between 0 and 1:
+
+        $$\sigma(z) = \frac{1}{1+ e^{-z}}$$
+
+        This can be written more compactly using vector notation:
 
         $$P(Y=1|\mathbf{X}=\mathbf{x}) = \sigma(\mathbf{\theta}^T\mathbf{x}) \quad \text{where we always set $x_0$ to be 1}$$
 
         $$P(Y=0|\mathbf{X}=\mathbf{x}) = 1 - \sigma(\mathbf{\theta}^T\mathbf{x}) \quad \text{by the law of total probability}$$
 
+        Here $\theta$ represents the model parameters that need to be learned from data, and $\mathbf{x}$ is the feature vector (with $x_0 = 1$ to account for the intercept term).
 
+        > **Note:** For the detailed mathematical derivation of how these parameters are learned through Maximum Likelihood Estimation (MLE) and Gradient Descent (GD), please refer to [Chris Piech's original material](https://chrispiech.github.io/probabilityForComputerScientists/en/part5/log_regression/). The mathematical details are elegant but beyond the scope of this notebook.
         """
     )
     return
 
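To make these equations concrete, here is a small standalone sketch; the parameter values and the single feature value are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

theta = np.array([-4.0, 1.5])   # hypothetical learned parameters [theta_0, theta_1]
x = np.array([1.0, 3.2])        # x[0] = 1 for the intercept, plus one feature value

p_class1 = sigmoid(theta @ x)   # P(Y=1 | X=x) = sigma(theta^T x)
p_class0 = 1 - p_class1         # P(Y=0 | X=x), by the law of total probability
print(p_class1, p_class0)       # ~0.69 and ~0.31
```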
 def _(mo):
     mo.md(
         r"""
+        ### Linear Decision Boundary
 
+        A key characteristic of logistic regression is that it creates a linear decision boundary: when the model predicts, it is effectively dividing the feature space with a straight line (in 2D, literally of the form $y = mx + c$) or a hyperplane (in higher dimensions).
 
         Recall the prediction rule:
         $$\text{predicted class} =
         0, & \text{otherwise}
         \end{cases}$$
 
+        For a two-feature model, the decision boundary where $P(Y=1|X=x) = 0.5$ occurs at:
+        $$\theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0$$
+
+        A simple logistic regression predicts the class label by identifying the regions on either side of a straight line (or hyperplane in general), hence it's a _linear_ classifier.
 
+        This linear nature makes logistic regression effective for linearly separable classes but limited when dealing with more complex patterns.
         """
     )
     return
 
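The boundary equation rearranges directly into the line you would plot. A sketch assuming hypothetical fitted values for $\theta$:

```python
import numpy as np

theta = np.array([-1.0, 2.0, 3.0])   # hypothetical [theta_0, theta_1, theta_2]

# On the boundary: theta_0 + theta_1*x1 + theta_2*x2 = 0
# Solving for x2:   x2 = -(theta_0 + theta_1*x1) / theta_2
x1 = np.linspace(-2, 2, 5)
x2 = -(theta[0] + theta[1] * x1) / theta[2]
print(np.column_stack([x1, x2]))     # points on the P(Y=1|x) = 0.5 line
```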
 
 @app.cell(hide_code=True)
 def _(mo, np, plt):
+    # show a comparison that illustrates the concepts above
 
     fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
 
 def _(mo):
     mo.md(
         r"""
+        Logistic regression is typically trained using Maximum Likelihood Estimation (MLE): finding the parameters $\theta$ that make our observed data most probable.
 
+        The optimization process generally uses gradient descent (or one of its variants) to iteratively improve the parameters. The gradient has a surprisingly elegant form:
 
         $$\frac{\partial LL(\theta)}{\partial \theta_j} = \sum_{i=1}^n \left[
+        y^{(i)} - \sigma(\theta^T x^{(i)})
         \right] x_j^{(i)}$$
 
+        This shows that the update to each parameter depends on the prediction error (actual minus predicted) multiplied by the feature value.
 
+        For those interested in the complete mathematical derivation, including the log-likelihood calculation, the detailed steps of gradient descent, and the training pseudocode, see the [original lecture notes](https://chrispiech.github.io/probabilityForComputerScientists/en/part5/log_regression/).
         """
     )
     return
 
 
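The interactive demo below appears to use scikit-learn's `LogisticRegression` (it shows up in a later cell signature), which can also be exercised on its own. A sketch with an invented toy dataset; note that scikit-learn applies L2 regularization by default, so it maximizes a penalized log likelihood:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy two-feature dataset (invented for illustration)
X = np.array([[0.2, 1.0], [0.8, 0.3], [2.1, 2.5], [3.0, 1.8]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression()
model.fit(X, y)                       # fits (regularized) MLE via an iterative solver
print(model.intercept_, model.coef_)  # theta_0 and [theta_1, theta_2]
print(model.predict_proba(X)[:, 1])   # P(Y=1 | x) for each row
```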
 @app.cell(hide_code=True)
 def _(controls, mo, widget):
     # create the layout
 
     )
 
 
 @app.cell(hide_code=True)
 def _(mo):
     mo.md(