04 - Logistic Regression - lec. 5,6
ucla | CS M146 | 2023-04-17T15:31
Supplemental
- any event $A$ satisfies $0 \le P(A) \le 1$
- the probabilities of all possible outcomes sum to $1$
- logistic regression is a classification model
- $\log$ is always base $e$, i.e. the natural logarithm
- loss function in classification is binary or softmax cross-entropy loss
Lecture
Classification using Probability
instead of predicting the class, predict the probability that the instance belongs to that class, i.e. $P(y | \bm x)$
- binary classification: treat the two labels $y=0$ and $y=1$ as events for an input $\bm x$
Logistic Regression
Logistic (Sigmoid) Regression Model/Func
hypothesis function is a probability in $[0,1]$, i.e. $h_{\bm\theta}(\bm x)=\sigma(\bm\theta^T\bm x)=\frac{1}{1+e^{-\bm\theta^T\bm x}}=P_{\bm\theta}(y=1 | \bm x)$
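A minimal sketch of the hypothesis in Python (assuming NumPy; the function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """h_theta(x) = sigmoid(theta^T x) = P(y=1 | x; theta)."""
    return sigmoid(theta @ x)
```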
Interpreting the Hypothesis Function
- hypo func gives the probability that the label $=1$ given some input, e.g. $h_{\bm\theta}(\bm x)=0.8$ means an 80% chance that $y=1$ for input $\bm x$
Non-Linear Decision Boundary
- we can apply a basis function expansion to the features, just like we did for linear regression (see the sketch after this list)
- NOTE: loss functions don't need to be averaged because dividing by $N$ only rescales the gradient, which is equivalent to rescaling the learning rate, so minimization via gradient descent finds the same minimizer
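A sketch of the basis-expansion idea (a hypothetical quadratic expansion for 2-D inputs; the model stays linear in $\bm\theta$, but the decision boundary $\bm\theta^T\phi(\bm x)=0$ can now be curved):

```python
import numpy as np

def quadratic_features(x):
    """Expand a 2-D input [x1, x2] into quadratic basis features.
    With these features, theta^T phi(x) = 0 can describe a conic
    (circle, ellipse, ...) rather than only a straight line."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])
```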
Loss Function
- loss of a single instance:
$\ell(y^{(i)},\bm x^{(i)},\bm\theta)=\begin{cases}-\log \big(h_{\bm\theta}(\bm x^{(i)})\big) & y^{(i)}=1 \\ -\log \big(1-h_{\bm\theta}(\bm x^{(i)})\big) & y^{(i)}=0\end{cases}$
Intuition behind loss
- the log loss is non-linear: confidently wrong guesses incur much higher loss than mildly wrong ones (see the sketch below)
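A self-contained sketch of the per-instance loss and its non-linear penalty (assuming NumPy; `h` is the predicted probability $h_{\bm\theta}(\bm x)$):

```python
import numpy as np

def instance_loss(y, h):
    """Binary cross-entropy for one instance, given h in (0, 1):
    -log(h) when y = 1, -log(1 - h) when y = 0."""
    return -np.log(h) if y == 1 else -np.log(1.0 - h)

# Confidently wrong predictions cost far more than mildly wrong ones:
print(instance_loss(1, 0.01))  # ~4.61 -- very wrong
print(instance_loss(1, 0.40))  # ~0.92 -- somewhat wrong
print(instance_loss(1, 0.99))  # ~0.01 -- nearly right
```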
Regularized Loss Function
- note the L2 norm runs over indices $1$ to $D$
- we don't regularize the bias term $\theta_0$
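Putting the bullets together, a standard form of the regularized objective consistent with the notes above (assuming L2 regularization with strength $\lambda$, bias $\theta_0$ excluded from the penalty):

$J(\bm\theta)=\sum_{i=1}^{N}\ell\big(y^{(i)},\bm x^{(i)},\bm\theta\big)+\lambda\sum_{j=1}^{D}\theta_j^2$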
Gradient Descent
- weight updates are simultaneous, similar to linear regression and the perceptron (see the sketch below)
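A minimal batch gradient-descent sketch (assuming NumPy, a design matrix `X` of shape $N \times D$ whose first column is all ones for the bias, labels `y` in $\{0,1\}$, and learning rate `alpha`; names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha):
    """One simultaneous update of all weights:
    theta_j <- theta_j - alpha * sum_i (h(x_i) - y_i) * x_ij.
    Same update form as linear regression, but h is the sigmoid."""
    h = sigmoid(X @ theta)   # predicted P(y=1 | x) for every row
    grad = X.T @ (h - y)     # gradient of the summed cross-entropy loss
    return theta - alpha * grad
```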
Multi-Class Classification
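For $K>2$ classes, the sigmoid generalizes to the softmax mentioned in the supplemental notes; a minimal sketch (assuming NumPy and a weight matrix `Theta` of shape $K \times D$; names are illustrative):

```python
import numpy as np

def softmax(z):
    """Map a length-K score vector to a probability distribution."""
    z = z - np.max(z)   # shift scores for numerical stability
    e = np.exp(z)
    return e / e.sum()

def class_probs(Theta, x):
    """P(y = k | x) for each of the K classes."""
    return softmax(Theta @ x)
```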
Discussion
Resources
📌 **SUMMARY**