Multiplicative Updates for L1-Regularized Linear and Logistic Regression

Published on Oct 08, 2007 · 10044 views

Multiplicative update rules have proven useful in many areas of machine learning. Simple to implement and guaranteed to converge, they account in part for the widespread popularity of algorithms such …
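Part I of the talk centers on a multiplicative update for nonnegative quadratic programming (NQP): minimizing F(v) = ½ vᵀAv + bᵀv subject to v ≥ 0. A minimal NumPy sketch of that style of update, assuming the standard decomposition A = A⁺ − A⁻ into elementwise positive and negative parts; the function name, iteration count, and small denominator guard are illustrative choices, not from the talk:

```python
import numpy as np

def nqp_multiplicative_update(A, b, v, n_iter=500):
    """Sketch: minimize F(v) = 0.5 v^T A v + b^T v subject to v >= 0
    with a multiplicative update. Assumes v starts strictly positive."""
    Ap = np.maximum(A, 0.0)   # A+ : elementwise positive part of A
    Am = np.maximum(-A, 0.0)  # A- : elementwise negative part, A = A+ - A-
    for _ in range(n_iter):
        a = Ap @ v            # (A+ v)_i
        c = Am @ v            # (A- v)_i
        # Each coordinate is rescaled by a nonnegative factor; fixed points
        # satisfy the KKT conditions of the constrained problem.
        v = v * (-b + np.sqrt(b**2 + 4.0 * a * c)) / (2.0 * a + 1e-12)
    return v
```

Because the update is a coordinate-wise rescaling, iterates stay nonnegative automatically, with no step size to tune.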


Chapter list

00:00 Multiplicative updates for L1-regularized regression
00:14 Trends in data analysis
00:57 How do we scale?
01:42 Searching for sparse models
02:31 An unexpected connection
02:59 This talk
03:32 Part I. Multiplicative updates
03:34 Nonnegative quadratic programming (NQP)
04:38 Matrix decomposition
05:30 Multiplicative update
05:55 Matrix decomposition (a)
06:01 Multiplicative update (a)
06:45 Fixed points
07:36 Attractive properties for NQP
08:13 Part II. Sparse regression
08:20 Linear regression
08:50 Regularization
09:13 L2 versus L1
09:58 Reformulation as NQP
10:15 L2 versus L1 (a)
10:32 Reformulation as NQP (a)
11:04 L1 norm as NQP
11:36 Why reformulate?
12:00 Logistic regression
12:41 Part III. Experimental results
12:54 Convergence to sparse solution
14:37 Primal-dual convergence
15:32 Large-scale implementation
17:18 Discussion
18:35 Large-scale implementation (a)
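The "Reformulation as NQP" and "L1 norm as NQP" chapters rest on the standard split w = u − v with u, v ≥ 0, so that ‖w‖₁ = Σᵢ(uᵢ + vᵢ) and the L1-regularized least-squares objective becomes a nonnegative quadratic program in z = [u; v]. A hedged sketch of that reformulation, solved with a multiplicative update; the helper name, starting point, and iteration count are my own choices:

```python
import numpy as np

def l1_regression_nqp(X, y, lam, n_iter=2000):
    """Sketch: minimize 0.5 * ||X w - y||^2 + lam * ||w||_1 by splitting
    w = u - v (u, v >= 0), giving an NQP over z = [u; v] with
    A = [[Q, -Q], [-Q, Q]], Q = X^T X, b = [lam - X^T y; lam + X^T y]."""
    d = X.shape[1]
    Q = X.T @ X
    p = X.T @ y
    A = np.block([[Q, -Q], [-Q, Q]])
    b = np.concatenate([lam - p, lam + p])
    Ap = np.maximum(A, 0.0)   # A = A+ - A- split, as in the NQP update
    Am = np.maximum(-A, 0.0)
    z = np.ones(2 * d)        # strictly positive starting point
    for _ in range(n_iter):
        a = Ap @ z
        c = Am @ z
        z = z * (-b + np.sqrt(b**2 + 4.0 * a * c)) / (2.0 * a + 1e-12)
    u, v = z[:d], z[d:]
    return u - v
```

On a trivial orthonormal design this reduces to soft thresholding of Xᵀy at level lam, which gives a quick sanity check of the reformulation.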