Randomized PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension

Published on Feb 25, 2007 · 5402 Views

We design an on-line algorithm for Principal Component Analysis. In each trial the current instance is projected onto a probabilistically chosen low-dimensional subspace. The total expected quadratic approximation error is bounded by the quadratic approximation error of the best subspace chosen in hindsight, plus an additional regret term that is logarithmic in the dimension.
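The quantities in the abstract can be illustrated numerically. The sketch below (my own illustration, assuming NumPy; the uniform coordinate sampling is only a stand-in for the talk's density-matrix sampling, not the actual algorithm) compares the total quadratic error of probabilistically chosen k-dimensional subspaces against the best rank-k subspace chosen in hindsight:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 10, 2, 200  # hypothetical: ambient dim, subspace dim, number of trials

# Stream of unit-vector instances, as in the talk's variance setting
X = rng.normal(size=(T, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Best k-dim subspace in hindsight: top-k eigenvectors of the data covariance.
# Its total quadratic error is the sum of the n-k smallest eigenvalues.
C = X.T @ X
eigvals, _ = np.linalg.eigh(C)  # ascending order
best_err = eigvals[: n - k].sum()

# A probabilistic choice of subspace: project onto k coordinates sampled
# uniformly (a naive stand-in for sampling from a learned density matrix).
err = 0.0
for x in X:
    idx = rng.choice(n, size=k, replace=False)
    P = np.zeros((n, n))
    P[idx, idx] = 1.0               # rank-k coordinate projection
    err += np.sum((x - P @ x) ** 2)  # quadratic approximation error

print(best_err, err)
```

The gap between `err` and `best_err` is the regret of this naive strategy; the talk's algorithm learns the distribution over subspaces so that this gap grows only logarithmically in n.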

Chapter list

On-line PCA (00:06)
Predicting as well as the best expert (00:37)
Goal of expert setting (00:42)
Dot loss (01:22)
Goal of expert setting (02:13)
Dot loss (02:17)
The works (02:24)
Total loss bound (03:17)
Alternatives to the softmin (03:49)
Outline (04:17)
Variance (04:19)
Outline (04:43)
Variance of unit vectors (04:47)
Variance minimization problem (05:17)
Mixtures of directions/dyads = density matrix (06:07)
Density matrices (07:50)
Variance minimization with density matrices (09:03)
Expert setting retained as diagonal case (09:29)
Variance minimization with density matrices (09:49)
Expert setting retained as diagonal case (10:29)
Deriving the algorithm (10:59)
Bound generalizes (12:55)
Outline (13:10)
PCA (13:11)
Rewrite quadratic loss as linear loss (13:31)
So far (14:48)
Minimizing loss of m = n − k experts (15:26)
New trick: cap weights (16:36)
Weights ≤ 1/m (19:11)
New trick: cap weights (19:36)
Weights ≤ 1/m (19:40)
Why capping? (20:23)
Lift sets of expert alg. to matrices (21:12)
Update and Winnow-like bound (21:55)
Two families again (22:50)
Outline (23:39)
What's next? (23:42)
Update and Winnow-like bound (25:45)
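The "cap weights" chapters (16:36–20:23) refer to constraining each expert's probability to at most 1/m before a set of m experts is sampled. A minimal sketch of such a capping projection for a probability vector (my own illustration of the idea named in the chapter titles, not code from the talk; the talk applies the analogous step to the eigenvalues of a density matrix):

```python
import numpy as np

def cap_weights(w, m):
    """Project a probability vector w so that no entry exceeds 1/m.

    Entries above the cap are pinned to 1/m; the leftover probability
    mass is spread over the uncapped entries in proportion to their
    current weights. Assumes m < len(w) so a feasible point exists.
    """
    w = np.asarray(w, dtype=float)
    cap = 1.0 / m
    capped = np.zeros(len(w), dtype=bool)
    for _ in range(len(w)):  # each pass caps at least one more entry
        free_mass = 1.0 - capped.sum() * cap
        scaled = np.where(capped, cap, 0.0)
        scaled[~capped] = w[~capped] * free_mass / w[~capped].sum()
        over = (~capped) & (scaled > cap + 1e-12)
        if not over.any():
            return scaled
        capped |= over
    return scaled

print(cap_weights([0.7, 0.2, 0.1], 2))
```

For example, with m = 2 the weight 0.7 is capped at 0.5 and the remaining mass 0.5 is split over the other two entries in ratio 2:1, giving (0.5, 1/3, 1/6). Capping guarantees that a k-dimensional subspace (m = n − k missed directions) can be sampled consistently with the maintained weight vector.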