
Learning with Square Loss: Localization through Offset Rademacher Complexity

Published on Aug 20, 2015 · 2,055 views

We consider regression with square loss and general classes of functions without the boundedness assumption. We introduce a notion of offset Rademacher complexity that provides a transparent way to study localization both in expectation and in high probability.
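As a pointer to the central object of the talk, the offset Rademacher complexity can be sketched as follows (the notation here is an assumption based on the abstract, not copied verbatim from the slides):

```latex
% Offset Rademacher complexity of a function class F (sketch):
% \epsilon_1, \dots, \epsilon_n are i.i.d. Rademacher signs, c > 0 is an offset parameter.
\mathcal{R}^{\mathrm{off}}_n(F; c)
  \;=\;
  \mathbb{E}_{\epsilon}\,
  \sup_{f \in F}\;
  \frac{1}{n}\sum_{i=1}^{n}
  \Bigl( \epsilon_i\, f(x_i) \;-\; c\, f(x_i)^2 \Bigr).
% The negative quadratic term -c f(x_i)^2 penalizes functions of large magnitude,
% which is what yields "automatic" localization without a boundedness assumption.
```

Compared to the classical Rademacher average, the only change is the quadratic offset inside the supremum; this single term plays the role that an explicit localization radius plays in local Rademacher analyses.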

Chapter list

- Learning with Square Loss: Localization through Offset Rademacher Complexity (00:00)
- Untitled (00:29)
- Goals (01:06)
- Local Rademacher averages for ERM analysis (01:57)
- First, consider convex F (02:25)
- Offset Rademacher (03:24)
- Untitled (04:17)
- Intuition (04:52)
- Next: prove (Py) for non-convex classes (05:54)
- The Star algorithm - 1 (06:44)
- The Star algorithm - 2 (07:44)
- The Star algorithm - 3 (07:49)
- The Star algorithm - 4 (07:51)
- The Star algorithm - 5 (08:10)
- Key geometric inequality (09:07)
- Proof of key geometric inequality (09:53)
- Corollary (11:14)
- Bounded case: warm up (11:59)
- High probability statement for unbounded functions - 1 (12:32)
- High probability statement for unbounded functions - 2 (14:28)
- Critical radius (15:27)
- Example: linear regression (16:14)
- Example: finite aggregation (16:27)
- Lemma (Chaining) (16:34)
- Example: nonparametric function classes (16:40)
- Lower bound (16:47)
- Thanks! (16:53)