Deep Learning: Theoretical Motivations

Published on 2015-09-13, 88,292 views

Presentation

Deep Learning: Theoretical Motivations
Breakthrough
Automating Feature Discovery
Why is deep learning working so well?
Machine Learning, AI & No Free Lunch
Goal Hierarchy
Why are classical nonparametric methods not cutting it?
ML 101. What We Are Fighting Against: The Curse of Dimensionality
Not Dimensionality so much as Number of Variations
Putting Probability Mass where Structure is Plausible
Bypassing the curse of dimensionality
Learning multiple levels of representation
The Power of Distributed Representations
Non-distributed representations
The need for distributed representations
Classical Symbolic AI vs Representation Learning
Neural Language Models: fighting one exponential by another one!
Neural word embeddings: visualization, directions = Learned Attributes
Analogical Representations for Free (see the sketch after this outline)
The Next Challenge: Rich Semantic Representations for Word Sequences
The Power of Deep Representations
The Depth Prior can be Exponentially Advantageous
“Shallow” computer program
“Deep” computer program
Sharing Components in a Deep Architecture
New theoretical result
The Mirage of Convexity
A Myth is Being Debunked: Local Minima in Neural Nets
Saddle Points
Saddle Points During Training
Low Index Critical Points
Saddle-Free Optimization
Other Priors That Work with Deep Distributed Representations
How do humans generalize from very few examples?
Sharing Statistical Strength by Semi-Supervised Learning
Multi-Task Learning
Google Image Search: Different object types represented in the same space
Maps Between Representations
Multi-Task Learning with Different Inputs for Different Tasks
Why Latent Factors & Unsupervised Representation Learning? Because of Causality
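The slides on neural word embeddings and "Analogical Representations for Free" refer to the well-known observation that relations such as king - man + woman ≈ queen emerge as directions in a learned embedding space. The snippet below is a minimal sketch of that analogy arithmetic, not code from the lecture: the embedding table, its toy 3-dimensional vectors, and the helper name analogy are all invented here purely for illustration.

```python
import numpy as np

# Toy, hand-made 3-d "embeddings" (invented for illustration only):
# dimension 0 ~ royalty, dimension 1 ~ gender, dimension 2 ~ plurality.
embeddings = {
    "king":  np.array([0.9,  0.9, 0.0]),
    "queen": np.array([0.9, -0.9, 0.0]),
    "man":   np.array([0.1,  0.9, 0.0]),
    "woman": np.array([0.1, -0.9, 0.0]),
    "kings": np.array([0.9,  0.9, 0.9]),
}

def analogy(a, b, c, table):
    """Return the word w (excluding a, b, c) whose vector is most
    cosine-similar to vec(b) - vec(a) + vec(c), i.e. 'a is to b as c is to w'."""
    target = table[b] - table[a] + table[c]
    target = target / np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in table.items():
        if word in (a, b, c):
            continue
        sim = float(vec @ target / np.linalg.norm(vec))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# "man is to king as woman is to ?" -> "queen" with these toy vectors
print(analogy("man", "king", "woman", embeddings))
```

With real embeddings trained on text (rather than the toy vectors above), the same vector arithmetic recovers many such analogies, which is the point the "for free" slide title is making.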