Scalable Training of L1-regularized Log-linear Models

Published on Jun 23, 2007 · 7664 Views

The L-BFGS limited-memory quasi-Newton method is the algorithm of choice for optimizing the parameters of large-scale log-linear models with L2 regularization, but it cannot be used for an L1-regularized loss, which is non-differentiable whenever some parameter is zero.
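
To make the setting concrete, here is a minimal sketch (not from the lecture) of the smooth L2-regularized case that L-BFGS handles well, written against scipy.optimize. The toy data, the regularization strength C, and the helper names are assumptions for illustration; the closing comment notes why the same recipe fails once the penalty becomes L1.

```python
# Sketch: L2-regularized logistic regression trained with L-BFGS.
# Data, C, and function names are illustrative assumptions, not the talk's code.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                       # toy design matrix
y = np.where(X @ rng.normal(size=50) > 0, 1.0, -1.0) # labels in {-1, +1}
C = 1.0                                              # assumed L2 strength

def l2_objective(w):
    """Logistic negative log-likelihood plus a smooth L2 penalty.

    Both terms are differentiable everywhere, so the exact gradient
    exists and L-BFGS applies directly.
    """
    margins = y * (X @ w)
    loss = np.sum(np.logaddexp(0.0, -margins))       # sum_i log(1 + e^{-m_i})
    grad = -X.T @ (y * expit(-margins))              # gradient of the loss
    return loss + 0.5 * C * (w @ w), grad + C * w

result = minimize(l2_objective, np.zeros(X.shape[1]),
                  jac=True, method="L-BFGS-B")
print("converged:", result.success, "objective:", result.fun)

# Replacing 0.5*C*||w||^2 with an L1 penalty C*||w||_1 breaks this recipe:
# |w_j| is non-differentiable at w_j = 0, exactly where sparse solutions
# live, so plain L-BFGS cannot be applied and a specialized method of the
# kind this talk addresses is needed.
```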
