The Australian National University

Generalized Conditional Gradient for Sparse Approximation

Xinhua Zhang (NICTA)

NICTA SML SEMINAR

DATE: 2012-10-18
TIME: 11:15:00 - 12:00:00
LOCATION: NICTA - 7 London Circuit

ABSTRACT:
Sparse learning models typically combine a smooth loss with a nonsmooth penalty that promotes sparse solutions (e.g., the trace norm). Although recent developments in sparse approximation have offered promising solution methods, current approaches either apply only to matrix-norm \emph{constrained} problems or provide suboptimal convergence rates. In this talk, we will first propose a boosting method for \emph{regularized} learning that guarantees $\epsilon$ accuracy within $O(1/\epsilon)$ iterations. Then we will show how this method can be generalized to the framework of "conditional gradient", which leads to a unified treatment of general sparsity-inducing penalties. In practice, we further accelerate performance by interlacing sparse approximation with fixed-rank local optimization, exploiting a simpler local objective than previous work. The resulting algorithm clearly outperforms state-of-the-art solvers (e.g., Nesterov's accelerated methods and other greedy methods) on large-scale problems such as collaborative filtering (MovieLens 10M) and multiclass image categorization on ImageNet.

Joint work with Yaoliang Yu and Dale Schuurmans at the Alberta Innovates Centre for Machine Learning. Recent NIPS paper: http://webdocs.cs.ualberta.ca/~xinhua2/papers/ZhaYuSch12.pdf
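To make the idea concrete, here is a minimal sketch (not the authors' code) of generalized conditional gradient applied to trace-norm regularized matrix completion, i.e. minimizing 0.5*||W_obs - y||^2 + lam*||W||_tr. The function name gcg_trace_norm, the open-loop step size, and the shrinkage rule for the new atom's scale are illustrative assumptions; the method described in the talk additionally interlaces fixed-rank local optimization, which this sketch omits.

```python
import numpy as np

def gcg_trace_norm(shape, obs, y, lam, iters=100):
    """Sketch of generalized conditional gradient for trace-norm
    regularized least squares on observed entries.

    shape: (m, n) of the matrix to recover
    obs:   tuple (row_indices, col_indices) of observed entries
    y:     observed values at those entries
    lam:   trace-norm regularization weight (assumed name)
    """
    m, n = shape
    W = np.zeros((m, n))
    for t in range(iters):
        # Gradient of the smooth loss; nonzero only on observed entries.
        G = np.zeros((m, n))
        G[obs] = W[obs] - y
        # Linear minimization oracle for the trace norm: the top
        # singular vector pair of -G yields the best rank-one atom.
        u, s, vt = np.linalg.svd(-G)
        atom = np.outer(u[:, 0], vt[0])
        # Open-loop step size, a common choice giving an O(1/t) rate.
        eta = 2.0 / (t + 2.0)
        # Scale of the new atom: shrink by lam so the penalty can drive
        # the step to zero (a crude proxy for the penalized line search
        # used in practice).
        theta = max(s[0] - lam, 0.0)
        W = (1.0 - eta) * W + eta * theta * atom
    return W
```

On large problems one would replace the full SVD with a routine that computes only the leading singular vector pair (e.g., power iteration), since the oracle needs nothing more.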
BIO:


