Composite Objective Mirror Descent

John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Ambuj Tewari

To appear, Conference on Learning Theory (COLT 2010)

We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analyses and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as L1, mixed-norm, and trace-norm regularization.
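As a concrete illustration of the kind of composite update the method produces, the following is a minimal Python/NumPy sketch of a single step specialized to the squared-Euclidean Bregman divergence with an L1 regularizer, where the inner minimization has a soft-thresholding closed form (this specialization coincides with forward-backward splitting). The function name, step size, and toy least-squares data are illustrative assumptions, not taken from the paper.

import numpy as np

def comid_l1_step(x, grad, eta, lam):
    """One composite step with the squared-Euclidean Bregman divergence
    and an L1 regularizer lam * ||x||_1 (illustrative specialization only;
    the general method allows arbitrary Bregman divergences and regularizers).

    Solves  argmin_y  eta*<grad, y> + 0.5*||y - x||^2 + eta*lam*||y||_1,
    whose closed form is soft-thresholding of the plain gradient step.
    """
    z = x - eta * grad                                           # gradient step
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)   # shrink toward zero

# Hypothetical usage: one step on a least-squares loss f(x) = 0.5*||Ax - b||^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x = np.zeros(5)
grad = A.T @ (A @ x - b)
x = comid_l1_step(x, grad, eta=0.1, lam=0.5)

For other regularizers named in the abstract (mixed norm, trace norm), the same composite step applies with the corresponding proximal-style minimization in place of soft-thresholding.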