Projected Subgradient Methods for Learning Sparse Gaussians

John Duchi, Stephen Gould, and Daphne Koller

Conference on Uncertainty in Artificial Intelligence (UAI 2008)

Gaussian Markov random fields (GMRFs) are useful in a broad range of applications. In this paper we tackle the problem of learning a sparse GMRF in a high-dimensional space. Our approach uses the L1-norm as a regularization on the inverse covariance matrix. We use a novel projected gradient method, which is faster than previous methods both in practice and in asymptotic complexity. We also extend the L1-regularized objective to the problem of sparsifying entire blocks within the inverse covariance matrix; our methods extend naturally to this case, whereas other methods do not. We demonstrate that our extensions yield better generalization performance in two real domains: biological network analysis and a 2D-shape modeling image task.
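For context, the L1-regularized estimation problem referred to above is commonly written in the following penalized maximum-likelihood form; this is a sketch of the standard formulation from this line of work, and the notation (sample covariance S, precision matrix X, regularization weight lambda) is assumed here rather than taken from the abstract:

\[
\max_{X \succ 0} \;\; \log\det X \;-\; \operatorname{tr}(S X) \;-\; \lambda \, \|X\|_1 ,
\qquad \text{where } \|X\|_1 = \sum_{i,j} |X_{ij}| .
\]

The log-determinant and trace terms form the Gaussian log-likelihood, while the L1 penalty encourages zeros in the inverse covariance, which correspond to missing edges in the GMRF.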