Large Scale Matrix Analysis and Inference
A NIPS 2013 Workshop
Monday, December 9, 2013
Lake Tahoe, Nevada, United States
Reza Bosagh Zadeh | Gunnar Carlsson | Wouter M. Koolen | Michael Mahoney | Manfred Warmuth
Much of machine learning is based on linear algebra. Often, the prediction is a function of a dot product between the parameter vector and the feature vector, which implicitly assumes that the features contribute independently. In contrast, matrix parameters can be used to learn interrelations between features: the (i,j)th entry of the parameter matrix represents how feature i is related to feature j.
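The contrast above can be sketched in a few lines of NumPy (a minimal illustration with made-up data, not tied to any particular method): a vector parameter scores each feature on its own, while a matrix parameter weights every pair of features.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # feature vector (hypothetical data)

# Vector parameter: one weight per feature, scored independently.
w = rng.normal(size=3)
linear_pred = w @ x           # dot product: sum_i w_i * x_i

# Matrix parameter: entry (i, j) weights the interaction of features i and j.
W = rng.normal(size=(3, 3))
bilinear_pred = x @ W @ x     # bilinear form: sum_{i,j} W_ij * x_i * x_j

# The bilinear form is exactly the double sum over feature pairs.
check = sum(W[i, j] * x[i] * x[j] for i in range(3) for j in range(3))
assert np.isclose(bilinear_pred, check)
```

With a diagonal W the bilinear form reduces to independent per-feature terms; the off-diagonal entries are what encode the feature interrelations.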
This richer modeling has become very popular. In some applications, like PCA and collaborative filtering, the explicit goal is inference of a matrix parameter. In others, like dictionary learning and topic modeling, the matrix parameter instead arises within the algorithms as the natural tool for representing uncertainty.
The emergence of large matrices in many applications has brought with it a slew of new algorithms and tools. Over the past few years, matrix analysis and numerical linear algebra on large matrices have become thriving fields. Manipulating such large matrices also makes it necessary to think about computer-systems issues.
This workshop aims to bring together researchers in large scale machine learning and large scale numerical linear algebra to foster cross-talk between the two fields. The goals are to encourage machine learning researchers to work on numerical linear algebra problems, to inform them about new developments in large scale matrix analysis, and to identify unique challenges and opportunities. The workshop will conclude with a session of contributed posters.
Questions of Interest
Goals of the Workshop
This workshop will consist of invited talks and paper submissions for a poster session. The target audience of this workshop includes industry and academic researchers interested in machine learning, large distributed systems, numerical linear algebra, and related fields.
Invited Talks
Nathan Srebro (TTI): An overview of matrix problems: PCA, CCA, PLS, matrix regularization, main update families [slides]
Satyen Kale (Yahoo Labs): Near-Optimal Algorithms for Online Matrix Prediction [slides]
Ben Recht (Wisconsin): Large scale matrix algorithms and matrix completion [slides]
David Woodruff (IBM): Sketching as a Tool for Numerical Linear Algebra [slides]
Yiannis Koutis (Puerto Rico-Rio Piedras): Spectral sparsification of graphs: an overview of theory and practical methods [slides]
Malik Magdon-Ismail (RPI): Efficiently implementing sparsity in learning [slides]
Ashish Goel (Stanford): Primal-Dual Graph Algorithms for MapReduce [slides]
Matei Zaharia (MIT): Large-scale matrix operations using a data-flow engine [slides]
Manfred Warmuth (UCSC): Large Scale Matrix Analysis and Inference [slides]
Schedule
7.30-8.00 Overview by organizers
8.00-8.30 Invited Talk: Nathan Srebro
8.30-9.00 Invited Talk: Satyen Kale
9.00-9.30 COFFEE BREAK
9.30-10.00 Invited Talk: Ben Recht
10.00-10.30 Invited Talk: David Woodruff
15.30-16.00 Invited Talk: Yiannis Koutis
16.00-16.30 Invited Talk: Malik Magdon-Ismail
16.30-17.00 COFFEE BREAK
17.00-17.30 Invited Talk: Ashish Goel
17.30-18.00 Invited Talk: Matei Zaharia
18.00- Poster Session
Accepted Papers
The following papers will be presented at the poster session.
Linear Bandits, Matrix Completion, and Recommendation Systems [pdf]
Efficient coordinate-descent for orthogonal matrices through Givens rotations [pdf]
Improved Greedy Algorithms for Sparse Approximation of a Matrix in terms of Another Matrix [pdf]
Preconditioned Krylov solvers for kernel regression [pdf]
Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms [pdf][supplementary]
Dimension Independent Matrix Square using MapReduce [pdf]