Large Scale Matrix Analysis and Inference
A NIPS 2013 Workshop
Monday, December 9th, 2013, Lake Tahoe, Nevada, United States
Organizers: Reza Bosagh Zadeh, Gunnar Carlsson, Wouter M. Koolen, Michael Mahoney, Manfred Warmuth
Much of machine learning is based on linear algebra. Often, the prediction is a function of a dot product between the parameter vector and the feature vector, which essentially assumes some kind of independence between the features. In contrast, matrix parameters can be used to learn interrelations between features: the (i,j)th entry of the parameter matrix represents how feature i is related to feature j. This richer modeling has become very popular. In some applications, like PCA and collaborative filtering, the explicit goal is inference of a matrix parameter. In others, like dictionary learning and topic modeling, the matrix parameter instead appears in the algorithms as the natural tool to represent uncertainty.

The emergence of large matrices in many applications has brought with it a slew of new algorithms and tools. Over the past few years, matrix analysis and numerical linear algebra on large matrices have become a thriving field. Manipulating such large matrices also makes it necessary to think about computer systems issues.

This workshop aims to bring together researchers in large scale machine learning and large scale numerical linear algebra to foster crosstalk between the two fields. The goals are to encourage machine learning researchers to work on numerical linear algebra problems, to inform machine learning researchers about new developments in large scale matrix analysis, and to identify unique challenges and opportunities. The workshop will conclude with a session of contributed posters.
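The contrast between a vector parameter and a matrix parameter can be made concrete with a small sketch. This is an illustrative example, not from the workshop materials: a linear model scores features independently via a dot product, while a bilinear model with a matrix parameter W scores every pair of features, with W[i, j] weighting the interaction between feature i and feature j.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)        # feature vector
w = rng.normal(size=4)        # vector parameter (linear model)
W = rng.normal(size=(4, 4))   # matrix parameter: W[i, j] relates feature i to feature j

# Linear model: a dot product, each feature contributes independently.
y_linear = w @ x

# Bilinear model: the matrix parameter scores every pair of features.
y_bilinear = x @ W @ x

# The bilinear score is exactly the sum over all feature-pair interactions.
pairwise = sum(W[i, j] * x[i] * x[j] for i in range(4) for j in range(4))
assert np.isclose(y_bilinear, pairwise)
```

The quadratic number of entries in W is what makes the bilinear model richer, and also what makes large-scale matrix inference computationally demanding.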
Goals of the Workshop
This workshop will consist of invited talks and paper submissions for a poster session. The target audience of this workshop includes industry and academic researchers interested in machine learning, large distributed systems, numerical linear algebra, and related fields.
Talks
Nathan Srebro (TTI): An overview of matrix problems. PCA, CCA, PLS, matrix regularization, main update families [slides]
Satyen Kale (Yahoo Labs): Near-Optimal Algorithms for Online Matrix Prediction [slides]
Ben Recht (Wisconsin): Large scale matrix algorithms and matrix completion [slides]
David Woodruff (IBM): Sketching as a Tool for Numerical Linear Algebra [slides]
Yiannis Koutis (Puerto Rico, Rio Piedras): Spectral sparsification of graphs: an overview of theory and practical methods [slides]
Malik Magdon-Ismail (RPI): Efficiently implementing sparsity in learning [slides]
Ashish Goel (Stanford): Primal-Dual Graph Algorithms for MapReduce [slides]
Matei Zaharia (MIT): Large-scale matrix operations using a dataflow engine [slides]
Manfred Warmuth (UCSC): Large Scale Matrix Analysis and Inference [slides]

Schedule
7.30–8.00 Overview by organizers
8.00–8.30 Invited Talk: Nathan Srebro
8.30–9.00 Invited Talk: Satyen Kale
9.00–9.30 COFFEE BREAK
9.30–10.00 Invited Talk: Ben Recht
10.00–10.30 Invited Talk: David Woodruff
15.30–16.00 Invited Talk: Yiannis Koutis
16.00–16.30 Invited Talk: Malik Magdon-Ismail
16.30–17.00 COFFEE BREAK
17.00–17.30 Invited Talk: Ashish Goel
17.30–18.00 Invited Talk: Matei Zaharia
18.00 Poster Session

Accepted Papers
The following papers will be presented at the poster session.
Linear Bandits, Matrix Completion, and Recommendation Systems [pdf]
Efficient coordinate-descent for orthogonal matrices through Givens rotations [pdf]
Improved Greedy Algorithms for Sparse Approximation of a Matrix in terms of Another Matrix [pdf]
Preconditioned Krylov solvers for kernel regression [pdf]
Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms [pdf] [supplementary]
Dimension Independent Matrix Square using MapReduce [pdf]
