| Time | Talk |
|---|---|
| 10:00 - 11:00 | Tutorial: Ravi Kannan, Sampling in large matrices |
| 11:00 - 11:30 | Santosh Vempala, Related paper: Matrix approximation and projective clustering via volume sampling |
| 11:30 - 12:00 | Petros Drineas, Subspace sampling and relative error matrix approximation |
| 1:30 - 2:30 | Tutorial: Dianne O'Leary, Matrix factorizations for information retrieval |
| 2:30 - 3:00 | Pete Stewart, Sparse reduced rank approximations to sparse matrices |
| 3:00 - 3:30 | Haesun Park, Adaptive discriminant analysis by regularized minimum squared errors |
| 4:00 - 4:30 | Michael Mahoney, CUR matrix decompositions for improved data analysis |
| 4:30 - 5:00 | Daniel Spielman, Fast algorithms for graph partitioning, sparsifications, and solving SDD systems |
| 5:00 - 5:30 | Anna Gilbert/Martin Strauss, List decoding of noisy Reed-Muller-like codes |
| 5:30 - 6:00 | Bob Plemmons, Low-rank nonnegative factorizations for spectral imaging applications |
| 6:00 - 6:30 | Art Owen, A hybrid of multivariate regression and factor analysis |

| Time | Talk |
|---|---|
| 9:00 - 10:00 | Tutorial: Prabhakar Raghavan, The changing face of web search |
| 10:00 - 10:30 | Tong Zhang, Statistical ranking problem |
| 11:00 - 11:30 | Michael Berry, Text-mining approaches for email surveillance |
| 11:30 - 12:00 | Hongyuan Zha, Incorporating query difference for learning retrieval functions |
| 12:00 - 12:30 | Trevor Hastie/Ping Li, Efficient L2 and L1 dimension reduction in massive databases |
| 2:00 - 3:00 | Tutorial: Muthu Muthukrishnan, An algorithmicist's view of sparse approximation problems |
| 3:00 - 3:30 | Inderjit Dhillon, Kernel learning with Bregman matrix divergences |
| 3:30 - 4:00 | Bruce Hendrickson, Latent semantic analysis and Fiedler retrieval |
| 4:30 - 5:00 | Piotr Indyk, Near optimal hashing algorithms for approximate near(est) neighbor problem |
| 5:00 - 5:30 | Moses Charikar, Compact data representations and their applications |
| 5:30 - 6:00 | Sudipto Guha, At the confluence of streams: order, information, and signals |
| 6:00 - 6:30 | Frank McSherry, Preserving privacy in large-scale data analysis |

| Time | Talk |
|---|---|
| 9:00 - 10:00 | Tutorial: Dimitris Achlioptas, Applications of random matrices in spectral computations and machine learning |
| 10:00 - 10:30 | Tomaso Poggio, Learning: theory, engineering applications, and neuroscience |
| 11:00 - 11:30 | Stephen Smale, Related paper: Finding the homology of submanifolds with high confidence from random samples |
| 11:30 - 12:00 | Gunnar Carlsson, Algebraic topology and analysis of high dimensional data |
| 12:00 - 12:30 | Vin de Silva, Point-cloud topology via harmonic forms |
| 2:00 - 2:30 | Dan Boley, Fast clustering leads to fast support vector machine training and more |
| 2:30 - 3:00 | Chris Ding, On the equivalence of (semi-)nonnegative matrix factorization and k-means |
| 3:00 - 3:30 | Al Inselberg, Parallel coordinates: visualization & data mining for high dimensional datasets |
| 3:30 - 4:00 | Joel Tropp, One sketch for all: a sublinear approximation scheme for heavy hitters |
| 5:00 - 5:30 | Rob Tibshirani, Prediction by supervised principal components |
| 5:30 - 6:00 | Tao Yang/Apostolos Gerasoulis |

If you have problems downloading the slides, please contact David Gleich at dgleich@stanford.edu.

Updated July 6, 2006 at 10:11 PDT.