Project ideas (2018)
Robust deconvolution problems: in some silicon-based devices, fabrication is
done by applying light of varying frequencies through a given mask. This project
aims to develop numerical methods for efficiently and robustly finding a mask
that realizes a desired structure on a device. Come to John's office hours if
interested.
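The blurb only loosely sketches the physics; as a toy illustration (not the project's actual model), one can view mask design as a regularized 1-D deconvolution. The blur kernel psf, target pattern, and regularization weight below are all made up:

    # Toy 1-D sketch: exposure = convolution of mask with a known point-spread
    # function; recover a mask by Tikhonov-regularized least squares.
    import numpy as np
    from scipy.linalg import convolution_matrix

    n = 64
    psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)  # hypothetical blur kernel
    psf /= psf.sum()
    A = convolution_matrix(psf, n, mode="same")         # exposure = A @ mask
    target = np.zeros(n); target[20:40] = 1.0           # desired structure

    lam = 1e-2                                          # regularization weight
    mask = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ target)
    mask = np.clip(mask, 0.0, 1.0)                      # physical masks lie in [0, 1]
    print(np.linalg.norm(A @ mask - target))

A robust variant would replace the least-squares fit with a penalty that tolerates perturbations in psf, which is where the project's numerical work would begin.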
Hit-and-run should be fast and fun. Hit-and-run and other Markov chain Monte
Carlo (MCMC) procedures for sampling from convex bodies have seen substantial
theoretical development, but their practical upside in optimization has been
limited. This project will produce several reference implementations of
hit-and-run algorithms, as well as cutting-plane methods built on them, in an
attempt to make them practically viable.
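For concreteness, here is a minimal numpy sketch of one hit-and-run step on a bounded polytope {x : Ax <= b} (the box example and all parameter choices are illustrative, not part of the project statement):

    import numpy as np

    def hit_and_run_step(x, A, b, rng):
        """One hit-and-run step inside the bounded polytope {x : A @ x <= b}:
        pick a uniformly random direction, then a uniform point on the
        feasible chord through x."""
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        # Feasible segment: A @ (x + t * d) <= b, i.e., t * (A @ d) <= b - A @ x.
        Ad, slack = A @ d, b - A @ x
        t_max = np.min(slack[Ad > 0] / Ad[Ad > 0])  # boundedness => nonempty
        t_min = np.max(slack[Ad < 0] / Ad[Ad < 0])
        return x + rng.uniform(t_min, t_max) * d

    rng = np.random.default_rng(0)
    A = np.vstack([np.eye(2), -np.eye(2)])  # the box [-1, 1]^2
    b = np.ones(4)
    x = np.zeros(2)
    for _ in range(1000):
        x = hit_and_run_step(x, A, b, rng)

A cutting-plane method would wrap such a sampler: estimate a centroid from samples, cut the body with a subgradient inequality, and repeat.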
Operators for convex optimization. Convex optimization algorithms are built by
assuming certain small convex problems are easy to solve. For example,
(projected) subgradient descent assumes it is easy to solve quadratic problems
over the constraint set; ADMM assumes it is easy to minimize individual terms in
an objective (plus a quadratic); mirror descent assumes it is easy to minimize a
Bregman divergence plus a linear term; Frank-Wolfe (conditional gradient)
methods assume it is easy to minimize a linear function over the constraint set.
In this project, you will propose and investigate some alternative operator and
see what its effects are for optimization. One example might be the ability to
find, along any given direction, the closest point with higher function value
than the current iterate.
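To make the contrast concrete, here is a sketch of two of the oracles named above, both over the probability simplex (the set choice is ours, for illustration):

    import numpy as np

    def project_simplex(y):
        """Euclidean projection onto {x >= 0, sum(x) = 1}: the oracle used by
        projected (sub)gradient methods (sorting-based algorithm)."""
        u = np.sort(y)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(y) + 1) > css - 1)[0][-1]
        theta = (css[rho] - 1) / (rho + 1)
        return np.maximum(y - theta, 0)

    def linear_minimization_simplex(g):
        """Frank-Wolfe oracle: argmin over the simplex of <g, x> is a vertex."""
        x = np.zeros(len(g))
        x[np.argmin(g)] = 1.0
        return x

The project would add a third kind of oracle (e.g., the directional "closest higher point" query above) and study what algorithms it enables.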
Finding eigenvalues in truncated Krylov spaces. The Lanczos method is, in
theory, a very efficient recursion for computing the largest eigenvalues of a
high-dimensional matrix. However, in practice it suffers from numerical
instability. Moreover, compared to an identical number of power iterations, it
requires either twice as many matrix-vector products or substantially more
memory. In this project, you will explore a method that evolves a fixed-size
subspace of leading eigenvectors, interpolating between power iteration and
Lanczos, and hopefully combining the robustness of the former with the
efficiency of the latter. Contact Yair Carmon <yairc@stanford.edu> if
interested.
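One plausible starting point (our choice, not necessarily the method the project has in mind) is block power iteration on a fixed k-dimensional subspace, finished with a Rayleigh-Ritz step:

    import numpy as np

    def subspace_iteration(matvec_block, n, k, iters, rng):
        """Evolve a fixed k-dimensional subspace toward leading eigenvectors:
        repeated power steps with re-orthonormalization, then Rayleigh-Ritz."""
        V = np.linalg.qr(rng.standard_normal((n, k)))[0]
        for _ in range(iters):
            V = np.linalg.qr(matvec_block(V))[0]  # power step + re-orthonormalize
        H = V.T @ matvec_block(V)                 # k x k Rayleigh-Ritz matrix
        w, U = np.linalg.eigh((H + H.T) / 2)      # symmetrize for stability
        return w, V @ U                           # Ritz values / vectors

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100)); A = (A + A.T) / 2
    w, V = subspace_iteration(lambda X: A @ X, 100, 5, 200, rng)
    print(w[::-1])  # approximations to the five largest-magnitude eigenvalues

The interpolation toward Lanczos would come from enriching the fixed subspace with additional Krylov directions per step.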
“Convex Until Proven Guilty”. Implement a practical version of the algorithm
proposed here in an optimization framework such as Optim.jl or scipy.optimize.
To make it work, you will have to think carefully about adaptively picking
step-size parameters, e.g., via line search. Compare your implementation against
L-BFGS and nonlinear conjugate gradient on the problem of fitting Gaussian
processes, or any other large-scale nonconvex, smooth, non-stochastic
optimization problem. Contact Yair Carmon <yairc@stanford.edu> if interested.
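The step-size machinery the blurb alludes to would likely be built on something like Armijo backtracking; here is a generic sketch (not the paper's specific rule):

    import numpy as np

    def backtracking_line_search(f, grad, x, direction, t0=1.0, beta=0.5, c=1e-4):
        """Armijo backtracking: shrink t until sufficient decrease holds."""
        fx, g = f(x), grad(x)
        t = t0
        while f(x + t * direction) > fx + c * t * g @ direction:
            t *= beta
            if t < 1e-12:  # give up on pathologically small steps
                break
        return t

    # Usage on a toy quadratic:
    f = lambda x: 0.5 * x @ x
    grad = lambda x: x
    x = np.ones(3)
    t = backtracking_line_search(f, grad, x, -grad(x))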
Compressive sensing using generative models and sparsity assumptions. The goal
of compressive sensing is to recover a high-dimensional signal from far fewer
linear measurements. Typically, structural assumptions based on sparsity are
imposed to guarantee unique recovery via convex optimization (LASSO). Recent
work has replaced such rigid assumptions with a generative-modeling prior,
resulting in a nonconvex, differentiable recovery objective that has good formal
guarantees and works well empirically. Here, we will investigate (stochastic)
optimization algorithms for a variant of the above approaches that recovers
signals deviating in sparse directions from the range of a differentiable
generative model. Contact Aditya Grover <adityag@cs.stanford.edu> if interested.
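As a minimal sketch of the sparse-deviation objective, take a linear stand-in for the generator G (a real project would use a trained network) and run proximal gradient jointly on the latent code z and the deviation nu; all dimensions and weights below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 200, 50, 10
    G = rng.standard_normal((n, k)) / np.sqrt(n)  # stand-in linear "generator"
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix
    x_true = G @ rng.standard_normal(k)
    x_true[:3] += 1.0                             # sparse deviation off range(G)
    y = A @ x_true

    # Proximal gradient on f(z, nu) = 0.5 ||A (G z + nu) - y||^2 + lam ||nu||_1
    z, nu, lam, step = np.zeros(k), np.zeros(n), 0.05, 0.05
    for _ in range(5000):
        r = A @ (G @ z + nu) - y
        z -= step * (G.T @ (A.T @ r))
        nu -= step * (A.T @ r)
        nu = np.sign(nu) * np.maximum(np.abs(nu) - step * lam, 0)  # soft-threshold
    print(np.linalg.norm(G @ z + nu - x_true) / np.linalg.norm(x_true))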
Project ideas (2015; some of these may still be unsolved)
Factoring out common functionality of cvxpy and Convex.jl: The goal of this
project is to create an LLVM for CVX-like systems, i.e., a C/C++ library that
factors out the low-level operations that all such systems perform. The library
would make it much easier to develop new CVX-like systems and could also provide
novel functionality such as algebraic simplification of optimization problems.
Contact Steven Diamond <stevend2@stanford.edu> if interested.
Optimization over the union of convex sets (i.e., optimization over a
disjunction): even though NP-hard in general, these “disjunctive programs” can
be converted into mixed-integer conic programs (MICPs) and solved using MILP
and MISOCP solvers. Contact Nick Moehle <moehle@stanford.edu> and Steven Diamond
<stevend2@stanford.edu> if interested.
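A minimal big-M sketch of the conversion, in cvxpy with hypothetical sets (minimize over the union of the intervals [0, 1] and [3, 4]):

    import cvxpy as cp

    x = cp.Variable()
    z = cp.Variable(boolean=True)  # z = 0 picks the first set, z = 1 the second
    M = 10.0                       # big-M constant; must bound the sets' extent
    constraints = [
        x >= 0 - M * z,       x <= 1 + M * z,        # active when z = 0
        x >= 3 - M * (1 - z), x <= 4 + M * (1 - z),  # active when z = 1
    ]
    prob = cp.Problem(cp.Minimize(cp.square(x - 2.5)), constraints)
    prob.solve()  # needs a mixed-integer-capable solver installed
    print(x.value, z.value)

With one Boolean per convex set, the same pattern scales to unions of arbitrarily many conic-representable sets, which is what makes MICP solvers applicable.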
POGS-related project: Add CHOLMOD (both CPU and GPU versions) as a solver for
the linear system. This would make POGS much more applicable in typical
cone-problem settings, where large sparse systems arise. At the moment POGS only
supports an indirect solver for sparse matrices. Contact Chris Fougner
<fougner@stanford.edu> and Baris Ungun <ungun@stanford.edu> if interested.
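The payoff of a direct solver is factor-once, solve-many; here is a sketch of that pattern using SciPy's SuperLU as a stand-in for CHOLMOD (the matrix below is synthetic, not a POGS system):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import factorized

    n = 1000
    A = sp.random(n, n, density=0.01, random_state=0)
    K = (A.T @ A + sp.eye(n)).tocsc()  # SPD system of the kind ADMM solvers face
    solve = factorized(K)              # factorization computed once
    for _ in range(100):               # each ADMM iteration reuses the factor
        b = np.random.randn(n)
        x = solve(b)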
POGS wrapper for Python and Julia. There is already a flat C interface, so
creating wrappers for those two languages should be pure Python or Julia,
respectively. This would allow people (e.g., Baris) to specify problems in about
5-10 lines of Julia/Python versus 50-60 lines of C (not to mention the
data-handling overhead). Specifying a lasso problem using the MATLAB wrapper
takes 5 lines of code. Also create a cone interface for POGS. Contact Chris
Fougner <fougner@stanford.edu> and Baris Ungun <ungun@stanford.edu> if
interested.
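One natural route for the Python side is ctypes over the flat C interface. The sketch below is purely illustrative: the symbol name pogs_solve and its signature are hypothetical, not POGS's actual API.

    import ctypes
    import numpy as np

    lib = ctypes.CDLL("libpogs.so")        # assumes a built POGS shared library
    lib.pogs_solve.restype = ctypes.c_int  # hypothetical entry point
    lib.pogs_solve.argtypes = [
        ctypes.c_int,                              # problem dimension
        np.ctypeslib.ndpointer(dtype=np.float64),  # matrix data (row-major)
        np.ctypeslib.ndpointer(dtype=np.float64),  # solution output buffer
    ]

    A = np.random.randn(10 * 10)
    x = np.zeros(10)
    status = lib.pogs_solve(10, A, x)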
Implement a number of parallel least-squares solvers in a language or framework
of your choice (e.g., C, Julia, Elemental) and compare their performance on
shared- or distributed-memory architectures. Examples of parallel least-squares
solvers include conjugate gradients or LSQR with parallel matrix-vector
multiplies; a randomized Kaczmarz algorithm; an asynchronous randomized Kaczmarz
algorithm; or stochastic dual coordinate ascent (SDCA). Contact Madeleine Udell
<udell@stanford.edu> if interested.
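As a serial baseline for one of the listed methods, here is a minimal randomized Kaczmarz sketch (rows sampled proportionally to their squared norms); the asynchronous parallel variant runs these per-row projections concurrently:

    import numpy as np

    def randomized_kaczmarz(A, b, iters, rng):
        """Randomized Kaczmarz: project the iterate onto one randomly chosen
        equation <A[i], x> = b[i] at a time."""
        x = np.zeros(A.shape[1])
        row_norms = np.einsum("ij,ij->i", A, A)
        probs = row_norms / row_norms.sum()
        for _ in range(iters):
            i = rng.choice(A.shape[0], p=probs)
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 50))
    b = A @ rng.standard_normal(50)  # consistent system
    x = randomized_kaczmarz(A, b, 5000, rng)
    print(np.linalg.norm(A @ x - b))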
