On Low-rank Trace Regression Under General Sampling Distribution

Publication
arXiv preprint arXiv:1904.08576

In this paper, we study trace regression when the matrix of parameters $\mathbf{B}^\star$ is estimated via the convex relaxation of a rank-penalized regression or via non-convex optimization. It is known that these estimators satisfy near-optimal error bounds under assumptions on the rank, coherence, or spikiness of $\mathbf{B}^\star$. We first introduce a general notion of spikiness for $\mathbf{B}^\star$ and prove non-asymptotic bounds on the estimation error. Our approach relies on a generic recipe for proving restricted strong convexity of the sampling operator of the trace regression model. Second, we prove similar error bounds when the regularization parameter is chosen via cross-validation. This result is significant in that existing theory on cross-validated estimators (Kale and Vassilvitskii, 2011; Kumar et al., 2013; Abou-Moustafa and Szepesvari, 2017) does not apply to our setting, since our estimators are not known to satisfy their required notion of stability. Third, we study applications of our general results to four sub-problems: (1) matrix completion, (2) multi-task learning, (3) compressed sensing with Gaussian ensembles, and (4) compressed sensing with factored measurements. For (1), (3), and (4) we recover error bounds matching those found in the literature, and for (2) we obtain (to the best of our knowledge) the first such error bound.
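To make the setting concrete: in trace regression one observes $y_i = \langle \mathbf{X}_i, \mathbf{B}^\star \rangle + \varepsilon_i$ and the convex relaxation replaces the rank penalty with the nuclear norm. Below is a minimal NumPy sketch of this nuclear-norm penalized least-squares estimator, solved by proximal gradient descent with singular value thresholding. It is an illustration of the standard estimator studied in this line of work, not the paper's code; the function name, step-size heuristic, and toy data are our own assumptions.

```python
import numpy as np

def trace_regression_nuclear_norm(X, y, lam, n_iter=500):
    """Estimate B in y_i ~ <X_i, B> + noise by nuclear-norm penalized
    least squares, via proximal gradient descent (singular value
    thresholding). X has shape (n, d1, d2); y has shape (n,)."""
    n, d1, d2 = X.shape
    # Conservative step size: sum_i ||X_i||_F^2 / n upper-bounds the
    # Lipschitz constant of the least-squares gradient.
    step = n / np.sum(X ** 2)
    B = np.zeros((d1, d2))
    for _ in range(n_iter):
        # Gradient of (1/2n) * sum_i (<X_i, B> - y_i)^2 with respect to B.
        resid = np.einsum("ijk,jk->i", X, B) - y
        grad = np.einsum("i,ijk->jk", resid, X) / n
        # Proximal step: soft-threshold the singular values.
        U, s, Vt = np.linalg.svd(B - step * grad, full_matrices=False)
        s = np.maximum(s - step * lam, 0.0)
        B = (U * s) @ Vt
    return B

# Toy usage: recover a rank-2 matrix from random Gaussian measurements,
# matching the compressed-sensing-with-Gaussian-ensembles sub-problem.
rng = np.random.default_rng(0)
d1, d2, r, n = 15, 15, 2, 400
B_star = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
X = rng.standard_normal((n, d1, d2))
y = np.einsum("ijk,jk->i", X, B_star) + 0.1 * rng.standard_normal(n)
B_hat = trace_regression_nuclear_norm(X, y, lam=0.1)
print("relative error:", np.linalg.norm(B_hat - B_star) / np.linalg.norm(B_star))
```

In practice the regularization parameter `lam` would be selected by cross-validation over a grid, which is exactly the regime the paper's second contribution covers.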

Related