# Systems Optimization Laboratory

## LSLQ: Sparse Equations and Least Squares

• AUTHORS: Ron Estrin, Dominique Orban, Michael Saunders.
• CONTRIBUTORS:
• CONTENTS: Implementation of a conjugate-gradient-type method for solving sparse linear equations and sparse least-squares problems: \begin{align*} \text{Solve } & Ax=b \\ \text{or minimize } & \|Ax-b\|^2 \\ \text{or minimize } & \|Ax-b\|^2 + \lambda^2 \|x\|^2, \end{align*} where the matrix $$A$$ may be square or rectangular (over-determined or under-determined) and may have any rank. It is represented by routines for computing the products $$Av$$ and $$A^T u$$ for given vectors $$v$$ and $$u$$.

The scalar $$\lambda$$ is a damping parameter. If $$\lambda > 0$$, the solution is "regularized" in the sense that a unique solution always exists, and $$\|x\|$$ is bounded.
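As a tiny illustration (ours, not LSLQ itself), consider the $$2 \times 1$$ matrix $$A = \begin{bmatrix}1\\1\end{bmatrix}$$ and $$b = (1, 3)$$. The damped problem has the normal equations $$(A^T A + \lambda^2) x = A^T b$$, i.e. $$(2 + \lambda^2) x = 4$$, so the solution is $$x = 4/(2 + \lambda^2)$$: unique for every $$\lambda$$, and shrinking toward zero as $$\lambda$$ grows.

```python
# Damped least squares for A = [1; 1], b = [1, 3] (illustration only):
#   minimize ||Ax - b||^2 + lambda^2 ||x||^2
# Normal equations: (A^T A + lambda^2) x = A^T b, with A^T A = 2, A^T b = 4.

def damped_solution(lam):
    # Closed-form solution of the 1-variable damped normal equations.
    return 4.0 / (2.0 + lam ** 2)

for lam in [0.0, 1.0, 10.0]:
    print(lam, damped_solution(lam))
# lambda = 0 recovers the plain least-squares solution x = 2;
# larger lambda shrinks x toward 0, so ||x|| stays bounded.
```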

The method is based on the Golub-Kahan bidiagonalization process. It is algebraically equivalent to applying SYMMLQ to the normal equations $$(A^T A + \lambda^2 I) x = A^T b,$$ but has better numerical properties, especially if $$A$$ is ill-conditioned.
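The underlying Golub-Kahan process can be sketched in a few lines. This is our own minimal illustration, not the LSLQ implementation: it generates orthonormal vectors $$u_j$$, $$v_j$$ and scalars $$\alpha_j$$, $$\beta_j$$ from the standard recurrences, using a dense row-list matrix for simplicity (LSLQ itself only needs the products $$Av$$ and $$A^T u$$).

```python
import math

# Golub-Kahan bidiagonalization (illustrative sketch, not LSLQ):
#   beta_1 u_1 = b,                 alpha_1 v_1 = A^T u_1,
#   beta_{j+1} u_{j+1} = A v_j - alpha_j u_j,
#   alpha_{j+1} v_{j+1} = A^T u_{j+1} - beta_{j+1} v_j.
# Breakdown (a zero alpha or beta) is not handled here.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matvec_T(A, u):
    return [sum(A[i][j] * u[i] for i in range(len(A))) for j in range(len(A[0]))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def golub_kahan(A, b, k):
    beta = norm(b)
    u = [bi / beta for bi in b]
    w = matvec_T(A, u)
    alpha = norm(w)
    v = [wi / alpha for wi in w]
    U, V, alphas, betas = [u], [v], [alpha], [beta]
    for _ in range(k - 1):
        w = [wi - alpha * ui for wi, ui in zip(matvec(A, v), u)]
        beta = norm(w)
        u = [wi / beta for wi in w]
        w = [wi - beta * vi for wi, vi in zip(matvec_T(A, u), v)]
        alpha = norm(w)
        v = [wi / alpha for wi in w]
        U.append(u); V.append(v); alphas.append(alpha); betas.append(beta)
    return U, V, alphas, betas

A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
b = [1.0, 1.0, 1.0]
U, V, alphas, betas = golub_kahan(A, b, 2)
# In exact arithmetic the u's and v's are orthonormal:
print(dot(U[0], U[1]), dot(V[0], V[1]))
```

LSLQ works with the small bidiagonal matrix formed from the $$\alpha_j$$ and $$\beta_j$$, so each iteration touches $$A$$ only through one product with $$A$$ and one with $$A^T$$.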

LSLQ reduces the error $$\|x - x_k\|$$ monotonically, where $$x$$ is the exact solution and $$x_k$$ is the $$k$$th iterate.

If $$A$$ is symmetric, use SYMMLQ, MINRES, or MINRES-QLP.

If $$A$$ does not have full column rank and $$\lambda=0$$, the solution is not unique. LSLQ returns the solution of minimum length. Thus for consistent under-determined systems, it solves the problem $$\min \|x\| \text{ subject to } Ax=b$$. More generally, it solves the problem $$\min \|x\| \text{ subject to } A^T Ax=A^T b$$, where $$A$$ may have any shape or rank.
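One way to see the minimum-length solution concretely (our illustration, not LSLQ): for a rank-deficient $$A$$, solve the damped normal equations $$(A^T A + \lambda^2 I)x = A^T b$$ with a small $$\lambda$$, which selects the minimum-length solution in the limit $$\lambda \to 0$$.

```python
# Rank-deficient example (illustration only):
#   A = [[1, 1], [1, 1]],  b = [1, 3].
# The normal equations A^T A x = A^T b reduce to x1 + x2 = 2, which has
# infinitely many solutions; the minimum-length one is x = [1, 1].
# Damping with a small lambda singles it out.

def solve2(M, rhs):
    # Cramer's rule for a 2x2 system M x = rhs.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

AtA = [[2.0, 2.0], [2.0, 2.0]]   # A^T A (singular)
Atb = [4.0, 4.0]                 # A^T b

lam = 1e-6
M = [[AtA[0][0] + lam ** 2, AtA[0][1]],
     [AtA[1][0], AtA[1][1] + lam ** 2]]
x = solve2(M, Atb)
print(x)  # close to the minimum-length solution [1.0, 1.0]
```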

For $$\min \|x\| \text{ subject to } Ax=b$$, consider using CRAIG or LNLQ.

• REFERENCES:
R. Estrin, D. Orban, and M. A. Saunders, LSLQ: An iterative method for linear least-squares with an error minimization property, SIAM Journal on Matrix Analysis and Applications, 40(1):254-275, 2019.
• RELEASE:
2019: Julia version.
2019: Matlab version.