A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
Mathematical Programming, Series B, 69(1):205-236, 1995.
We describe a potential reduction method for convex optimization problems
involving matrix inequalities. The method is based on the theory developed by
Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear
programming. A worst-case analysis shows that the number of iterations grows as
the square root of the problem size, but in practice it appears to grow more
slowly. As in other interior-point methods, the overall computational effort is
therefore dominated by the least-squares system that must be solved in each
iteration. A type of conjugate-gradient algorithm can be used for this purpose,
which results in important savings for two reasons. First, it allows us to take
advantage of the special structure the problems often have (e.g., Lyapunov or
algebraic Riccati inequalities). Second, we show that the polynomial bound on
the number of iterations remains valid even if the conjugate-gradient algorithm
is not run until completion, which in practice can greatly reduce the
computational effort per iteration. We describe in detail how the algorithm
works for optimization problems with L Lyapunov inequalities, each of size
m. We prove an overall worst-case operation count of O(m^{5.5}L^{1.5}). The
average case complexity appears to be closer to O(m^4L^{1.5}). This estimate
is justified by extensive numerical experimentation, and is consistent with
other researchers’ experience with the practical performance of interior-point
algorithms for linear programming. This result means that the computational
cost of extending current control theory, which is based on the solution of
Lyapunov or Riccati equations, to a theory based on the solution of (multiple,
coupled) Lyapunov or Riccati inequalities is modest.
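For context, the square-root iteration growth mentioned above comes from the structure of the potential function. As a sketch in generic semidefinite-programming notation (not necessarily the paper's exact definition), a Tanabe-Todd-Ye-type primal-dual potential for m-by-m matrix variables X > 0, Z > 0 is

\[
\varphi(X,Z) \;=\; q \,\log \operatorname{Tr}(XZ) \;-\; \log\det X \;-\; \log\det Z,
\qquad q = m + \nu\sqrt{m},\ \nu \ge 1 .
\]

Since \(m \log \operatorname{Tr}(XZ) - \log\det(XZ) \ge m \log m\) by the arithmetic-geometric mean inequality, \(\varphi \to -\infty\) forces the duality gap \(\operatorname{Tr}(XZ)\) to zero; reducing \(\varphi\) by at least an absolute constant per iteration then yields an \(O(\sqrt{m}\,\log(1/\varepsilon))\) iteration bound.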
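The truncated, structure-exploiting conjugate-gradient idea can be illustrated on a single Lyapunov equation. In this sketch (an illustration of the general technique, not the paper's algorithm; the names `lyap_op` and `cg_lyapunov` are hypothetical), the operator X -> A X + X A^T is applied matrix-free at O(m^3) cost per product instead of forming the m^2-by-m^2 Kronecker coefficient matrix, and a `maxiter` cap allows the solve to be stopped before completion, as the abstract discusses.

```python
import numpy as np

def lyap_op(A, X):
    # Apply the Lyapunov operator L(X) = A X + X A^T without forming
    # the m^2-by-m^2 Kronecker matrix explicitly.
    return A @ X + X @ A.T

def cg_lyapunov(A, C, maxiter=100, tol=1e-10):
    # Matrix-free conjugate gradients for A X + X A^T = C.
    # Assumes A is symmetric positive definite, which makes the Lyapunov
    # operator self-adjoint and positive definite on symmetric matrices.
    # maxiter caps the work per call, so the solve may be truncated
    # (inexact), as in the iteration bound discussed above.
    X = np.zeros_like(C)
    R = C - lyap_op(A, X)          # residual
    P = R.copy()                   # search direction
    rs = np.sum(R * R)
    for _ in range(maxiter):
        LP = lyap_op(A, P)
        alpha = rs / np.sum(P * LP)
        X = X + alpha * P
        R = R - alpha * LP
        rs_new = np.sum(R * R)
        if np.sqrt(rs_new) < tol:
            break
        P = R + (rs_new / rs) * P
        rs = rs_new
    return X

# Small demonstration on a random well-conditioned instance.
rng = np.random.default_rng(0)
m = 8
B = rng.standard_normal((m, m))
A = B @ B.T + m * np.eye(m)        # symmetric positive definite
C = rng.standard_normal((m, m))
C = C + C.T                        # symmetric right-hand side
X = cg_lyapunov(A, C)
res = np.linalg.norm(lyap_op(A, X) - C)
```

Because the symmetric matrices form an m(m+1)/2-dimensional space, exact CG would terminate in at most that many iterations; in practice a much smaller `maxiter` already gives a direction accurate enough for an interior-point step, which is the source of the per-iteration savings described in the abstract.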