Advances in Convex Optimization: Interior-point Methods, Cone Programming, and Applications

S. Boyd and L. Vandenberghe

Plenary lecture presented at IEEE Conference on Decision and Control, Las Vegas, December 2002.

In this talk I will give an overview of some major developments in convex optimization that have emerged over the last ten years or so. The basic idea is that convex problems are fundamentally tractable, in theory and in practice. The polynomial worst-case complexity results for linear programming have been extended to nonlinear convex optimization, and interior-point methods for nonlinear convex problems now achieve efficiencies approaching those of modern linear programming solvers. Moreover, special-purpose implementations for large-scale applications can exploit many different types of problem structure. Several new classes of standard convex optimization problems have emerged, including semidefinite programming and linear matrix inequalities, which are well known in control, as well as determinant maximization, second-order cone programming, and geometric programming, which are less well known. As with linear and quadratic programming, we have a fairly complete duality theory and very effective numerical methods for these problem classes. The list of new applications of convex optimization has been steadily expanding, in areas such as circuit design, signal processing, statistics, and communications. Convex optimization is also emerging as an important tool for hard, non-convex problems: convex relaxations provide a general approach to such problems, with applications in combinatorial optimization, analysis and design of nonlinear and uncertain systems, and robust optimization.
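To make the relaxation idea concrete, the sketch below formulates the standard semidefinite programming relaxation of the max-cut problem, one of the best-known examples of a convex relaxation of a hard combinatorial problem. The modeling tool (CVXPY), the small example graph, and the variable names are illustrative choices of the editor, not part of the talk itself.

```python
import cvxpy as cp
import numpy as np

# Weighted adjacency matrix of a small example graph (illustrative data only).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = W.shape[0]

# Max cut asks for x in {-1, +1}^n maximizing (1/4) * sum_ij W_ij (1 - x_i x_j).
# Writing X = x x^T, the objective is linear in X, diag(X) = 1, X is positive
# semidefinite, and rank(X) = 1.  Dropping the rank constraint gives a
# tractable semidefinite program whose optimal value upper-bounds the max cut.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]
objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))
prob = cp.Problem(objective, constraints)
prob.solve()

print("SDP upper bound on the maximum cut:", prob.value)
```

Solving the relaxation yields an upper bound on the true maximum cut; randomized rounding of the optimal X (as in Goemans–Williamson) then recovers good feasible cuts, illustrating how a convex problem can be used to attack a hard non-convex one.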