Systems Optimization Laboratory
Stanford, CA 94305-4121 USA

PDCO: Primal-Dual interior method for Convex Objectives
 AUTHOR:
Michael Saunders
CONTRIBUTORS: Byunggyoo Kim,
Chris Maes,
Santiago Akle,
Matt Zahr
 CONTENTS:
A primal-dual interior method for solving
linearly constrained optimization problems
with a convex objective function \( \phi(x) \) (preferably separable):
\begin{align*}
\text{minimize } & \phi(x) + \frac12 \|D_1x\|^2 + \frac12 \|r\|^2
\\ \text{subject to } & Ax + D_2r = b
\\ & l \le x \le u,
\end{align*}
where both \(x\) and \(r\) are variables.
The \(m \times n\) matrix \(A\) may be a Matlab sparse matrix
or a function handle for computing \(Ax\) and \(A^Ty\).
The positive-definite diagonal matrices \(D_1\), \(D_2\) provide
primal and dual regularization. \(D_2\) determines whether
each row of \(Ax \approx b\) should be satisfied accurately
or in the least-squares sense. (Each element of \(D_2\)
is typically in the range \([10^{-4},1]\).)
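Since \(D_2\) is nonsingular, \(r\) can be eliminated from the constraint, which makes the least-squares interpretation explicit: small elements of \(D_2\) penalize the corresponding residuals heavily (so those rows of \(Ax \approx b\) are satisfied accurately), while elements near 1 give a plain least-squares fit.
\begin{align*}
r = D_2^{-1}(b - Ax)
\quad\Longrightarrow\quad
\text{minimize } \phi(x) + \frac12 \|D_1x\|^2 + \frac12 \|D_2^{-1}(b - Ax)\|^2
\ \text{ subject to } l \le x \le u.
\end{align*}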
NNLS: Nonnegative least squares problems are specified via
\(D_2 = I\), \(l=0\), \(u=\infty\).
Set input parameters d2=1, bl=zeros(n,1), bu=inf(n,1).
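With \(D_2 = I\), \(l=0\), \(u=\infty\), \(\phi \equiv 0\), and \(D_1\) small, the problem above is essentially \(\min \frac12\|Ax-b\|^2\) subject to \(x \ge 0\). As an illustration of this target problem (a hypothetical small instance, solved with SciPy's reference NNLS routine rather than PDCO itself):

```python
# Illustrative only: the nonnegative least-squares problem
#   min ||Ax - b||_2  subject to  x >= 0,
# which PDCO targets via D2 = I, l = 0, u = inf (D1 small).
# Solved here with SciPy's reference routine, not PDCO.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)

x, rnorm = nnls(A, b)

assert np.all(x >= 0)                                  # feasibility
assert np.isclose(rnorm, np.linalg.norm(A @ x - b))    # rnorm is ||Ax - b||
```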
BP, BPDN, LASSO: The original Basis Pursuit and Basis Pursuit Denoising
research used an early version of PDCO, as described in [1] below.
The L1 norm \(\|x\|_1\) is implemented by setting \(x = u - v\) with
\(u,v \ge 0\). The linear constraints become
\( Au - Av + D_2r = b \), and the convex objective term is
\( \phi(x) = \lambda 1^T u + \lambda 1^T v \)
for some positive value of \(\lambda\). This is one form of LASSO
(Tibshirani 1996).
For BP, set \(D_1 = D_2 = 10^{-3} I\) say (by setting d1=1e-3, d2=1e-3).
For BPDN, set \(D_2 = I\) (by setting d1=1e-3, d2=1).
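The effect of the \(u,v\) split can be checked numerically: at an optimum, \(u\) and \(v\) have disjoint support, so \(u = \max(x,0)\), \(v = \max(-x,0)\) and \(1^T(u+v) = \|x\|_1\). A minimal sketch (illustrative data, not PDCO code):

```python
# Check that the nonnegative split x = u - v recovers the L1 norm:
# with u, v >= 0 and disjoint support, 1'(u + v) = ||x||_1.
import numpy as np

x = np.array([1.5, -2.0, 0.0, 3.0])
u = np.maximum(x, 0.0)    # positive part of x
v = np.maximum(-x, 0.0)   # negative part of x

assert np.allclose(u - v, x)                        # split reproduces x
assert np.all(u >= 0) and np.all(v >= 0)            # both parts nonnegative
assert np.isclose(np.sum(u + v), np.abs(x).sum())   # 1'(u+v) = ||x||_1
```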
 REFERENCES:
[1] S. S. Chen, D. L. Donoho, and M. A. Saunders.
Atomic decomposition by Basis Pursuit,
SIAM Review 43(1), 129-159 (2001).
[2] PDCO is documented in the MATLAB and LaTeX files below.
It uses established primal-dual technology, with choice
of direct or iterative method for computing search directions.
Special feature: Iterative (and inexact) computation
of search directions using LSMR, for the case where \(A\) is a function
(linear operator).
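When \(A\) is available only as a function handle, each search-direction subproblem is solved iteratively from products \(Ax\) and \(A^Ty\). The idea can be sketched in Python with SciPy's LSMR and a matrix-free operator (hypothetical data; PDCO itself is MATLAB, and its Method=3 applies LSMR inside the interior method rather than to a raw least-squares problem):

```python
# Matrix-free least-squares solve with LSMR: the operator is defined
# only through products Ax and A'y, the setting PDCO's iterative
# Method=3 is designed for.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # stands in for an implicit operator

A = LinearOperator(M.shape,
                   matvec=lambda x: M @ x,     # computes Ax
                   rmatvec=lambda y: M.T @ y)  # computes A'y
b = np.array([1.0, 0.0, -1.0])

x = lsmr(A, b, atol=1e-12, btol=1e-12)[0]
x_dense = np.linalg.lstsq(M, b, rcond=None)[0]
assert np.allclose(x, x_dense, atol=1e-8)      # agrees with dense least squares
```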
 RELEASE:
16 Oct 2002: First version, derived from PDSCO.
\(D_1\), \(D_2\) and general bounds implemented.
23 Sep 2003: Maximum entropy test problems included
(joint work with John Tomlin, IBM Almaden).
11 Feb 2005: Slight changes for MATLAB 7.0.
11 Feb 2005: Removed ENTROPY.big from zip file
(too big for those who don't want it). Link is below.
03 Apr 2010: Nonseparable objectives; function handles;
zero bounds treated directly.
19 Apr 2010: PDCO replaces PDSCO now that zero bounds are treated properly.
02 Jun 2011: Beware: the backtracking linesearch sometimes inhibits convergence.
28 Apr 2012: LSMR replaces LSQR (Method=3) for iterative computation of search directions.
28 Apr 2012: MINRES (Method=4) is a new option, although
LSMR (Method=3) should be somewhat better in general.
13 Nov 2013: LPnetlib.m added for running LP problems from
Tim Davis's UFL sparse matrix collection.
22 Nov 2013: Method=22 implemented: uses MA57 via ldl(K2) on SQD system.
MA57 makes use of multicores if available. Good for QP and
other convex problems with explicit sparse Hessians.
24 Nov 2013: pdco4: distribution files reorganized.
30 May 2015:
pdco is now a Git repository
(initialized with pdco4).
15 Jun 2018: pdco5 contributed by Aekaansh Verma (Mechanical Engineering, Stanford University), includes new Method options.
Method = 1,2,3,4 have always computed \(\Delta y\) before \(\Delta x\).
Method = 11,12,13,14 (new options) compute \(\Delta x\) before \(\Delta y\).
Sometimes these will be more efficient than 1,2,3,4 (e.g., if \(m > n\)).
See pdco5.zip below (not yet included in github).
 DOWNLOADS:
