03-Convex-Optimization-Problems

This is the third notebook of the series going through Convex Optimization. The topics follow the MOOC Convex Optimization course by Stephen Boyd at Stanford.

Convex Optimization in standard form

A convex optimization problem is one of the form:

$$\begin{aligned} \text{minimize} \quad & f_0(x)\\ \text{subject to} \quad & f_i(x)\leq 0, \quad i=1,\dots,m\\ & h_i(x)=0, \quad i=1,\dots,p \end{aligned}$$

where $f_0,\dots,f_m$ are convex functions and the equality constraint functions $h_1,\dots,h_p$ are affine.

• the feasible set of a convex optimization problem is convex: it is the intersection of the domain $D=\bigcap_{i=0}^m \textbf{dom }f_i \cap \bigcap_{j=1}^p \textbf{dom }h_j$ with the convex sublevel sets $\{x \mid f_i(x)\leq 0\}$ and the affine sets $\{x \mid h_j(x)=0\}$

Local and global optima

For any convex optimization problem, any locally optimal point is also (globally) optimal.

Suppose $x$ is locally optimal, i.e. $x$ is feasible and

$f_0(x)=\text{inf }\{f_0(z)|z \text{ feasible }, \parallel z-x \parallel_2 \leq R\}$ for some $R>0$. Suppose $x$ is not globally optimal. Then there is a feasible $y$ such that $f_0(y)<f_0(x)$; necessarily $\parallel y-x\parallel_2>R$, since otherwise $y$ would already contradict local optimality. Now consider the point $z=(1-\theta)x+\theta y$ with $\theta=\frac{R}{2\parallel y-x\parallel_2}$, so that $\parallel z-x\parallel_2=R/2<R$; $z$ is feasible because the feasible set is convex.

By convexity of $f_0$ we have $f_0(z)\leq (1-\theta)f_0(x)+\theta f_0(y)<f_0(x)$, which contradicts the local optimality of $x$. Therefore there exists no feasible $y$ with $f_0(y)<f_0(x)$.

Optimality criterion for differentiable $f_0$

Suppose $f_0$ in a convex optimization problem is differentiable; then by convexity, for all $x,y \in \textbf{dom } f_0$,

$f_0(y)\geq f_0(x)+\nabla f_0(x)^T(y-x)$

Let $X$ denote the feasible set,

$X=\{x \vert f_i(x) \leq 0,i=1,\dots,m,h_i(x)=0,i=1,\dots,p\}$

Then $x$ is optimal iff $x\in X$ and:

$\nabla f_0(x)^T(y-x)\geq 0 \text{ for all } y\in X$.
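As a quick numerical illustration (a sketch, not from the original notebook): for $f_0(x)=\parallel x-a\parallel_2^2$ minimized over the box $[0,1]^n$, the optimal point is the projection of $a$ onto the box, and the criterion above can be verified by sampling feasible points.

```python
import numpy as np

# Sketch: verify the first-order optimality criterion numerically.
# Minimize f0(x) = ||x - a||^2 over the box X = [0, 1]^n.
# The optimizer is the Euclidean projection of a onto the box.
rng = np.random.default_rng(0)
n = 5
a = rng.normal(size=n) * 2          # some entries fall outside [0, 1]
x_star = np.clip(a, 0.0, 1.0)       # projection = optimal point
grad = 2 * (x_star - a)             # gradient of f0 at x_star

# The criterion: grad^T (y - x_star) >= 0 for every feasible y.
for _ in range(1000):
    y = rng.uniform(0.0, 1.0, size=n)   # random feasible point
    assert grad @ (y - x_star) >= -1e-9
```

Coordinate by coordinate: where $a$ lies inside the box the gradient vanishes, and where it is clipped the gradient points out of the box, so every term of the inner product is nonnegative.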

Problems with equality constraints only

For a problem with equality constraints only, minimize $f_0(x)$ subject to $Ax=b$, the optimality condition for a feasible $x$ becomes:

$\nabla f_0(x)^T(y-x)\geq 0 \text{ for all } y \text{ with } Ay=b$

Writing $y=x+v$ with $v\in \mathcal{N}(A)$, this says $\nabla f_0(x)^Tv\geq 0$ for all $v\in \mathcal{N}(A)$; a linear function that is nonnegative on a subspace must vanish on it, so $\nabla f_0(x)\perp \mathcal{N}(A)$, i.e. $\nabla f_0(x)\in \mathcal{R}(A^T)$. Equivalently:

$\nabla f_0(x)+A^T\nu=0 \text{ for some } \nu$

This derives the Lagrange multiplier optimality condition.
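A small numerical check of the Lagrange condition (a sketch, not from the original notebook), using the least-norm problem whose solution is known in closed form:

```python
import numpy as np

# Sketch: check grad f0(x*) + A^T nu = 0 for the equality-constrained problem
#   minimize ||x||_2^2  subject to  Ax = b,
# whose solution is x* = A^T (A A^T)^{-1} b.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 6))
b = rng.normal(size=3)

x_star = A.T @ np.linalg.solve(A @ A.T, b)
grad = 2 * x_star                          # gradient of ||x||^2 at x*
nu = -2 * np.linalg.solve(A @ A.T, b)      # the multiplier

assert np.allclose(A @ x_star, b)          # feasibility
assert np.allclose(grad + A.T @ nu, 0)     # stationarity of the Lagrangian
```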

Minimization over nonnegative orthant

For minimization over the nonnegative orthant, minimize $f_0(x)$ subject to $x \succeq 0$, the optimality condition for a feasible $x$ becomes:

$\nabla f_0(x)^T(y-x)\geq 0 \text{ for all } y \succeq 0$

which can be shown to hold exactly when:

$\nabla f_0(x)\succeq 0 \text{ and } x_i\,(\nabla f_0(x))_i=0,\ i=1,\dots,n$

(If some component of $\nabla f_0(x)$ were negative, letting the corresponding $y_i\to\infty$ would make the inner product arbitrarily negative; given $\nabla f_0(x)\succeq 0$, taking $y=0$ forces $\nabla f_0(x)^Tx\leq 0$, while nonnegativity of both vectors gives $\nabla f_0(x)^Tx\geq 0$, hence $\nabla f_0(x)^Tx=0$.) Because both $x$ and $\nabla f_0(x)$ are nonnegative in every element, the sparsity patterns of the two vectors must be complementary.
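A minimal numerical illustration of this complementarity (a sketch, not part of the original notebook), using $f_0(x)=\parallel x-a\parallel_2^2$, whose minimizer over the orthant is $\max(a,0)$:

```python
import numpy as np

# Sketch: minimize ||x - a||^2 over x >= 0.
# The solution is x* = max(a, 0); at x* the gradient is elementwise
# nonnegative and complementary to x*.
a = np.array([1.5, -2.0, 0.7, -0.3])
x_star = np.maximum(a, 0.0)
grad = 2 * (x_star - a)

assert np.all(grad >= -1e-12)          # grad f0(x*) >= 0
assert np.allclose(x_star * grad, 0)   # complementary sparsity patterns
```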

Equivalent convex problems

Two problems are (informally) equivalent if the solution of one is readily obtained from the solution of the other, and vice versa.

Eliminating equality constraints.

All equality constraints are of the form:

$Ax=b$

They can be eliminated by finding a particular solution $x_0$ of $Ax=b$ and a matrix $F$ whose range is the nullspace of $A$: every solution of $Ax=b$ can then be written $x=Fz+x_0$, and substituting this gives an equivalent problem in $z$ with no equality constraints.
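The elimination can be sketched numerically (not from the original notebook); here `scipy.linalg.null_space` supplies $F$ and a least-squares solve supplies $x_0$:

```python
import numpy as np
from scipy.linalg import null_space

# Sketch: parametrize {x | Ax = b} as x = F z + x0, where the columns
# of F span the nullspace of A and x0 is any particular solution.
rng = np.random.default_rng(2)
A = rng.normal(size=(2, 5))
b = rng.normal(size=2)

x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # a particular solution
F = null_space(A)                          # range(F) = nullspace(A)

# Every x = F z + x0 satisfies Ax = b, so the equality constraint
# disappears from the reduced problem in z.
z = rng.normal(size=F.shape[1])
assert np.allclose(A @ (F @ z + x0), b)
```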

Introducing equality constraints

The problem

$$\begin{aligned} \text{minimize} \quad & f_0(A_0x+b_0)\\ \text{subject to} \quad & f_i(A_ix+b_i)\leq 0, \quad i=1,\dots,m \end{aligned}$$

is equivalent to:

$$\begin{aligned} \text{minimize (over } x,y_i\text{)} \quad & f_0(y_0)\\ \text{subject to} \quad & f_i(y_i)\leq 0, \quad i=1,\dots,m\\ & y_i=A_ix+b_i, \quad i=0,\dots,m \end{aligned}$$

Introducing slack variables for linear inequalities

The problem

$$\begin{aligned} \text{minimize} \quad & f_0(x)\\ \text{subject to} \quad & a_i^Tx\leq b_i, \quad i=1,\dots,m \end{aligned}$$

is equivalent to:

$$\begin{aligned} \text{minimize (over } x,s\text{)} \quad & f_0(x)\\ \text{subject to} \quad & a_i^Tx+s_i=b_i, \quad i=1,\dots,m\\ & s_i\geq 0, \quad i=1,\dots,m \end{aligned}$$

Epigraph form

Any convex optimization problem can be converted to epigraph form:

$$\begin{aligned} \text{minimize (over } x,t\text{)} \quad & t\\ \text{subject to} \quad & f_0(x)-t\leq 0\\ & f_i(x)\leq 0, \quad i=1,\dots,m\\ & h_i(x)=0, \quad i=1,\dots,p \end{aligned}$$

therefore a linear objective is universal for convex optimization.

Minimizing over some variables

The problem

$$\begin{aligned} \text{minimize} \quad & f_0(x_1,x_2)\\ \text{subject to} \quad & f_i(x_1)\leq 0, \quad i=1,\dots,m \end{aligned}$$

is equivalent to:

$$\begin{aligned} \text{minimize} \quad & \tilde{f}_0(x_1)\\ \text{subject to} \quad & f_i(x_1)\leq 0, \quad i=1,\dots,m \end{aligned}$$

where $\tilde{f}_0(x_1)=\inf_{x_2} f_0(x_1,x_2)$.
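Partial minimization can be checked on a small example (a sketch, not from the original notebook): for $f_0(x_1,x_2)=x_1^2+x_1x_2+x_2^2$, setting $\partial f_0/\partial x_2=x_1+2x_2=0$ gives $\tilde{f}_0(x_1)=\frac{3}{4}x_1^2$, which is again convex.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: partial minimization preserves convexity.
# For f0(x1, x2) = x1^2 + x1*x2 + x2^2, minimizing over x2 gives
# tilde_f0(x1) = (3/4) x1^2.
def f0(x1, x2):
    return x1**2 + x1 * x2 + x2**2

def tilde_f0(x1):
    # numerically minimize over x2 for a fixed x1
    return minimize_scalar(lambda x2: f0(x1, x2)).fun

for x1 in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(tilde_f0(x1) - 0.75 * x1**2) < 1e-8
```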

Quasiconvex optimization

For these problems a feasible $x$ is optimal if:

$\nabla f_0(x)^T(y-x)>0 \text{ for all } y\in X\setminus\{x\}$

Notes:

• The condition is only sufficient for optimality.
• The condition requires $\nabla f_0(x)$ to be nonzero.

Linear Programming


C:
[[7]
[8]
[9]]
b:
[10  8]
G:
[[2 3 2]
[1 1 2]]
optimal point:
[ 2.00e+00]
[ 5.40e-07]
[ 3.00e+00]
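The code cell that produced the output above is missing. The printed data are consistent with maximizing $c^Tx$ subject to $Gx\preceq b$, $x\succeq 0$; a plausible reconstruction (an assumption, using `scipy.optimize.linprog` in place of the original cvxopt call) is:

```python
import numpy as np
from scipy.optimize import linprog

# Hedged reconstruction of the missing cell: the output above is
# consistent with  maximize c^T x  s.t.  G x <= b,  x >= 0.
# scipy's linprog minimizes, so we negate the objective.
c = np.array([7, 8, 9])
G = np.array([[2, 3, 2],
              [1, 1, 2]])
b = np.array([10, 8])

res = linprog(-c, A_ub=G, b_ub=b, bounds=(0, None))
print(res.x)   # approximately [2, 0, 3], matching the output above
```

Optimality is easy to confirm by hand: the dual vector $y=(2.5,\,2)$ is feasible for $G^Ty\succeq c$ and gives $b^Ty=41=c^Tx^\star$.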



Chebyshev center of polyhedron

The Chebyshev center of the polyhedron

$P=\{x \mid a_i^Tx\leq b_i,\ i=1,\dots,m\}$

is the center of the largest inscribed ball:

$B=\{x_c+u \mid \parallel u \parallel_2 \leq r\}$

We have $a_i^T x\leq b_i$ for all $x \in B$ iff $\text{sup }\{a_i^T(x_c+u) \mid \parallel u \parallel_2 \leq r\}=a_i^Tx_c+r \parallel a_i \parallel_2 \leq b_i$, hence $(x_c,r)$ is found by solving the LP:

$$\begin{aligned} \text{maximize} \quad & r\\ \text{subject to} \quad & a_i^Tx_c+r\parallel a_i \parallel_2\leq b_i, \quad i=1,\dots,m \end{aligned}$$


0.447213535807
[-9.50e-08]
[-6.87e-18]
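The cell that produced this output is missing; below is a sketch of the Chebyshev-center LP using `scipy.optimize.linprog`, with a hypothetical polyhedron (the unit square), not necessarily the one behind the numbers above:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the Chebyshev-center LP.  Variables are (x_c, r);
# maximize r subject to a_i^T x_c + ||a_i||_2 r <= b_i.
A = np.array([[1.0, 0.0],    #  x1 <= 1
              [-1.0, 0.0],   # -x1 <= 0
              [0.0, 1.0],    #  x2 <= 1
              [0.0, -1.0]])  # -x2 <= 0   -> the unit square
b = np.array([1.0, 0.0, 1.0, 0.0])
norms = np.linalg.norm(A, axis=1)

# minimize -r over (x_c, r)
res = linprog(c=[0, 0, -1],
              A_ub=np.hstack([A, norms[:, None]]),
              b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
x_c, r = res.x[:2], res.x[2]
print(x_c, r)   # center (0.5, 0.5), radius 0.5 for the unit square
```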


Quadratic Programming

A quadratic program (QP) has the form:

$$\begin{aligned} \text{minimize} \quad & \frac{1}{2}x^TPx+q^Tx+r\\ \text{subject to} \quad & Gx\preceq h\\ & Ax=b \end{aligned}$$

with $P \in S_{+}^n$, and therefore the objective is convex.

• Least squares, minimize $\parallel Ax-b\parallel_2^2$, is an example of a QP:


Optimal Point:
[ 4.92e+00]
[ 2.71e+00]
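The cell and its data are missing; as a sketch of the claim (with made-up data, not the original problem), least squares can be solved through the unconstrained QP stationarity condition $Px+q=0$ and compared with `numpy.linalg.lstsq`:

```python
import numpy as np

# Sketch: least squares, minimize ||A x - b||_2^2, is a QP with
# P = 2 A^T A, q = -2 A^T b and no constraints, so the optimum
# satisfies the normal equations A^T A x = A^T b.
rng = np.random.default_rng(3)
A = rng.normal(size=(8, 2))
b = rng.normal(size=8)

# Solve via the QP stationarity condition P x + q = 0:
P = 2 * A.T @ A
q = -2 * A.T @ b
x_qp = np.linalg.solve(P, -q)

# Agrees with the dedicated least-squares solver:
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x_qp, x_ls)
```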

