What, Why and How
• What is convex optimization?
• Why study convex optimization?
• How to study convex optimization?
Analytical Solution of Least-squares

f_0(x) = ||Ax - b||_2^2 = (Ax - b)^T (Ax - b)

Setting the gradient to zero,

∂f_0(x)/∂x = 2 A^T (Ax - b) = 0,

yields the solution

x = (A^T A)^{-1} A^T b
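The closed-form solution above can be checked numerically. A minimal sketch with NumPy, using made-up data (the matrix sizes and random seed are illustrative assumptions, not from the slides):

```python
import numpy as np

# Least-squares: minimize ||Ax - b||_2^2 on illustrative random data
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))   # 20 observations, 3 unknowns
b = rng.standard_normal(20)

# Analytical solution via the normal equations: x = (A^T A)^{-1} A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# In practice, a QR/SVD-based solver is preferred for numerical stability
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# Both routes give the same minimizer (up to floating-point error)
print(np.allclose(x_normal, x_lstsq))
```

Solving the normal equations directly is the textbook derivation; `np.linalg.lstsq` does the same job more robustly when A^T A is ill-conditioned.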
Solving Optimization Problems

[Diagram: the landscape of Mathematical Optimization — Convex Optimization (including Least-squares and LP) and Nonlinear Optimization]
Least-squares:
• Analytical solution
• Good algorithms and software
• High accuracy and high reliability
• Time complexity: kn^2

A mature technology!
LP:
• No analytical solution
• Algorithms and software
• Reliable and efficient
• Time complexity: mn^2

Also a mature technology!
Convex Optimization:
• No analytical solution
• Algorithms and software
• Reliable and efficient
• Time complexity (roughly): max{n^3, n^2 m, F}, where F is the cost of evaluating the objective and constraint functions

Almost a mature technology!
Nonlinear Optimization:
• Sadly, no effective general methods to solve
• Only approaches that involve some compromise
• Local optimization: "more art than technology"
• Global optimization: greatly compromised efficiency
• Help from convex optimization:
  1) Initialization
  2) Heuristics
  3) Bounds

Far from a technology! (something to avoid)
Why Study Convex Optimization

If not, ... there is little chance you can solve it.
-- Section 1.3.2, p. 8, Convex Optimization
Two Directions
• As potential users of convex optimization
• As researchers developing convex programming algorithms
Recognizing Least-squares Problems

Straightforward: verify that
• the objective is a quadratic function
• the quadratic form is positive semidefinite

Standard techniques increase flexibility:
• Weighted least-squares
• Regularized least-squares
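Both variants keep the closed-form structure of ordinary least-squares: weighting replaces A^T A by A^T W A, and regularization (Tikhonov/ridge) adds a multiple of the identity. A minimal sketch on illustrative random data (the weights and the value of rho are assumptions for demonstration):

```python
import numpy as np

# Illustrative data, not from the slides
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

# Weighted least-squares: minimize sum_i w_i (a_i^T x - b_i)^2
# Solution: x = (A^T W A)^{-1} A^T W b
w = rng.uniform(0.5, 2.0, size=20)
W = np.diag(w)
x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Regularized least-squares: minimize ||Ax - b||^2 + rho * ||x||^2
# Solution: x = (A^T A + rho I)^{-1} A^T b
rho = 0.1
x_ridge = np.linalg.solve(A.T @ A + rho * np.eye(3), A.T @ b)

print(x_wls, x_ridge)
```

Note that the regularized problem always has a unique solution, since A^T A + rho I is positive definite for any rho > 0 even when A is rank-deficient.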
Recognizing LP Problems

Example: sum of residuals approximation

minimize sum_i |a_i^T x - b_i|

This becomes an LP by introducing variables t_i = |r_i|, i.e., the constraints -t_i <= a_i^T x - b_i <= t_i.

Chebyshev or minimax approximation

minimize max_i |a_i^T x - b_i|

This becomes an LP by introducing a single variable t = max_i |a_i^T x - b_i|, i.e., the constraints -t <= a_i^T x - b_i <= t for all i.
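The Chebyshev reformulation above can be handed directly to an LP solver. A sketch using SciPy's `linprog`, with decision variables z = (x, t) and illustrative random data (the problem sizes are assumptions, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data, not from the slides
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
k, n = A.shape

# Chebyshev approximation as an LP:
#   minimize t  subject to  -t <= a_i^T x - b_i <= t  for all i
# Decision variables z = (x, t); the objective picks out t.
c = np.zeros(n + 1)
c[-1] = 1.0

ones = np.ones((k, 1))
A_ub = np.vstack([np.hstack([A, -ones]),     #  Ax - b <= t*1
                  np.hstack([-A, -ones])])   # -(Ax - b) <= t*1
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
x_cheb, t_opt = res.x[:n], res.x[-1]

# At the optimum, t equals the largest absolute residual
print(t_opt, np.max(np.abs(A @ x_cheb - b)))
```

The sum-of-residuals (l1) version is handled the same way, except with one auxiliary variable t_i per residual instead of a single t.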