Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for
Feasibility-Seeking Methods
Daniel Reem (joint work with Yair Censor)
Department of Mathematics, The Technion, Haifa, Israel
E-mail: [email protected]
http://w3.impa.br/~dream
4 July 2016: 28th European Conference of Operational Research (EURO 2016), Poznan, Poland
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 1 / 24
The convex feasibility problem: a short reminder
Given a family (Cj)j∈J of closed and convex subsets in a given space, say Rd, to compute (approximately) a point

y ∈ C := ⋂_{j∈J} Cj,

assuming C ≠ ∅.
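For intuition, a toy CFP instance can be solved by the classical method of alternating projections; the following is a minimal sketch assuming two illustrative sets in R^2 with closed-form projections (a half-space and a ball):

```python
import numpy as np

# Illustrative CFP instance: C1 = {x : x[0] >= 1} (half-space) and
# C2 = {x : ||x|| <= 2} (ball). Alternating the two projections drives
# the iterates into C = C1 ∩ C2.

def proj_halfspace(x):
    # Projection onto {x : x[0] >= 1}.
    y = x.copy()
    y[0] = max(y[0], 1.0)
    return y

def proj_ball(x, r=2.0):
    # Projection onto the ball of radius r centered at the origin.
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

x = np.array([-3.0, 5.0])
for _ in range(100):
    x = proj_ball(proj_halfspace(x))

print(x)  # approaches (1, sqrt(3)), a point of the intersection
```

Here the limit lies on the boundary of both sets; for interior-overlapping sets the iterates reach C exactly after finitely many steps.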
CFP: Motivation
Represents the solution set of a system of (convex) inequalities
g1(x) ≤ 0
...
gn(x) ≤ 0
whenever Cj = {x : gj(x) ≤ 0} for some function gj .
Has been used in the analysis of various phenomena, including:
sensor networks;
computerized tomography;
data compression;
molecular biology (example in the paper);
many more
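The inequality-system viewpoint can be sketched directly: a point solves the CFP iff every constraint function is nonpositive there. The constraints g1, g2 below are illustrative stand-ins:

```python
# Toy sketch of C_j = {x : g_j(x) <= 0}: feasibility means all g_j(x) <= 0.

def g1(x):  # half-space constraint: x[0] - 1 <= 0
    return x[0] - 1.0

def g2(x):  # ball constraint: ||x||^2 - 4 <= 0
    return x[0] ** 2 + x[1] ** 2 - 4.0

def is_feasible(x, constraints=(g1, g2)):
    return all(g(x) <= 0 for g in constraints)

print(is_feasible((0.5, 1.0)))   # True: satisfies both inequalities
print(is_feasible((3.0, 0.0)))   # False: violates both
```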
CFP: methods
Mainly iterative algorithms (e.g., projections on the subsets).
Subgradient projections: Definition and advantage
An operation of the form

Aj(x) = x − αt, α ≥ 0, t ∈ ∂gj(x).

Less computationally demanding than a standard projection onto a set.
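A minimal sketch of one such step, using the common Polyak-type step size α = max(g(x), 0)/||t||² (an illustrative choice; the definition above only requires α ≥ 0 and t a subgradient):

```python
import numpy as np

# One subgradient projection step A(x) = x - alpha * t with the Polyak-type
# step alpha = max(g(x), 0) / ||t||^2 (illustrative choice of step size).

def subgrad_step(x, g, subgrad):
    gx = g(x)
    if gx <= 0:          # already in the 0-level set: no move needed
        return x
    t = subgrad(x)
    alpha = gx / np.dot(t, t)
    return x - alpha * t

# Example: g(x) = ||x|| - 1, whose 0-level set is the unit ball; for x != 0
# a subgradient is x / ||x||, and one step lands exactly on the unit sphere.
g = lambda x: np.linalg.norm(x) - 1.0
subgrad = lambda x: x / np.linalg.norm(x)

x = subgrad_step(np.array([3.0, 4.0]), g, subgrad)
print(x)  # (0.6, 0.8), the exact projection onto the unit ball
```

The step uses only one function value and one subgradient, which is the computational advantage over an exact projection when the set has no closed-form projector.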
Resilience of iterative algorithms: meaning andmotivation
Meaning: convergence is conserved despite perturbations.
Motivation:
imprecision is inherent: computational errors, noise, etc.
lack of proof: resilience of many algorithms has not been proved.
Superiorization: a recent optimization methodology which uses perturbations in an active way in order to obtain “superior” solutions.
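The flavor of perturbation resilience can be illustrated numerically: adding summable perturbations β_k v_k (bounded v_k, β_k = 1/(k+1)²) to a convergent projection iteration still leaves the limit feasible. A toy sketch only, not the paper's general theorem:

```python
import numpy as np

# Toy sketch of bounded-perturbation resilience: alternate projections onto
# C1 = {x : x[0] >= 1} and C2 = {x : ||x|| <= 2}, perturbing each iterate by
# beta_k * v_k with bounded v_k and summable beta_k = 1/(k+1)^2.
rng = np.random.default_rng(0)

def proj_halfspace(x):
    y = x.copy()
    y[0] = max(y[0], 1.0)
    return y

def proj_ball(x, r=2.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

x = np.array([10.0, -7.0])
for k in range(2000):
    v = rng.uniform(-1.0, 1.0, size=2)            # bounded perturbation
    x = proj_ball(proj_halfspace(x + v / (k + 1) ** 2))

print(x)  # (approximately) satisfies x[0] >= 1 and ||x|| <= 2
```

Because the perturbation magnitudes are summable, their total effect is finite and the unperturbed convergence behavior survives.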
Main results: a schematic description
Introducing and discussing in detail the class of zero-convex functions, a rich class of functions with promising potential
Discussing the SSP method for solving the CFP in a general setting:
zero-convex functions
domain: closed and convex subset of a real Hilbert space
Certain perturbations are allowed without losing convergence
infinitely many sets are allowed
general control sequence (beyond cyclic and almost cyclic)
Convergence: global and weak, sometimes also strong
Computational simulations: for a problem in molecular biology
The class of zero-convex functions
Definition
H is a real Hilbert space.
Ω ⊆ H is nonempty and convex.
Given g : Ω→ R, its 0-level-set is
g≤0 = {x ∈ Ω | g(x) ≤ 0}.
g is said to be zero-convex at the point y ∈ Ω if there exists a vector t ∈ H (called a 0-subgradient of g at y) satisfying
g(y) + 〈t, x − y〉 ≤ 0 ∀x ∈ g≤0.
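For a convex g, any standard subgradient t at y is automatically a 0-subgradient, since g(y) + 〈t, x − y〉 ≤ g(x) ≤ 0 for every x ∈ g≤0. A quick numeric check of the defining inequality for the illustrative choice g(x) = x² − 1:

```python
import numpy as np

# For convex g, a standard subgradient is a 0-subgradient. Check the
# inequality g(y) + <t, x - y> <= 0 for g(x) = x^2 - 1 (so g<=0 = [-1, 1])
# at y = 2 with t = g'(2) = 4.
g = lambda x: x ** 2 - 1.0
y, t = 2.0, 4.0

xs = np.linspace(-1.0, 1.0, 201)      # samples of the 0-level set
vals = g(y) + t * (xs - y)            # left-hand side of the inequality
print(vals.max())                     # -1.0: nonpositive, as required
```

The left-hand side is affine in x, so checking it on the interval endpoints already suffices; the sampling just makes the verification visible.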
0-convex functions (Cont.)
The set of all 0-subgradients of g at y is called the zero-subdifferential of g at y and denoted by ∂0g(y).
A function g satisfying
g(y) + 〈t, x − y〉 ≤ 0 ∀x ∈ g≤0

for all y ∈ Ω will be called 0-convex.
Other notions of subdifferentials exist in the literature, e.g.,
the standard subdifferential
the Clarke subdifferential
the quasi-subdifferential
Mordukhovich's subdifferential
etc.
Ours seems to be new.
0-convex functions: geometric illustration
The hyperplane M = {x ∈ H : 〈t, x − y〉 = −g(y)} separates g≤0 and y:
Zero-convex functions: main characterization
Proposition
If g is zero-convex, then its zero-level-set g≤0 is convex.
If g≤0 is closed and convex, then g is zero-convex. In fact, we have aformula for the 0-subgradients using separating hyperplanes.
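One way to realize such a formula (a sketch of our own illustrative construction, which may differ in details from the paper's): if g(y) > 0 and P is the projection of y onto the closed convex set g≤0, then t = g(y)(y − P)/‖y − P‖² is a 0-subgradient, because the projection characterization gives 〈y − P, x − y〉 ≤ −‖y − P‖² for every x ∈ g≤0.

```python
import numpy as np

# Building a 0-subgradient via projection onto the 0-level set.
# Illustrative example: g(x) = sqrt(||x||) - 1, which is not convex, yet
# its 0-level set is the closed unit ball (closed and convex).
g = lambda x: np.sqrt(np.linalg.norm(x)) - 1.0

def zero_subgradient(y):
    n = np.linalg.norm(y)
    P = y if n <= 1 else y / n          # projection onto the unit ball
    d = y - P
    return g(y) * d / np.dot(d, d)      # t = g(y) (y - P) / ||y - P||^2

y = np.array([0.0, 4.0])
t = zero_subgradient(y)

# Check g(y) + <t, x - y> <= 0 on random samples x of the 0-level set.
rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, size=(500, 2))
xs = xs[np.linalg.norm(xs, axis=1) <= 1.0]   # keep points inside the ball
lhs = g(y) + (xs - y) @ t
print(lhs.max() <= 0)  # True
```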
Zero-convex functions: examples
Example
Any convex function g : Rn → R
Example
Any nonpositive function g is 0-convex at each y with t = 0.
Example
Any lower semicontinuous quasiconvex function is zero-convex.
Such functions frequently appear in generalized convexity theory.
In particular, certain quadratic functions on subsets of Rm arising in economics
0-convex functions: additional examples (Cont.)
Example
Multivariate polynomials: e.g., g : R2 → R defined by
g(x1, x2) = x1^2 + x2^2 − x1^4 x2^4 + x1^6 x2^6 / 4 − 0.3.
This g is zero-convex but not quasiconvex.
Figure: The reverse perspective.
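The failure of quasiconvexity is easy to confirm numerically: g takes a strictly larger value at an interior point of a segment than at both endpoints, so some sublevel set is nonconvex (the endpoints below were found by inspection):

```python
# Numeric check that this polynomial is not quasiconvex: along the segment
# from a = (0, 0) to b = (1.22, 1.22), the value at the intermediate point
# m = (0.9, 0.9) exceeds max(g(a), g(b)).
def g(x1, x2):
    return x1**2 + x2**2 - x1**4 * x2**4 + x1**6 * x2**6 / 4 - 0.3

a, b, m = (0.0, 0.0), (1.22, 1.22), (0.9, 0.9)   # m lies on the segment [a, b]
print(g(*a), g(*b), g(*m))  # g(m) > max(g(a), g(b)): quasiconvexity fails
```

Since g(0, 0) = −0.3 < 0, the 0-level set is nonempty, and (per the slide) it is convex, so g is zero-convex despite not being quasiconvex.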
0-convex functions: additional examples (Cont.)
Example
The Voronoi function:
p ∈ Ω and A ⊆ H are given.
the distance d(p,A) between p and A is positive.
g : Ω→ R is defined by
g(x) := d(x, p) − d(x, A) ∀x ∈ Ω.
g is zero-convex but usually not quasiconvex
g≤0 is the Voronoi cell of p with respect to A.
Remark: Voronoi diagrams appear in numerous places in science and technology and have diverse applications.
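The claim that g≤0 is the Voronoi cell of p can be checked directly for a finite site set A (illustrative data):

```python
import numpy as np

# Voronoi function g(x) = d(x, p) - d(x, A) for a finite site set A.
# g(x) <= 0 exactly when x is at least as close to p as to every site in A,
# i.e. when x belongs to p's Voronoi cell.
p = np.array([0.0, 0.0])
A = np.array([[2.0, 0.0], [0.0, 2.0]])

def g(x):
    d_p = np.linalg.norm(x - p)
    d_A = min(np.linalg.norm(x - a) for a in A)
    return d_p - d_A

print(g(np.array([0.5, 0.5])))   # negative: inside p's Voronoi cell
print(g(np.array([1.8, 0.0])))   # positive: closer to the site (2, 0)
```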
The algorithm
Algorithm
The Sequential Subgradient Projections (SSP) Method with Perturbations
Initialization: x0 ∈ Ω is arbitrary.
Iterative Step:
xn+1 = PΩ( xn − λn (gi(n)(xn)/‖tn‖²) tn + bn ), if gi(n)(xn) > 0,
xn+1 = xn, if gi(n)(xn) ≤ 0,
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 15 / 24
The algorithm (Cont.)
λn = relaxation parameters ∈ (ε1, 2 − ε2),
tn = 0-subgradients ∈ ∂0 gi(n)(xn),
bn = error terms.
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 16 / 24
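A minimal sketch of the iterative step, under simplifying assumptions not made in the talk: Ω = Rᵈ (so PΩ is the identity), no perturbations (bn = 0), a fixed relaxation λn = λ, cyclic control, and differentiable convex gj so that the gradient serves as the 0-subgradient. All function names are hypothetical. It seeks a point in the intersection of two disks:

```python
def ssp(gs, subgrads, x0, lam=1.0, tol=1e-9, max_iter=10000):
    """Sequential subgradient projections, cyclic control, bn = 0, Omega = R^d."""
    x = list(x0)
    for n in range(max_iter):
        i = n % len(gs)                      # cyclic control sequence i(n)
        gx = gs[i](x)
        if gx > 0:                           # constraint i violated: subgradient step
            t = subgrads[i](x)
            norm2 = sum(ti * ti for ti in t)
            x = [xi - lam * gx / norm2 * ti for xi, ti in zip(x, t)]
        if all(g(x) <= tol for g in gs):     # (approximate) feasibility reached
            return x
    return x

# Two convex constraints: disks of radius 2 centred at (0, 0) and (2, 0).
g1 = lambda x: x[0]**2 + x[1]**2 - 4.0
g2 = lambda x: (x[0] - 2.0)**2 + x[1]**2 - 4.0
dg1 = lambda x: [2 * x[0], 2 * x[1]]
dg2 = lambda x: [2 * (x[0] - 2.0), 2 * x[1]]

x = ssp([g1, g2], [dg1, dg2], x0=[5.0, 3.0])
print(g1(x) <= 1e-6, g2(x) <= 1e-6)   # True True
```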
Algorithm: geometric illustration when Ω = H
Mn = an arbitrary separating (closed) hyperplane between xn and the sublevel set {x : gi(n)(x) ≤ 0},
mn = the projection of xn onto Mn.
Then: xn+1 = (1 − λn)xn + λn mn + bn.
Figure: Illustration when 0 < λn < 1 and Ω = H.
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 17 / 24
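For a smooth convex gi(n), the linearization at xn supplies such a separating hyperplane, and projecting onto it reproduces the subgradient step. A small numerical check of this equivalence (illustrative Python, bn = 0):

```python
def update_via_subgradient(x, g_x, t, lam):
    """Algorithm form: x - lam * (g(x)/||t||^2) * t."""
    norm2 = sum(ti * ti for ti in t)
    return [xi - lam * g_x / norm2 * ti for xi, ti in zip(x, t)]

def update_via_hyperplane(x, g_x, t, lam):
    """Geometric form: (1 - lam) * x + lam * m, where m = projection of x on
    the separating hyperplane {y : g(x) + <t, y - x> = 0}."""
    norm2 = sum(ti * ti for ti in t)
    m = [xi - g_x / norm2 * ti for xi, ti in zip(x, t)]
    return [(1 - lam) * xi + lam * mi for xi, mi in zip(x, m)]

# g(x) = x1^2 + x2^2 - 4 at x = (3, 1): g(x) = 6, gradient t = (6, 2); lam = 0.7.
x, g_x, t, lam = [3.0, 1.0], 6.0, [6.0, 2.0], 0.7
a = update_via_subgradient(x, g_x, t, lam)
b = update_via_hyperplane(x, g_x, t, lam)
print(all(abs(ai - bi) < 1e-12 for ai, bi in zip(a, b)))   # True
```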
The algorithm (Cont.)
Control sequence: more general than cyclic and almost cyclic controls.
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 18 / 24
Conditions for convergence
Condition
C = ⋂j∈J Cj = ⋂j∈J gj≤0 ≠ ∅.
Condition
Each function gj is 0-convex, uniformly continuous on closed and bounded subsets, and weakly sequentially lower semicontinuous.
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 19 / 24
Conditions for convergence (Cont.)
Condition
For a fixed M > d(x0, C), the following inequality is satisfied:
‖bn‖ ≤ min( M, ε1 ε2 hn² / (2(5M + 4hn)) ) for all n ∈ N,
where
hn = gi(n)(xn)/‖tn‖ if gi(n)(xn) > 0, and hn = 0 if gi(n)(xn) ≤ 0.
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 20 / 24
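The admissible perturbation size is explicit and easy to evaluate; a small helper (illustrative Python; ε1, ε2 are the relaxation margins from the algorithm) computes the right-hand side of the bound:

```python
def perturbation_bound(M, eps1, eps2, h_n):
    """Maximal admissible ||b_n||: min(M, eps1*eps2*h_n^2 / (2*(5M + 4*h_n)))."""
    return min(M, eps1 * eps2 * h_n ** 2 / (2.0 * (5.0 * M + 4.0 * h_n)))

# When the current constraint is satisfied, h_n = 0 and no perturbation is allowed.
print(perturbation_bound(10.0, 0.1, 0.1, 0.0))        # 0.0
# A positive residual h_n permits a small but positive perturbation.
print(perturbation_bound(10.0, 0.1, 0.1, 1.0) > 0.0)  # True
```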
Conditions for convergence (Cont.)
Condition
There exists a K > 0 such that ‖tn‖ ≤ K for all n ∈ N.
Holds in many cases (examples mentioned in the paper).
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 21 / 24
The convergence theorem
Theorem
Under the above conditions, the algorithm converges weakly to a point
y ∈ F := B[x0, 2M] ∩ C
from any initial point x0. If int(F) ≠ ∅, then the convergence is strong.
Clarification: B[x0, 2M] is the closed ball of radius 2M and center x0.
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 22 / 24
A remark on approximate minimization
Assume:
f : Ω→ R is quasiconvex, uniformly continuous on bounded sets, etc;
C = ⋂j∈J gj≤0;
Goal: to find an α-approximate minimizer of f over C ⊆ Ω, assuming α is an upper bound for inf f;
Solution: apply the algorithm with g−1 = f − α (still quasiconvex and hence 0-convex) together with the gj, j ∈ J (so that J ∪ {−1} is the new index set). We obtain x ∈ C such that f(x) ≤ α.
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 23 / 24
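A sketch of this reduction (illustrative Python; differentiable convex data, Ω = R², cyclic control, no perturbations — all simplifying assumptions): drive f(x) = x1 + x2 below a prescribed level α over a disk by treating f − α as one more constraint.

```python
def feasibility(gs, subgrads, x0, lam=1.0, tol=1e-9, max_iter=10000):
    """Cyclic subgradient projections for the constraint family gs."""
    x = list(x0)
    for n in range(max_iter):
        i = n % len(gs)
        gx = gs[i](x)
        if gx > 0:
            t = subgrads[i](x)
            norm2 = sum(ti * ti for ti in t)
            x = [xi - lam * gx / norm2 * ti for xi, ti in zip(x, t)]
        if all(g(x) <= tol for g in gs):
            return x
    return x

alpha = -2.0                                 # known upper bound for inf f
f = lambda x: x[0] + x[1]                    # objective, treated as a constraint
gs = [lambda x: f(x) - alpha,                # g_{-1} = f - alpha
      lambda x: x[0]**2 + x[1]**2 - 4.0]     # disk of radius 2
subgrads = [lambda x: [1.0, 1.0],
            lambda x: [2 * x[0], 2 * x[1]]]

x = feasibility(gs, subgrads, x0=[3.0, 3.0])
print(f(x) <= alpha + 1e-6 and x[0]**2 + x[1]**2 <= 4.0 + 1e-6)   # True
```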
The End
The paper and the talk can be found online:
Math. Prog. (Ser. A) 152 (2015), 339-380,
arXiv:1405.1501
http://w3.impa.br/~dream/talks
Censor, Reem (Haifa, Technion) 0-convex, perturbation, subgrad. proj. July 2016 24 / 24