Submodular Maximization Applied to Marketing Over Social Networks
Vahab Mirrokni, Google Research, NYC
Marketing over Social Networks
I Online Social Networks: MySpace, Facebook.
I Monetizing Social Networks.
I Viral Marketing: Word-of-Mouth Advertising.
I Users influence each others' valuations on a social network.
I Marketing policy: In what order and at what price do we offer an item to buyers?
Application 2: Guaranteed Banner Advertisement
I Online Advertisement: $20 billion annual revenue!
I Banner Advertisement: 22% of the current revenue.
[Figure: bipartite graph between advertisers (with bids b_i, b_j) and impressions.]
I Guaranteed Delivery: We pay a penalty for not meeting the demand of each advertiser.
I Problem: Which set of advertisers should we commit to?
[Diagram: Submodular Maximization as the common core of Marketing over Social Networks, Guaranteed Banner Ad Allocation, and Revenue Maximization.]
Outline
I Submodularity: Definitions, Applications.
I Maximizing non-monotone Submodular Functions
  I Approximation Algorithms.
  I Hardness Results.
I Application 1: Marketing over Social Networks.
I Application 2: Guaranteed Banner Ad Allocation.
Submodular Functions
Model the law of diminishing returns and/or economies of scale.
Generalize concave functions to set functions: f is defined on subsets of X.

Definition. A set function f : 2^X → R is submodular iff for any S, T,
    f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T).

Decreasing Marginal Value: f is submodular iff for all S ⊂ T and j ∉ T,
    f(S ∪ {j}) − f(S) ≥ f(T ∪ {j}) − f(T).
Examples of Submodular Functions
I Simple Examples: Additive functions, Concave functions.
  I Additive function: f(S) = Σ_{j∈S} c_j.
  I Concave function: f(S) = g(|S|) for a concave function g.
I Set coverage: f(S) = |⋃_{j∈S} A_j|.
I Cuts in graphs and hypergraphs: f(S) = e(S, S̄), the number of edges crossing the cut.
I Other places: Maximum Facility Location; utility functions in economics; rank functions of matroids.

Two categories:
I Monotone functions, i.e., f(S) ≤ f(T) for S ⊂ T.
I Non-monotone functions: cut functions, Maximum Facility Location.
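To make the definitions concrete, here is a minimal Python sketch (my own illustration, not from the talk) that builds a set-coverage function and a graph-cut function and brute-force checks the submodular inequality f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T) on a tiny ground set.

```python
from itertools import combinations

def powerset(X):
    """All subsets of X as frozensets (small ground sets only)."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_submodular(f, X):
    """Brute-force check of f(S)+f(T) >= f(S|T)+f(S&T) for all S, T."""
    subs = powerset(X)
    return all(f(S) + f(T) >= f(S | T) + f(S & T) for S in subs for T in subs)

# Set coverage: f(S) = |union of A_j for j in S|  (monotone submodular).
A = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
coverage = lambda S: len(set().union(*(A[j] for j in S))) if S else 0

# Graph cut: f(S) = number of edges with exactly one endpoint in S
# (non-monotone submodular).
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)

print(is_submodular(coverage, frozenset(A)))            # True
print(is_submodular(cut, frozenset({1, 2, 3, 4})))      # True
```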
Submodular Maximization
Given: Value oracle for a submodular f : 2^X → R.
[Figure: value query — send a set S to the submodular oracle, receive f(S).]
Goal: Maximize f(S) over all sets S ⊆ X, using a small number of queries.
I Min-Cut is poly-time solvable.
I Submodular function minimization is poly-time solvable [Schrijver, Fleischer-Fujishige-Iwata 2001].
I Max-Cut is NP-hard.
I α-approximation: output is at least α times OPT.
I For most of this talk, assume f : 2^X → R+ is non-negative.
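To illustrate the value-oracle model, a tiny Python sketch (my own illustration; the `CountingOracle` wrapper and names are assumptions) that counts value queries and shows why exhaustive maximization is hopeless: it spends 2^n queries.

```python
from itertools import combinations

class CountingOracle:
    """Wrap a set function and count how many value queries are made."""
    def __init__(self, f):
        self.f, self.queries = f, 0
    def __call__(self, S):
        self.queries += 1
        return self.f(frozenset(S))

def brute_force_max(oracle, X):
    """Exhaustive search: optimal, but uses 2^|X| value queries."""
    best_set, best_val = frozenset(), oracle(frozenset())
    for r in range(1, len(X) + 1):
        for S in combinations(X, r):
            v = oracle(S)
            if v > best_val:
                best_set, best_val = frozenset(S), v
    return best_set, best_val

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
oracle = CountingOracle(cut)
print(brute_force_max(oracle, [1, 2, 3, 4]))   # a max cut of value 3
print(oracle.queries)                          # 16 = 2^4 value queries
```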
Applications of Submodular Maximization
I Maximizing Monotone Submodular Functions (with cardinality constraint):
  I Maximum Set Coverage.
  I Maximizing influence in social networks (Kempe, Kleinberg, Tardos, KDD).
  I Optimal sensor installation for outbreak detection (LKGFVG, KDD).
I Maximizing Non-monotone Submodular Functions:
  I Maximum Facility Location.
  I Segmentation Problems (Kleinberg, Papadimitriou, Raghavan, JACM).
  I Least core value of general supermodular cost games (Schulz, Uhan, APPROX).
  I Optimal marketing over social networks (Hartline, M., Sundararajan, WWW).
  I Revenue maximization for banner advertisement (Feige, Immorlica, M., Nazerzadeh, WWW).
Outline
I Submodularity.
I Maximizing non-monotone Submodular Functions
  I Approximation Algorithms.
  I Hardness Results.
I Application 1: Marketing over Social Networks.
I Application 2: Guaranteed Banner Ad Allocation.
Related work
I Maximizing monotone submodular functions with cardinality constraints (|S| ≤ k): Greedy gives a (1 − 1/e)-approximation [Nemhauser-Wolsey-Fisher '78, Feige '98].
I Maximizing non-monotone submodular functions has been studied in OR [Lee-Nemhauser-Wang '96, Goldengorin-Tijssen-Tso '99]; no guaranteed approximation factor was known.
I For special cases, approximation algorithms are known:
  I 0.878-approx for Max Cut [Goemans-Williamson '95].
  I 0.874-approx for Max Di-Cut [Livnat-Lewin-Zwick '02].
  I (1 − 1/2^{k−1})-approx for Max Cut in k-uniform hypergraphs.
  I NP-hard to improve on the 7/8-approximation for k = 4 [Hastad '01].
Our Results: Non-negative submodular functions
Feige, M., Vondrak (FOCS'07)
1. Approximation algorithms for maximizing non-monotone non-negative submodular functions (examples: cut functions, marketing over social networks, core value of supermodular games):
  I 0.33-approximation (deterministic local search).
  I 0.40-approximation (randomized "smooth local search").
  I 0.50-approximation for symmetric functions.
2. Hardness results:
  I For any fixed ε > 0, a (1/2 + ε)-approximation would require exponentially many queries.
  I Submodular functions with succinct representation: NP-hard to achieve a (3/4 + ε)-approximation.
Submodular Maximization: Local Search
Local Operations:
I Add v: S′ = S ∪ {v}.
I Remove v: S′ = S \ {v}.
A local operation is improving if f(S′) > f(S).
[Figure: example run on a 12-node graph cut function. Starting from S = {4} with f(S) = 10, improving operations take S through {4,5} (f = 17), {4,5,7} (f = 23), {4,5,7,3} (f = 27), {4,7,3} (f = 30), {4,7,3,6} (f = 33), and finally the local optimum L = {4,3,6} with f(L) = 34. Output L or L̄.]
Submodular Maximization: Local Search

Local Search Algorithm:
I Start with S = {v}, where v is the singleton of maximum value.
I While there is an improving local operation:
  1. Perform a local operation such that f(S′) > (1 + ε/n²) f(S).
I Return the better of f(S) and f(S̄).

Theorem (Feige, M., Vondrak)
The Local Search Algorithm returns at least (1/3 − ε/n)·OPT; for symmetric functions, at least (1/2 − ε/n)·OPT (tight analysis).
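A minimal Python sketch of the deterministic local search described above (my own illustration; the `eps` value, the starting singleton rule, and the cut-function example follow the slide, everything else is a simplification). It repeatedly applies an add/remove move that improves f by a (1 + ε/n²) factor and returns the better of the local optimum and its complement.

```python
def local_search_max(f, X, eps=0.01):
    """Deterministic local search for non-negative submodular f (sketch).

    Returns the better of the local optimum S and its complement X \\ S;
    the theorem guarantees this is at least (1/3 - eps/n) * OPT.
    """
    X = frozenset(X)
    n = max(len(X), 1)
    S = max((frozenset({v}) for v in X), key=f)        # best singleton start
    improved = True
    while improved:
        improved = False
        for v in X:
            S2 = S | {v} if v not in S else S - {v}    # add or remove v
            if f(S2) > (1 + eps / n**2) * f(S):        # significant improvement
                S, improved = S2, True
                break
    return max(S, X - S, key=f)

# Example: cut function of a small graph.
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (2, 5)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
S = local_search_max(cut, range(1, 6))
print(S, cut(S))
```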
A Structure Lemma

Lemma. For a local optimal solution L, and any subset C ⊂ L or superset L ⊂ C, f(L) ≥ f(C).

If T_0 ⊂ T_1 ⊂ T_2 ⊂ · · · ⊂ T_k = L ⊂ T_{k+1} ⊂ · · · ⊂ T_n, then
    f(T_0) ≤ f(T_1) ≤ · · · ≤ f(T_{k−1}) ≤ f(L) ≥ f(T_{k+1}) ≥ · · · ≥ f(T_n).

Proof: Base of induction: f(T_{k−1}) ≤ f(L) ≥ f(T_{k+1}).
If T_{i+1} \ T_i = {a_i} (for i < k), then by submodularity and local optimality of L:
    f(T_{i+1}) − f(T_i) ≥ f(L) − f(L \ {a_i}) ≥ 0  ⇒  f(T_{i+1}) ≥ f(T_i).
Therefore f(T_0) ≤ f(T_1) ≤ f(T_2) ≤ · · · ≤ f(L); the superset direction is symmetric.
Proof of the Theorem (via the Structure Lemma)

Theorem (Feige, M., Vondrak)
The Local Search Algorithm returns at least (1/3 − ε/n)·OPT.

Proof: Find a local optimum L and return either L or L̄. Consider the actual optimum C. By the structure lemma,
I f(L) ≥ f(C ∩ L)
I f(L) ≥ f(C ∪ L)
Hence, again by submodularity,
    2f(L) + f(L̄) ≥ f(C ∩ L) + f(C ∪ L) + f(L̄)
                 ≥ f(C ∩ L) + f(C ∩ L̄) + f(X)
                 ≥ f(C) + f(∅) ≥ OPT.
Consequently, either f(L) or f(L̄) must be at least (1/3)·OPT.
Our Results
Feige, M., Vondrak [FMV]
1. Approximation algorithms for maximizing non-negative submodular functions:
  I 0.33-approximation (deterministic local search).
  I 0.40-approximation (randomized "smooth local search").
  I 0.50-approximation for symmetric functions.
2. Hardness results:
  I A (1/2 + ε)-approximation would require exponentially many queries.
  I Submodular functions with succinct representation: NP-hard to achieve a (3/4 + ε)-approximation.
Random Subsets
A(p) is a random subset of A: each element of A is picked independently with probability p.

Sampling Lemma
    E[f(A(p))] ≥ p·f(A) + (1 − p)·f(∅).
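A quick Monte Carlo illustration of the sampling lemma (my own sketch, not from the talk): sample A(p) many times for a cut function and compare the empirical mean of f(A(p)) with the bound p·f(A) + (1 − p)·f(∅).

```python
import random

def sample(A, p):
    """A(p): keep each element of A independently with probability p."""
    return frozenset(v for v in A if random.random() < p)

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)

A, p, trials = frozenset({1, 3, 5}), 0.5, 20000
mean = sum(cut(sample(A, p)) for _ in range(trials)) / trials
bound = p * cut(A) + (1 - p) * cut(frozenset())
print(f"E[f(A(p))] ~ {mean:.3f}  >=  bound {bound:.3f}")
```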
The Smooth Local Search Algorithm
I For any set A, let R_A = A(2/3) ∪ Ā(1/3): a random set that keeps each element of A with probability 2/3 and each element outside A with probability 1/3, so it is biased towards picking elements from A.
Let Φ(A) = E[f(R_A)] — a smoothed variant of f(A).

Algorithm:
I Perform local search with respect to Φ(A).
I When a local optimum A is found, return R_A or Ā.

Theorem (Feige, M., Vondrak)
The Smooth Local Search algorithm returns at least 0.40·OPT.
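A rough Python sketch of smooth local search under the assumptions above (my own simplification, not the paper's exact procedure: Φ is estimated by sampling, and the number of samples, the improvement threshold, and the plain hill-climbing loop are all assumptions).

```python
import random

def sample_biased(A, X, p_in=2/3, p_out=1/3):
    """R_A: keep elements of A w.p. p_in and elements outside A w.p. p_out."""
    return frozenset(v for v in X
                     if random.random() < (p_in if v in A else p_out))

def smooth_local_search(f, X, samples=2000, seed=0):
    random.seed(seed)
    X = frozenset(X)
    phi = lambda A: sum(f(sample_biased(A, X)) for _ in range(samples)) / samples
    A, improved = frozenset(), True
    while improved:                        # hill-climb on the smoothed value Phi
        improved = False
        cur = phi(A)
        for v in X:
            A2 = A | {v} if v not in A else A - {v}
            if phi(A2) > cur + 1e-3:       # crude improvement threshold (assumption)
                A, improved = A2, True
                break
    R = sample_biased(A, X)                # candidate 1: the biased sample R_A
    comp = X - A                           # candidate 2: the complement of A
    return max(R, comp, key=f)

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (2, 5)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
S = smooth_local_search(cut, range(1, 6))
print(S, cut(S))
```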
Proof Sketch of the Algorithm
Analysis: more complicated, using the sampling lemma for 3 sets. Let
I A = local optimum found by our algorithm,
I R = A(2/3) ∪ Ā(1/3), our random set,
I C = optimum.
Main claims:
1. E[f(R)] ≥ E[f(R ∪ (A ∩ C))].
2. E[f(R)] ≥ E[f(R ∩ (A ∪ C))].
3. (9/20)·E[f(R ∪ (A ∩ C))] + (9/20)·E[f(R ∩ (A ∪ C))] + (1/10)·f(Ā) ≥ 0.4·OPT.
Our Results
1. Approximation algorithms for maximizing non-monotone submodular functions:
  I 0.33-approximation (deterministic local search).
  I 0.40-approximation (randomized "smooth local search").
  I 0.5-approximation for symmetric functions.
2. Hardness results:
  I It is impossible to improve the factor 1/2:
  I A (1/2 + ε)-approximation would require exponentially many queries.
  I Submodular functions with succinct representation: NP-hard to achieve a (3/4 + ε)-approximation.
Proof Idea for the Hardness Result
Goal: Find two functions f and g which look identical to a typical query with high probability, yet max g(S) ≐ 2·max f(S).

Consider two functions f_1, g_1 : [0,1]² → R+:
1. f_1(x, y) = (x + y)(2 − x − y).   ("complete graph cut")
2. g_1(x, y) = 2x(1 − y) + 2(1 − x)y.   ("bipartite graph cut")

Observe: f_1(x, x) = g_1(x, x) ⇒ modify g_1(x, y) to g_2(x, y) such that f_1(x, y) = g_2(x, y) whenever |x − y| < ε.

Mapping to set functions: Let X = X_1 ∪ X_2, and with x = |S ∩ X_1|/|X_1| and y = |S ∩ X_2|/|X_2|, set
    f(S) = f_1(x, y)   and   g(S) = g_2(x, y).
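A tiny numeric check of this construction (my own sketch): on a grid over [0,1]², f_1 and g_1 agree on the diagonal, but the maximum of g_1 is twice the maximum of f_1 — which is what makes the two set functions indistinguishable to typical queries yet far apart in optimum value.

```python
import itertools

f1 = lambda x, y: (x + y) * (2 - x - y)                # "complete graph cut"
g1 = lambda x, y: 2 * x * (1 - y) + 2 * (1 - x) * y    # "bipartite graph cut"

grid = [i / 100 for i in range(101)]
pts = list(itertools.product(grid, grid))

max_f = max(f1(x, y) for x, y in pts)                  # 1.0, attained when x + y = 1
max_g = max(g1(x, y) for x, y in pts)                  # 2.0, attained at (1,0), (0,1)
diag_gap = max(abs(f1(x, x) - g1(x, x)) for x in grid) # 0.0 on the diagonal

print(max_f, max_g, diag_gap)
```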
Similar Technique for Combinatorial Auctions
I Using a similar technique, we can show information-theoretic lower bounds for combinatorial auctions.
[Figure: buyers 1..6 with valuation functions f_1(S_1), ..., f_6(S_6) over disjoint bundles S_1, ..., S_6 of items.]
I Goal: Partition items to maximize social welfare, i.e., Σ_i f_i(S_i).

Theorem (M., Schapira, Vondrak, EC'08)
Achieving a factor better than 1 − 1/e needs an exponential number of value queries.
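For intuition about the welfare-maximization problem itself, a brute-force Python sketch (illustrative only; the instance is my own and real instances are far too large for this) that tries every partition of a few items among buyers with coverage valuations and reports the best social welfare.

```python
from itertools import product

# Two buyers with (submodular) coverage valuations over items {a, b, c, d}.
interest = {
    "buyer1": {"a": {1, 2}, "b": {2, 3}, "c": {3}},      # item -> covered elements
    "buyer2": {"b": {7, 8}, "c": {8}, "d": {9, 10}},
}

def value(buyer, bundle):
    sets = [interest[buyer].get(item, set()) for item in bundle]
    return len(set().union(*sets)) if sets else 0

items, buyers = ["a", "b", "c", "d"], list(interest)
best = max(
    (sum(value(b, [it for it, owner in zip(items, assign) if owner == b])
         for b in buyers), assign)
    for assign in product(buyers, repeat=len(items))     # every item to one buyer
)
print(best)   # (maximum social welfare, the corresponding assignment)
```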
Extra Constraints
I Cardinality constraints: |S| ≤ k.
I Knapsack constraints: Σ_{i∈S} w_i ≤ C.
I Matroid constraints.
I Known results for monotone submodular functions:
  I One matroid or cardinality constraint: 1 − 1/e (NWF78, Vondrak08).
  I k knapsack constraints: 1 − 1/e (Sviridenko01, KST09).
  I k matroid constraints: 1/(k+1) (NWF78).
I (Lee, M., Nagarajan, Sviridenko, STOC 2009) Maximizing non-monotone submodular functions:
  I One matroid or cardinality constraint: 1/4.
  I k knapsack constraints: 1/5.
  I k matroid constraints: 1/(k + 2 + 1/k).
  I k partition matroid constraints: 1/(k + 1 + 1/(k−1)).
Extra Constraints: Algorithms
Lee, M., Nagarajan, Sviridenko, STOC'09
I Local search algorithms.
I Cardinality constraints: 1/4.
  I Local search with add, remove, and swap operations (see the sketch below).
I k knapsack or budget constraints: 1/5.
  1. Solve a fractional variant on small elements.
  2. Round the fractional solution for small elements.
  3. Output the better of the solution for small elements and the solution for large elements.
I k matroid constraints: 1/(k + 2 + 1/k).
  I Local search: delete operation and more complicated exchange operations.
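A minimal sketch of local search with add/remove/swap moves under a cardinality constraint |S| ≤ k (my own simplified illustration of the idea, using a plain "any improvement" rule rather than the paper's threshold and repetition tricks).

```python
def constrained_local_search(f, X, k):
    """Local search for max f(S) subject to |S| <= k: add / remove / swap moves."""
    X = list(X)
    S, improved = frozenset(), True
    while improved:
        improved = False
        candidates = []
        candidates += [S | {v} for v in X if v not in S and len(S) < k]      # add
        candidates += [S - {v} for v in S]                                    # remove
        candidates += [(S - {u}) | {v} for u in S for v in X if v not in S]  # swap
        for S2 in candidates:
            if f(S2) > f(S):
                S, improved = S2, True
                break
    return S

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (2, 5), (1, 5)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
S = constrained_local_search(cut, range(1, 6), k=2)
print(S, cut(S), len(S) <= 2)
```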
Approximating Everywhere
I Can we learn a submodular function with a polynomial number of queries?
I After a polynomial number of queries, can we construct an oracle that approximates f by f′?
[Figure: query sets S_1, ..., S_k against the submodular oracle, then build an approximate oracle answering f′(S) for any S.]
I Goal: Approximate f by f′.

Theorem (Goemans, Iwata, Harvey, M., SODA'09)
Achieving a factor better than √n needs an exponential number of value queries, even for rank functions of matroids.

Theorem (Goemans, Iwata, Harvey, M., SODA'09)
After a polynomial number of value queries to a monotone submodular function, we can approximate the function everywhere within O(√n · log n).
Outline
I Submodularity.
I Maximizing non-monotone Submodular Functions.
I Application 1: Marketing over Social Networks.
I Application 2: Guaranteed Banner Ad Allocation.
Marketing over Social Networks
I Online Social Networks: MySpace, Facebook.
I Monetizing Social Networks.
I Viral Marketing: Word-of-Mouth Advertising.
I Users influence each others' valuations on a social network.
I Marketing policy: In what order and at what price do we offer an item to buyers?
Application 1: Marketing over Social Networks
I Online Social Networks: MySpace, Facebook.
I Viral Marketing: Word-of-Mouth Advertising.
I Users influence each others' valuations on a social network.
  I Especially for networked goods, like Zune, Sprint, or Verizon.
I Model the influence among users by submodular (or concave) functions:
  I f_i(S) = g(|N_i ∩ S|).
[Figure: buyer C's valuation sequence is ($100, $110, $115, $118): it rises from $100 to $110, $115, and $118 as more of C's neighbors adopt the item.]
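A small Python sketch of the neighbor-based influence model f_i(S) = g(|N_i ∩ S|) (my own illustration; the concave g and the valuation numbers mirror the ($100, $110, $115, $118) example on the slide, and the network is made up).

```python
# Concave "value of adopting neighbors": 0 -> $100, 1 -> $110, 2 -> $115, 3 -> $118.
g = {0: 100, 1: 110, 2: 115, 3: 118}

neighbors = {                      # N_i for each buyer in a tiny social network
    "C": {"A", "B", "D"},
    "A": {"C", "B"},
    "B": {"A", "C"},
    "D": {"C"},
}

def influence(i, S):
    """f_i(S) = g(|N_i ∩ S|): buyer i's valuation once the set S has adopted."""
    adopted_neighbors = len(neighbors[i] & set(S))
    return g[min(adopted_neighbors, max(g))]   # cap at the last defined point of g

print(influence("C", set()))             # 100
print(influence("C", {"A"}))             # 110
print(influence("C", {"A", "B"}))        # 115
print(influence("C", {"A", "B", "D"}))   # 118
```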
Marketing Model
I Given: A prior (probability distribution) P_i(S) on the valuation function of user i, given that a set S of users has already adopted the item.
I Goal: Design a marketing policy to maximize the expected revenue.
I Marketing policy: Visit buyers one by one and offer them a price.
  I Ordering of buyers.
  I Pricing at each step.
I Optimal (myopic) pricing: the price that maximizes revenue at each step (ignoring the future influence).
I Let f_i(S) be the optimal revenue from buyer i using the optimal (myopic) price, given that the set S of buyers has bought the item.
I We call this function f_i the influence function.
Marketing Model: Example
I Marketing policy: In what order and at what price do we offer a digital good to buyers?
I Each buyer has a monotone submodular influence function.
[Figure: four buyers with valuation sequences (100, 105), (60, 70), (50, 60, 65), and (100, 110, 115, 118); the policy offers them prices 100, 70, 65, and 118 in turn, each buyer's valuation having risen with earlier adoptions.]
Marketing Model
Marketing policy: In what order and at what price do we offer a digital good to buyers?
I Given: A prior (probability distribution) P_i(S) on the valuation of user i, given that a set S of users has already adopted the item.
I Goal: Design a marketing policy to maximize the expected revenue.
I The problem is NP-hard.
I Hartline, M., and Sundararajan [HMS] (WWW).
Influence & Exploit Strategies
Influence & Exploit strategies:
1. Influence: Give the item for free to a set A of influential buyers.
2. Exploit: Apply the following strategy to the rest:
  I Optimal myopic pricing.
  I Random order.
Optimal (myopic) pricing:
I At each step, offer the price that maximizes the current expected revenue (ignoring the future influence).
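A simplified Python sketch of an influence-and-exploit run (my own illustration: the deterministic valuation model and "price equals current value" rule are assumptions standing in for the priors P_i(S) and the myopic pricing step of the talk).

```python
import random

# Deterministic toy valuations: buyer i's value is g(#adopting neighbors).
g = [100, 110, 115, 118]
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
value = lambda i, adopters: g[min(len(neighbors[i] & adopters), len(g) - 1)]

def influence_and_exploit(free_set, seed=0):
    """Give the item to free_set for free, then visit the rest in random order,
    offering each buyer exactly their current (myopic) value as the price."""
    random.seed(seed)
    adopters, revenue = set(free_set), 0.0
    rest = [i for i in neighbors if i not in free_set]
    random.shuffle(rest)
    for i in rest:
        price = value(i, adopters)      # with deterministic values, buyer accepts
        revenue += price
        adopters.add(i)
    return revenue

print(influence_and_exploit(free_set={3}))    # seed a well-connected buyer for free
print(influence_and_exploit(free_set=set()))  # no free items, for comparison
```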
Q1: How good are Influence & Exploit strategies?

Theorem (Hartline, M., Sundararajan)
Given monotone submodular influence functions, Influence & Exploit strategies give constant-factor approximations to the optimal revenue.

Constant factor:
I 0.25 for general distributions,
I 0.306 for distributions satisfying a monotone hazard-rate condition,
I 0.33 for additive settings and uniform distributions,
I 0.66 for undirected graphs and uniform distributions,
I 0.94 for complete undirected graphs and uniform distributions.
Q2: How to find influential users?
I Question 2: How do we choose the set A of influential users who get the item for free, so as to maximize revenue?
I Let g(A) be the expected revenue if the initial set is A.
I For some small sets A, g(∅) ≤ g(A).
I If we give the item for free to all users, g(X) = 0.

Theorem (Hartline, M., Sundararajan)
Given monotone submodular influence functions, the revenue function g is a non-monotone, non-negative submodular function.
I Thus, to find a set A that maximizes the revenue g(A), we can use the 0.4-approximation local search algorithm of Feige, M., Vondrak.
Future Directions
I Marketing over Social Networks
  I Avoiding Price Discrimination (Fixed Price). Iterative Pricing with Positive Network Externalities (Akhlaghpour, Ghodsi, Haghpanah, Mahini, M., Nikzad).
  I Cascading Effect and Influence Propagation. Revenue Maximization for Fixed-Price Marketing with Influence Propagation (M., Sundararajan, Roch).
  I Learning vs. Marketing.
Thank You
Outline
I Submodularity.
I Maximizing non-monotone Submodular Functions.
I Application 1: Marketing over Social Networks.
I Application 2: Guaranteed Banner Ad Allocation.
Application 2: Guaranteed Banner Advertisement
Online Advertisement: $20 billion annual revenue!
Banner Advertisement: 22% of the current revenue.
Guaranteed Banner Advertisement:
I Each advertiser i
  I is interested in a set S_i of impressions (e.g., young women in Seattle),
  I bids b_i for each impression,
  I needs d_i impressions,
  I with a penalty αb_i for each unsatisfied unit (Guaranteed Delivery).
[Figure: bipartite graph between advertisers (S_i, b_i, d_i), with d_i = 4 in the example, and impressions.]
Goal: Choose a set T of advertisers to maximize revenue, f(T).
If we give q items to advertiser i, we get
    q·b_i − α·b_i·(d_i − q) = q(1 + α)b_i − d_i·α·b_i.
Guaranteed Banner Advertisement: Submodularity
If we give q items to advertiser i, we get
    q·b_i − α·b_i·(d_i − q) = q(1 + α)b_i − d_i·α·b_i = q(1 + α)b_i − c_i.
If we commit to a set T of advertisers:
    f(T) = P(T) − Σ_{i∈T} c_i = P(T) − C(T),
where P(T) is the value of a maximum weighted matching of impressions to advertisers in T, with each matched impression of advertiser i contributing (1 + α)b_i.
P is submodular ⇒ f is submodular, but it can be negative.
In fact, we prove that the problem is not approximable within any constant factor for any constant α.
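A sketch of computing f(T) = P(T) − C(T) exactly on a small instance (my own illustration, not the paper's algorithm). Each advertiser i in T is replicated d_i times, every eligible (copy, impression) pair gets weight (1 + α)b_i, and `scipy.optimize.linear_sum_assignment` solves the resulting assignment problem; zero-weight dummy columns let copies go unmatched.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def banner_revenue(T, advertisers, impressions, alpha):
    """f(T) = P(T) - C(T) for the guaranteed-delivery model (exact, small instances).

    advertisers: dict i -> (S_i, b_i, d_i). P(T) is the max-weight matching value
    obtained by replicating advertiser i into d_i unit-demand copies.
    """
    rows = []                                   # one row per advertiser copy
    for i in T:
        S, b, d = advertisers[i]
        rows.extend([(S, b)] * d)
    if not rows:
        return 0.0
    # Eligible (copy, impression) pairs earn (1 + alpha) * b_i; zero otherwise.
    # Extra zero-weight dummy columns let copies stay unmatched without cost.
    W = np.zeros((len(rows), len(impressions) + len(rows)))
    for r, (S, b) in enumerate(rows):
        for c, imp in enumerate(impressions):
            if imp in S:
                W[r, c] = (1 + alpha) * b
    rr, cc = linear_sum_assignment(W, maximize=True)
    P = W[rr, cc].sum()
    C = sum(alpha * advertisers[i][1] * advertisers[i][2] for i in T)   # sum of c_i
    return P - C

advertisers = {"x": ({"m1", "m2"}, 2.0, 2), "y": ({"m2", "m3"}, 3.0, 3)}
impressions = ["m1", "m2", "m3"]
print(banner_revenue({"x"}, advertisers, impressions, alpha=0.5))        # 4.0
print(banner_revenue({"x", "y"}, advertisers, impressions, alpha=0.5))   # 5.5
```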
Structural Approximation
Feige, Immorlica, M., Nazerzadeh [FIMN]
There are many natural submodular functions of the form
    (a submodular profit function) − (an additive cost function) = P(S) − C(S).
Examples:
I Maximum Facility Location Problem.
I Segmentation Problems.
I Guaranteed Banner Ad Problem.
These problems can take negative values and are not approximable within any multiplicative approximation factor.
Structural Approximation:
I The approximation factor is a function of the structure of the solution.
We design greedy and linear-programming-based tight structural approximation algorithms for the above problems.
Guaranteed Banner Ad Problem
Greedy Algorithm:
1. S = ∅.
2. At each step, add an advertiser i ∈ X \ S that maximizes
       (P(S ∪ {i}) − P(S) − c_i) / (P(S ∪ {i}) − P(S)),
   provided P(S ∪ {i}) − P(S) − c_i > 0.

LP-based Algorithm:
I Solve a configuration LP and round it:
    max  Σ_{i∈A, Q⊆S_i} X_i^Q · d_i·b_i · ((1 + α)(|Q|/d_i − ln(|Q|/d_i)) − α)
    s.t. Σ_{i∈A, Q⊆S_i: j∈Q} X_i^Q ≤ 1   ∀ j ∈ U
         Σ_{Q⊆S_i} X_i^Q ≤ 1             ∀ i ∈ A
         X_i^Q ≥ 0                        ∀ i ∈ A, ∀ Q ⊆ S_i

Theorem [Feige, Immorlica, M., Nazerzadeh]. The greedy algorithm and the LP-based algorithm achieve tight structural approximations for (capacitated) maximum facility location, segmentation problems, and the guaranteed banner advertisement problem.
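A sketch of the greedy rule above in Python (my own illustration; it assumes access to a profit oracle `P`, for example the matching-based one sketched earlier, and to the per-advertiser costs c_i = d_i·α·b_i — the toy additive `profits` oracle at the bottom is just for demonstration).

```python
def structural_greedy(P, costs, X):
    """Greedy for f(S) = P(S) - sum of c_i: repeatedly add the advertiser that
    maximizes (dP - c_i) / dP, where dP = P(S + i) - P(S), while dP - c_i > 0."""
    S = set()
    while True:
        best, best_ratio = None, 0.0
        for i in set(X) - S:
            dP = P(S | {i}) - P(S)
            if dP - costs[i] > 0:                 # only profitable additions
                ratio = (dP - costs[i]) / dP
                if ratio > best_ratio:
                    best, best_ratio = i, ratio
        if best is None:
            return S
        S.add(best)

# Toy profit oracle: P(S) just sums fixed per-advertiser values here;
# in the real problem P(S) is the max-weight matching value from the model above.
profits = {"x": 6.0, "y": 12.0, "z": 1.0}
costs = {"x": 2.0, "y": 6.5, "z": 3.0}
P = lambda S: sum(profits[i] for i in S)
print(structural_greedy(P, costs, ["x", "y", "z"]))   # {'x', 'y'}; z is unprofitable
```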
Future Directions
I Guaranteed Banner Ad Allocation
  I Uncertainty in Supply (impressions).
  I Uncertainty in Demand (advertisers).
  I Online Allocation Mechanisms.
  I Truthful Mechanisms.
[Diagram: research overview — Algorithmic & Economic Aspects of the Internet, spanning Internet Monetization (Stochastic Optimization, Linear Programming: Marketing over Social Networks, Guaranteed Banner Advertisement), Search & Large Networks (Refined Random Walks, Local Clustering: PageRank Contributions & Link Spam Detection, Trust-based Recommendation Systems, Overlapping Clustering for Distributed Computation), and Algorithmic Game Theory (Auctions, Mechanism Design).]
Algorithms for Search and Large Networks
I Local Computation of PageRank Contributions & Link Spam Detection.
I Axiomatic Approach to Trust-Based Recommendation Systems.
I Overlapping Clustering for Distributed Computation.
Thank You