Incorporating Diversity in a Learning to Rank Recommender System
Jacek Wasilewski and Neil Hurley
Insight Centre for Data Analytics, University College Dublin, Ireland
Recommender problem
If I watched
what should I watch next (that I will like)?
Recommender problem
FLAIRS-29 Incorporating Diversity in a Learning to Rank Recommender System 3
I watched: My recommendations:
Recommender problem
I watched: My recommendations:
Recommended movies are:
• well-known,
• very similar to what I have seen in the past,
• very similar to each other.
Beyond accuracy: the diversity problem
• High similarity among the recommended items might not satisfy users and generally leads to a poor user experience.
• A few reasons:
– recommendations that are too obvious and therefore of little help,
– the complexity of the user's needs not being captured by the system,
– uncertainty about the user's current needs.
• Introducing diversity addresses some of these aspects.
Beyond accuracy: the diversity problem
• Diversity – dissimilarity of the items being recommended.
• Intra-list diversity (ILD) – the average pairwise distance – measures the diversity of a recommendation list ℛ:

ILD(\mathcal{R}) = \frac{1}{|\mathcal{R}|(|\mathcal{R}|-1)} \sum_{i,j \in \mathcal{R},\, i \neq j} d(i,j)

• For the earlier example of Star Wars and Indiana Jones movies: ILD(ℛ) = 0.06.
• How can recommendations be diversified?
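ILD is straightforward to compute from a pairwise distance function. A minimal sketch; the 3×3 distance matrix below is a made-up illustration (e.g. one minus genre similarity), not data from the experiments:

```python
import numpy as np

def intra_list_diversity(items, dist):
    """Average pairwise distance d(i, j) over all ordered pairs i != j."""
    n = len(items)
    if n < 2:
        return 0.0
    total = sum(dist(i, j) for i in items for j in items if i != j)
    return total / (n * (n - 1))

# Hypothetical item-item distances: items 0 and 1 are near-duplicates,
# item 2 is unrelated to both.
D = np.array([[0.0, 0.1, 0.9],
              [0.1, 0.0, 0.8],
              [0.9, 0.8, 0.0]])

ild = intra_list_diversity([0, 1, 2], lambda i, j: D[i, j])
```

A list of sequels from one franchise would score close to 0, as in the Star Wars example above.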
Beyond accuracy: the diversity problem
• A common approach is result diversification / re-ranking: a candidate set 𝒞 is retrieved first, then a subset ℛ ⊆ 𝒞 is selected.
• The re-ranking objective is to find an optimal set ℛ:

\mathcal{R}^{*} = \operatorname{argmax}_{\mathcal{R} \subseteq \mathcal{C}} \; (1-\lambda)\,\mathrm{acc}(\mathcal{R}) + \lambda\,\mathrm{div}(\mathcal{R})

• We are interested in whether diverse recommendations can be achieved during learning, without re-ranking.
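In practice this objective is usually approximated greedily, building ℛ one item at a time. An MMR-style sketch of that idea (function and variable names are ours, not the exact re-ranker used in the experiments):

```python
def greedy_diversify(candidates, acc, dist, k, lam=0.5):
    """Greedily build a list of size k: at each step add the candidate that
    maximises (1 - lam) * acc(i) + lam * (avg distance to items picked so far)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(i):
            if not selected:
                return (1 - lam) * acc(i)
            div = sum(dist(i, j) for j in selected) / len(selected)
            return (1 - lam) * acc(i) + lam * div
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# With lam = 0 this degenerates to ranking purely by accuracy score:
scores = {0: 1.0, 1: 0.9, 2: 0.8}
top2 = greedy_diversify([0, 1, 2], scores.get, lambda i, j: 0.0, k=2, lam=0.0)
# top2 == [0, 1]
```

With λ = 1 the score of later picks is driven entirely by distance to the items already selected.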
Research goals
• Continue and explore the work of (Hurley, 2013) on incorporating diversity into a collaborative filtering algorithm.
• The main interest is focused on:
– a model-based matrix factorisation algorithm,
– the alternating least squares (ALS) factorisation technique,
– the personalised ranking objective RankALS (Takács & Tikk, 2012),
– incorporating diversity using regularisation.
Matrix factorisation & Learning to Rank
• Matrix factorisation approximates the rating matrix as R ≈ P × Qᵀ, with user vectors p_u and item vectors q_i.
• Learning for rating prediction:

\mathcal{L}(P,Q) = \sum_{u,i} (r_{ui} - p_u^{T} q_i)^2

• Learning for ranking:

\mathcal{L}(P,Q) = \sum_{u,i,j} \big( (r_{ui} - r_{uj}) - (p_u^{T} q_i - p_u^{T} q_j) \big)^2

• We will refer to this as the accuracy objective.
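The two objectives differ only in what the squared error is taken over: individual ratings versus within-user rating differences. A small NumPy sketch (names are ours), using a toy exact factorisation so both losses vanish:

```python
import numpy as np

def rating_loss(R, P, Q, observed):
    """Pointwise objective: sum over observed (u, i) of (r_ui - p_u^T q_i)^2."""
    return sum((R[u, i] - P[u] @ Q[i]) ** 2 for u, i in observed)

def ranking_loss(R, P, Q, triples):
    """Pairwise objective: sum over (u, i, j) of
    ((r_ui - r_uj) - (p_u^T q_i - p_u^T q_j))^2."""
    return sum(((R[u, i] - R[u, j]) - (P[u] @ (Q[i] - Q[j]))) ** 2
               for u, i, j in triples)

# With R = P Q^T exactly, both losses are zero:
P = np.array([[1.0, 0.0], [0.0, 1.0]])
Q = np.array([[1.0, 0.0], [0.0, 2.0]])
R = P @ Q.T
```

Note the ranking loss never touches an absolute predicted rating; only score differences within a user matter, which is what makes it a ranking objective.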
Learning to Rank with diversity
• In previous work, (Hurley, 2013) modified the accuracy objective to increase the diversity of recommendation sets.
• Our approach aims to incorporate diversity using regularisation: \mathcal{L}(P,Q) + \lambda\,\mathrm{reg}(P,Q)
• Regularisation has typically been used to control overfitting.
• Different types of regularisers have been proposed to incorporate side information, such as social networks, to support the recommendation and increase its accuracy.
Learning to Rank with diversity
• Social-network regulariser (Jamali & Ester, 2010), defined over pairs of similar users:

\mathrm{reg}(P,Q) = \sum_{u,v} \mathrm{sim}(u,v)\, \|p_u - p_v\|^2

• LapDQ diversity regulariser, the analogue on item factors, weighted by item distances d(i,j):

\mathrm{reg}(P,Q) = \sum_{i,j} d(i,j)\, \|q_i - q_j\|^2
• The three diversity regularisers considered:

– LapDQ: \mathrm{reg}(P,Q) = \sum_{i,j} d(i,j)\, \|q_i - q_j\|^2 = 2\Big( \sum_i d_i \|q_i\|^2 - \sum_{i,j} d(i,j)\, q_i^{T} q_j \Big), \quad d_i = \sum_j d(i,j)

– PLapDQ: \mathrm{reg}(P,Q) = \sum_{u,i,j} d(i,j)\, \big( p_u^{T} (q_i - q_j) \big)^2

– DQ: \mathrm{reg}(P,Q) = \sum_{i,j} d(i,j)\, q_i^{T} q_j
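The LapDQ rewrite is the standard graph-Laplacian identity: with L = diag(d_i) − D for a symmetric distance matrix D, the double sum equals 2·tr(QᵀLQ), a form that folds naturally into least-squares updates. A quick numerical check of the identity on random data (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k = 5, 3
Q = rng.normal(size=(n_items, k))   # item factor matrix
D = rng.random((n_items, n_items))
D = (D + D.T) / 2                   # symmetric pairwise distances
np.fill_diagonal(D, 0.0)            # d(i, i) = 0

# Direct double sum: sum_ij d(i, j) * ||q_i - q_j||^2
direct = sum(D[i, j] * np.sum((Q[i] - Q[j]) ** 2)
             for i in range(n_items) for j in range(n_items))

# Laplacian form: 2 * trace(Q^T L Q), L = diag(row sums of D) - D
L = np.diag(D.sum(axis=1)) - D
via_laplacian = 2.0 * np.trace(Q.T @ L @ Q)
```

The quadratic trace form is presumably what keeps the regulariser cheap to incorporate into the ALS normal equations.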
Experimental setup
• Datasets: Netflix, MovieLens 20m
• Recommendation algorithms:
– RankALS (baseline)
– Random (for reference)
• Diversification algorithms:
– Greedy re-ranker (MMR)
– Regularisation: DQ, LapDQ, PLapDQ
• Ranked lists of top 20.
• Accuracy metrics: Precision, Recall, nDCG
• Rank-aware diversity metrics (Vargas, 2011):
– EILD – intra-list diversity:

\mathrm{EILD}(\mathcal{R}) = \frac{1}{|\mathcal{R}|(|\mathcal{R}|-1)} \sum_{i,j \in \mathcal{R},\, i \neq j} \mathrm{disc}(k_i)\, d(i,j)

– EPD – profile distance:

\mathrm{EPD}(\mathcal{R}) = \frac{1}{|\mathcal{R}|\,|\mathcal{I}_u|} \sum_{j \in \mathcal{I}_u} \sum_{i \in \mathcal{R}} \mathrm{disc}(k_i)\, d(i,j)

Powered by the RankSys framework (http://ranksys.org). Source code: https://github.com/jacekwasilewski/RankSys-DivMF
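EILD simply adds a rank discount disc(k_i) to the ILD average, so a constant discount recovers plain ILD. A sketch (the logarithmic discount is our assumed default, one common choice; the metric definition allows any discount):

```python
import math

def eild(ranked, dist, disc=lambda k: 1.0 / math.log2(k + 2)):
    """Rank-aware intra-list diversity: average of disc(rank of i) * d(i, j)
    over ordered pairs i != j in the ranked list (ranks are 0-based here)."""
    n = len(ranked)
    if n < 2:
        return 0.0
    total = sum(disc(ki) * dist(i, j)
                for ki, i in enumerate(ranked)
                for j in ranked if j != i)
    return total / (n * (n - 1))

# Toy item-item distance matrix (hypothetical, for illustration):
D = [[0.0, 0.1, 0.9],
     [0.1, 0.0, 0.8],
     [0.9, 0.8, 0.0]]
```

The discount means diversity contributed by highly ranked items counts more, matching how users scan a top-20 list.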
Results
• The LapDQ regulariser dominates the other regularisers.
• Adding user information (PLapDQ) does not improve on the LapDQ regulariser.
[Scatter plots: EILD@20 vs. nDCG@20 for the LapDQ, DQ and PLapDQ regularisers on MovieLens 20m and Netflix, with random as a reference point.]
Results
• All regularisers improve diversity over the baseline; the best trade-off seems to be offered by LapDQ.
• Accuracy is sacrificed as diversity increases.
Netflix     nDCG    EILD    EPD
Random      0.0015  0.7885  0.7643
RankALS     0.1002  0.6367  0.6721
+LapDQ      0.0811  0.7354  0.7365
+DQ         0.0764  0.6872  0.7390
+PLapDQ     0.0826  0.6943  0.7385

MovieLens   nDCG    EILD    EPD
Random      0.0008  0.7505  0.7430
RankALS     0.0951  0.5935  0.6207
+LapDQ      0.0777  0.6829  0.7013
+DQ         0.0792  0.6479  0.6634
+PLapDQ     0.0816  0.6226  0.6471
• A greedy re-ranker outperforms the best regularisers in terms of the accuracy-diversity trade-off.
• A possible reason is that a global regulariser does not necessarily improve diversity for each individual user: the average diversity is optimised, and some users experience a decrease in diversity.
Results

Netflix     nDCG    EILD    EPD
Random      0.0015  0.7885  0.7643
RankALS     0.1002  0.6367  0.6721
+MMR        0.0959  0.7662  0.7164
+LapDQ      0.0811  0.7354  0.7365

MovieLens   nDCG    EILD    EPD
Random      0.0008  0.7505  0.7430
RankALS     0.0951  0.5935  0.6207
+MMR        0.0897  0.7336  0.6717
+LapDQ      0.0777  0.6829  0.7013
Summary
• We proposed a way of incorporating diversity into the training phase of a model-based algorithm.
• Diversity enhancement is achieved through regularisation.
• Experimental evaluation was performed on two datasets, using the evaluation framework proposed by (Vargas, 2011).
• Diversity regularisers improve the diversity of the recommendations; however, they do not outperform the re-ranking approach.
Thank you!
References
• Hurley, N. J. (2013). Personalised Ranking with Diversity. RecSys ’13.
• Vargas, S., & Castells, P. (2011). Rank and Relevance in Novelty and Diversity Metrics for Recommender Systems. RecSys ’11.
• Takács, G., & Tikk, D. (2012). Alternating Least Squares for Personalized Ranking. RecSys ’12.
• Jamali, M., & Ester, M. (2010). A matrix factorization technique with trust propagation for recommendation in social networks. RecSys ’10.