Recommendation Systems
Pawan Goyal
CSE, IITKGP
October 29-30, 2015
Pawan Goyal (IIT Kharagpur), Recommendation Systems, October 29-30, 2015
Recommendation System?
Recommendation in Social Web
Why use Recommender Systems?
Value for the customers
Find things that are interesting
Narrow down the set of choices
Discover new things
Entertainment ...
Value for the provider
Additional and unique personalized service for the customer
Increase trust and customer loyalty
Increase sales, click-through rates, conversion, etc.
Opportunity for promotion and persuasion
Obtain more knowledge about customers
Real-world check
Myths from industry
Amazon.com generates X percent of its sales through recommendation lists (X > 35%)
Netflix generates X percent of its sales through recommendation lists (X > 30%)
There must be some value in it
See recommendations of groups, jobs or people on LinkedIn
Friend recommendation and ad personalization on Facebook
Song recommendation at last.fm
News recommendation at Forbes.com (+37% CTR)
Recommender Systems as a function
What is given?
User model: ratings, preferences, demographics, situational context
Items: with or without descriptions of item characteristics
Find
Relevance score: used for ranking
Final goal
Recommend items that are assumed to be relevant
But
Remember that relevance might be context-dependent
Characteristics of the list might be important (diversity)
Paradigms of Recommender Systems
Comparison across the paradigms
Collaborative Filtering (CF)
The most prominent approach to generate recommendations
Used by large, commercial e-commerce sites
Well understood; various algorithms and variations exist
Applicable in many domains (books, movies, ...)
Approach
Use the “wisdom of the crowd” to recommend items
Basic assumption and idea
Users give ratings to catalog items (implicitly/explicitly)
Customers who had similar tastes in the past are likely to have similar tastes in the future
User-based Collaborative Filtering
Given an active user Alice and an item i not yet seen by Alice
The goal is to estimate Alice’s rating for this item, e.g., by:
Finding a set of users who liked the same items as Alice in the past and who have rated item i
Using, e.g., the average of their ratings to predict whether Alice will like item i
Doing this for all items Alice has not seen and recommending the best-rated ones
User-based Collaborative Filtering
Some first questions
How do we measure similarity?
How many neighbors should we consider?
How do we generate a prediction from the neighbors’ ratings?
Popular similarity model
Pearson Correlation

sim(a,b) = \frac{\sum_{p \in P} (r_{a,p} - \bar{r}_a)(r_{b,p} - \bar{r}_b)}{\sqrt{\sum_{p \in P} (r_{a,p} - \bar{r}_a)^2} \sqrt{\sum_{p \in P} (r_{b,p} - \bar{r}_b)^2}}

a, b: users
r_{a,p}: rating of user a for item p
P: set of items rated by both a and b
\bar{r}_a, \bar{r}_b: the users’ average ratings
Possible similarity values lie between -1 and 1
For the example considered:
sim(Alice, User1) = 0.85
sim(Alice, User4) = -0.79
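As a sketch, the Pearson correlation above can be computed directly from two users’ rating dictionaries. The ratings below are hypothetical (the slide’s example table is not reproduced here), and, following one common convention, the averages are taken over the co-rated items only:

```python
import math

def pearson_sim(ratings_a, ratings_b):
    """Pearson correlation sim(a, b) over the items P rated by both users.

    ratings_a, ratings_b: dicts mapping item -> rating.
    Note: here the user averages are computed over the co-rated items P;
    another common variant averages over all of a user's ratings.
    """
    common = sorted(set(ratings_a) & set(ratings_b))
    if not common:
        return 0.0
    mean_a = sum(ratings_a[p] for p in common) / len(common)
    mean_b = sum(ratings_b[p] for p in common) / len(common)
    num = sum((ratings_a[p] - mean_a) * (ratings_b[p] - mean_b) for p in common)
    den = (math.sqrt(sum((ratings_a[p] - mean_a) ** 2 for p in common))
           * math.sqrt(sum((ratings_b[p] - mean_b) ** 2 for p in common)))
    return num / den if den else 0.0

# Hypothetical ratings in the style of the lecture's running example
alice = {"Item1": 5, "Item2": 3, "Item3": 4, "Item4": 4}
user1 = {"Item1": 3, "Item2": 1, "Item3": 2, "Item4": 3, "Item5": 3}
sim_alice_user1 = pearson_sim(alice, user1)
```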
Pearson Correlation
Takes differences in rating behavior into account
Works well in the usual domains
Making Predictions
A common prediction function:

pred(a,p) = \bar{r}_a + \frac{\sum_{b \in N} sim(a,b) \cdot (r_{b,p} - \bar{r}_b)}{\sum_{b \in N} sim(a,b)}

Calculate whether the neighbors’ ratings for the unseen item i are higher or lower than their average
Combine the rating differences, using the similarity as a weight
Add/subtract the neighbors’ aggregated bias from the active user’s average and use this as a prediction
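A minimal sketch of this prediction function, with the neighborhood passed in as precomputed (similarity, rating, average) triples; the numbers are hypothetical:

```python
def predict_rating(active_avg, neighbors):
    """pred(a, p) = r̄_a + Σ_b sim(a,b)·(r_{b,p} − r̄_b) / Σ_b sim(a,b).

    active_avg: the active user's average rating r̄_a.
    neighbors: list of (sim, rating_for_p, neighbor_avg) triples for the
    neighborhood N. As on the slide, the raw similarities are summed in
    the denominator; implementations often use |sim| instead.
    """
    den = sum(sim for sim, _, _ in neighbors)
    if den == 0:
        return active_avg
    num = sum(sim * (r - avg) for sim, r, avg in neighbors)
    return active_avg + num / den

# One maximally similar neighbor who rated p two stars above their own average
print(predict_rating(4.0, [(1.0, 5.0, 3.0)]))  # → 6.0
```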
Item-based Collaborative Filtering
Basic idea
Use the similarity between items to make predictions
For instance
Look for items that are similar to Item5
Take Alice’s ratings for these items to predict the rating for Item5
Similarity Measure
Ratings are seen as vectors in n-dimensional space
Similarity is calculated based on the angle between the vectors

sim(\vec{a},\vec{b}) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}| \cdot |\vec{b}|}

Adjusted cosine similarity: take the average user ratings into account

sim(a,b) = \frac{\sum_{u \in U} (r_{u,a} - \bar{r}_u)(r_{u,b} - \bar{r}_u)}{\sqrt{\sum_{u \in U} (r_{u,a} - \bar{r}_u)^2} \sqrt{\sum_{u \in U} (r_{u,b} - \bar{r}_u)^2}}
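A sketch of the adjusted cosine similarity with NumPy, using np.nan for missing ratings; the matrix below is made up for illustration:

```python
import numpy as np

def adjusted_cosine(R, a, b):
    """Adjusted cosine similarity between item columns a and b of R.

    R: users x items matrix with np.nan for missing ratings. Each user's
    mean is taken over the items that user rated; the sums run over the
    users who rated both a and b.
    """
    user_means = np.nanmean(R, axis=1)
    both = ~np.isnan(R[:, a]) & ~np.isnan(R[:, b])
    da = R[both, a] - user_means[both]
    db = R[both, b] - user_means[both]
    den = np.sqrt((da ** 2).sum()) * np.sqrt((db ** 2).sum())
    return float((da * db).sum() / den) if den else 0.0

# Three users, two items; users who like item 0 dislike item 1 here
R = np.array([[5.0, 3.0],
              [4.0, 2.0],
              [1.0, 5.0]])
```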
Pre-processing for Item-based filtering
Calculate all pairwise item similarities in advance
The neighborhood used at run-time is typically rather small, because only items the user has already rated are taken into account
Item similarities are supposed to be more stable than user similarities
More on ratings
Pure CF-based systems rely only on the rating matrix
Explicit ratings
Most commonly used (1 to 5 or 1 to 10 response scales)
Research topic: what about multi-dimensional ratings?
Challenge: sparse rating matrices; how to stimulate users to rate more items?
Implicit ratings
Clicks, page views, time spent on some page, demo downloads, ...
Can be used in addition to explicit ones; raises the question of correctness of interpretation
Data sparsity problems
Cold start problems
How to recommend new items? What to recommend to new users?
Straightforward approach
Use another method (e.g., content-based, demographic or simply non-personalized) in the initial phase
Alternatives
Use better algorithms (beyond nearest-neighbor approaches)
Example: assume “transitivity” of neighborhoods
Example algorithms for sparse datasets
Recursive CF
Assume there is a very close neighbor n of u who, however, has not rated the target item i yet
Apply the CF method recursively and predict a rating for item i for the neighbor n
Use this predicted rating instead of the rating of a more distant direct neighbor
Example algorithms for sparse datasets
Graph-based methods: spreading activation
Idea: use paths of lengths 3 and 5 to recommend items
Length 3: recommend Item3 to User1
Length 5: Item1 also becomes recommendable
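A toy sketch of the length-3 case on the bipartite user-item rating graph. The user-item sets below mirror the naming on the slide but are otherwise made up; a path user → item → user → item of length 3 reaches items rated by users who share an item with the target user:

```python
def path3_recommendations(user_items, target):
    """Items reachable from `target` via a length-3 path in the bipartite
    rating graph: target -> rated item -> co-rating user -> that user's
    other items (excluding items the target already rated)."""
    rated = user_items.get(target, set())
    recs = set()
    for other, items in user_items.items():
        if other != target and rated & items:  # length-2 path to `other`
            recs |= items - rated              # third edge reaches a new item
    return recs

# Hypothetical graph in the spirit of the slide's example
user_items = {
    "User1": {"Item1", "Item2"},
    "User2": {"Item2", "Item3"},
}
print(path3_recommendations(user_items, "User1"))  # → {'Item3'}
```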
Matrix Factorization Methods
Shown to be superior to the classic nearest-neighbor techniques for product recommendations
Allow the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels
User-oriented neighborhood method
Latent Factor Approach
Matrix Factorization Methods
Basic idea
Both users and items are characterized by vectors of factors, inferred from item rating patterns
High correspondence between item and user factors leads to a recommendation
Using Singular Value Decomposition
Let M be the matrix of user-item interactions
Use SVD to get a rank-k approximation

M_k = U_k \times \Sigma_k \times V_k^T

Prediction: \hat{r}_{ui} = \bar{r}_u + U_k(u) \times \Sigma_k \times V_k^T(i)
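A sketch of this with NumPy. Since plain SVD needs a complete matrix, the missing entries are assumed to have been pre-filled (e.g., with user means) before the decomposition; the matrix below is hypothetical:

```python
import numpy as np

def svd_predict(R, k):
    """Rank-k SVD prediction: r̂_ui = r̄_u + [U_k Σ_k V_kᵀ]_ui.

    R: complete users x items rating matrix (missing entries pre-filled).
    The matrix is mean-centered per user before the decomposition,
    matching the r̄_u term in the prediction formula.
    """
    user_means = R.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(R - user_means, full_matrices=False)
    return user_means + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

R = np.array([[5.0, 3.0, 4.0],
              [3.0, 1.0, 2.0],
              [4.0, 3.0, 5.0]])
R2 = svd_predict(R, k=2)  # rank-2 approximation of the centered matrix
```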
SVD: Example
Using Singular Value Decomposition
The problem, however, is the high proportion of missing values
Using only the relatively few known entries may lead to overfitting
A Basic Matrix Factorization Model
Both users and items are mapped to a joint latent factor space of dimensionality f
User-item interactions are modeled as inner products in that space
Each item i is associated with a vector q_i ∈ R^f, and each user u with a vector p_u ∈ R^f
q_i measures the extent to which the item possesses the factors, positive or negative
p_u measures the extent of interest the user has in items that are high on the corresponding factors, positive or negative
q_i^T p_u captures the interaction between user u and item i
This approximates user u’s rating of item i, denoted by r_{ui}:

\hat{r}_{ui} = q_i^T p_u
A Basic Matrix Factorization Model
Major challenge
Computing the mapping of each item and user to factor vectors q_i, p_u ∈ R^f
The learning problem
To learn the factor vectors p_u and q_i, the system minimizes the regularized squared error on the set of known ratings:

\min_{p*,q*} \sum_{(u,i) \in K} (r_{ui} - q_i^T p_u)^2 + \lambda (||q_i||^2 + ||p_u||^2)

where K is the set of (u,i) pairs for which r_{ui} is known.
Stochastic Gradient Descent

\min_{p*,q*} \sum_{(u,i) \in K} (r_{ui} - q_i^T p_u)^2 + \lambda (||q_i||^2 + ||p_u||^2)

Let e_{ui} = r_{ui} - q_i^T p_u
The gradient descent updates can be written as:

q_i \leftarrow q_i + \gamma (e_{ui} p_u - \lambda q_i)
p_u \leftarrow p_u + \gamma (e_{ui} q_i - \lambda p_u)
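The update rules above can be sketched as a small training loop (NumPy). The hyperparameters and toy ratings are illustrative, not from the lecture:

```python
import numpy as np

def train_mf(ratings, n_users, n_items, f=2, gamma=0.05, lam=0.02,
             epochs=1000, seed=0):
    """SGD for r̂_ui = q_iᵀ p_u with the update rules
        e_ui = r_ui − q_iᵀ p_u
        q_i ← q_i + γ (e_ui · p_u − λ q_i)
        p_u ← p_u + γ (e_ui · q_i − λ p_u)
    ratings: iterable of (user_index, item_index, rating) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, f))
    Q = 0.1 * rng.standard_normal((n_items, f))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - Q[i] @ P[u]
            # copy so both updates use the pre-update vectors
            qi, pu = Q[i].copy(), P[u].copy()
            Q[i] += gamma * (e * pu - lam * qi)
            P[u] += gamma * (e * qi - lam * pu)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 2.0)]
P, Q = train_mf(ratings, n_users=2, n_items=2)
mse = np.mean([(r - Q[i] @ P[u]) ** 2 for u, i, r in ratings])
```

Copying q_i and p_u before the updates keeps the two rules simultaneous, as written on the slide; many implementations apply them sequentially, which works in practice too.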
Modifying the basic approach: Adding Biases
Matrix factorization is quite flexible in dealing with various data aspects and other application-specific requirements.
Adding biases
Some users might always give higher ratings than others; some items are widely perceived as better than others.
The full rating value may not be explained solely by q_i^T p_u
Identify the portion that individual user or item biases can explain:

b_{ui} = \mu + b_i + b_u

\mu is the overall average rating; b_u and b_i indicate the observed deviations of user u and item i, respectively, from the average
Adding Biases
An example
You want a first-order estimate for user Joe’s rating of the movie Titanic.
Let the average rating over all movies, \mu, be 3.7 stars
Titanic tends to be rated 0.5 stars above the average
Joe is a critical user, who tends to rate 0.3 stars lower than the average
Thus, the estimate (bias) for Titanic’s rating by Joe would be 3.7 + 0.5 - 0.3 = 3.9 stars
Modifying the original approach
Biases modify the interaction equation as:

\hat{r}_{ui} = \mu + b_i + b_u + q_i^T p_u

Four components: global average, item bias, user bias, user-item interaction
The squared error function:

\min_{p*,q*,b*} \sum_{(u,i) \in K} (r_{ui} - \mu - b_i - b_u - q_i^T p_u)^2 + \lambda (||q_i||^2 + ||p_u||^2 + b_u^2 + b_i^2)
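A sketch of SGD for this biased model. The bias updates b ← b + γ(e − λb) are not on the slide but follow from differentiating the objective; the data and hyperparameters are illustrative:

```python
import numpy as np

def train_biased_mf(ratings, n_users, n_items, f=2, gamma=0.05, lam=0.02,
                    epochs=1000, seed=0):
    """SGD for r̂_ui = μ + b_i + b_u + q_iᵀ p_u, minimizing the regularized
    squared error over the known ratings."""
    rng = np.random.default_rng(seed)
    mu = sum(r for _, _, r in ratings) / len(ratings)  # global average
    bu, bi = np.zeros(n_users), np.zeros(n_items)
    P = 0.1 * rng.standard_normal((n_users, f))
    Q = 0.1 * rng.standard_normal((n_items, f))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - (mu + bi[i] + bu[u] + Q[i] @ P[u])
            bu[u] += gamma * (e - lam * bu[u])
            bi[i] += gamma * (e - lam * bi[i])
            qi, pu = Q[i].copy(), P[u].copy()
            Q[i] += gamma * (e * pu - lam * qi)
            P[u] += gamma * (e * qi - lam * pu)
    return mu, bu, bi, P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 2.0)]
mu, bu, bi, P, Q = train_biased_mf(ratings, n_users=2, n_items=2)
pred = lambda u, i: mu + bi[i] + bu[u] + Q[i] @ P[u]
```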
Additional Input Sources
Many users supply very few ratings
This makes it difficult to reach general conclusions about their taste
Incorporate additional sources of information about the users
E.g., gather implicit feedback; use purchase or browsing history to learn their tendencies
Modeling Implicit Feedback
Boolean implicit feedback
N(u): set of items for which user u expressed an implicit preference
Let item i be associated with x_i ∈ R^f (x_i is distinct from q_i)
The user can then be characterized by the vector:

\sum_{i \in N(u)} x_i

Normalizing the sum:

|N(u)|^{-1/2} \sum_{i \in N(u)} x_i
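A small sketch of the normalized sum, with a hypothetical matrix X whose rows are the implicit-feedback factor vectors x_i:

```python
import numpy as np

def implicit_profile(X, preferred_items):
    """|N(u)|^{-1/2} · Σ_{i∈N(u)} x_i for a user with implicit preferences N(u).

    X: items x f matrix; row i is the implicit factor vector x_i
    (distinct from the rating factor q_i)."""
    idx = sorted(preferred_items)
    if not idx:
        return np.zeros(X.shape[1])
    return X[idx].sum(axis=0) / np.sqrt(len(idx))

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
profile = implicit_profile(X, {0, 1})  # user implicitly preferred items 0 and 1
```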
Modeling Demographics
Consider boolean attributes, where user u corresponds to a set of attributes A(u)
These attributes can describe gender, age group, ZIP code, income level, etc.
Let a feature vector y_a ∈ R^f correspond to each attribute, describing a user through this set as:

\sum_{a \in A(u)} y_a

Integrating the enhanced user representation into the matrix factorization model:

\hat{r}_{ui} = \mu + b_i + b_u + q_i^T \left[ p_u + |N(u)|^{-1/2} \sum_{i \in N(u)} x_i + \sum_{a \in A(u)} y_a \right]
Adding Temporal Dynamics
In reality, product perception and popularity constantly change as new selections emerge
Customers’ inclinations evolve, leading them to redefine their taste
The system should account for temporal effects, reflecting the dynamic, time-drifting nature of user-item interactions
Terms that can vary over time: item biases, b_i(t); user biases, b_u(t); user preferences, p_u(t)
This can be integrated into the matrix factorization model as:

\hat{r}_{ui}(t) = \mu + b_i(t) + b_u(t) + q_i^T p_u(t)
Recommendation in Social Networks
Effects in Social Networks
Social influence
Ratings are influenced by the ratings of friends, i.e., friends are more likely to have similar ratings than strangers
Benefits
Can deal with cold-start users, as long as they are connected to the social network
Exploit social influence, correlational influence, and transitivity
Memory Based Approaches
Explore the network to find raters in the neighborhood of the target user
Aggregate the ratings of these raters to predict the rating of the target user
Different methods exist to calculate the “trusted neighborhood” of users
TidalTrust; Golbeck (2005)
Modified breadth-first search in the network
Consider all raters v at the shortest distance from the target user u
Trust between u and v:

t_{u,v} = \frac{\sum_{w \in N_u} t_{u,w} \cdot t_{w,v}}{\sum_{w \in N_u} t_{u,w}}

where N_u denotes the set of (direct) neighbors (friends) of u
Trust depends on all connecting paths
Trust between direct neighbors
Can be based on profile similarity or on a value provided by the users themselves.
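A depth-limited recursive sketch of the trust formula. TidalTrust proper uses a modified breadth-first search over shortest paths; this simplification only bounds the recursion depth, and the trust network below is made up:

```python
def propagate_trust(direct, u, v, max_depth=3):
    """t_{u,v} = Σ_{w∈N_u} t_{u,w}·t_{w,v} / Σ_{w∈N_u} t_{u,w}.

    direct: dict of dicts holding direct trust values t_{u,w}.
    Falls back to 0.0 when the depth budget runs out or u has no
    neighbors (a simplification of TidalTrust's modified BFS)."""
    neighbors = direct.get(u, {})
    if v in neighbors:
        return neighbors[v]
    if max_depth == 0 or not neighbors:
        return 0.0
    num = sum(t_uw * propagate_trust(direct, w, v, max_depth - 1)
              for w, t_uw in neighbors.items())
    den = sum(neighbors.values())
    return num / den if den else 0.0

direct = {
    "u": {"w1": 0.8, "w2": 0.4},
    "w1": {"v": 1.0},
    "w2": {"v": 0.5},
}
t_uv = propagate_trust(direct, "u", "v")
```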
TidalTrust
Predicted rating

\hat{r}_{u,i} = \frac{\sum_{v \in raters} t_{u,v} \cdot r_{v,i}}{\sum_{v \in raters} t_{u,v}}

r_{v,i} denotes the rating of user v for item i
Shortest distance?
Efficient
Taking a short distance gives high precision but low recall
One can consider raters up to a maximum depth d: a trade-off between precision (and efficiency) and recall
TrustWalker
How far to explore the network? A trade-off between precision and coverage
Instead of far neighbors who have rated the target item, use near neighbors who have rated similar items
Random Walk Starting from a Target User u_0
At step k, at node u:
If u has rated i, return r_{u,i}; otherwise
With probability φ_{u,i,k}, stop the random walk, randomly select an item j rated by u and return r_{u,j}
With probability 1 − φ_{u,i,k}, continue the random walk to a direct neighbor of u
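One such walk can be sketched as follows. The `graph`/`ratings` layouts and the six-step cap are assumptions for illustration; φ is taken as a given function:

```python
import random

def trust_walk(graph, ratings, u0, i, phi, max_depth=6):
    """Perform one TrustWalker random walk for target user u0 and item i.
    graph[u]: list of u's direct neighbors; ratings[u]: {item: rating};
    phi(u, i, k): probability of stopping at u at step k."""
    u, k = u0, 0
    while k <= max_depth:
        if i in ratings.get(u, {}):            # u has rated the target item
            return ratings[u][i]
        if random.random() < phi(u, i, k):     # stop: return the rating of a random item of u
            items = list(ratings.get(u, {}))
            if items:
                return ratings[u][random.choice(items)]
        if not graph.get(u):                   # dead end: the walk returns nothing
            return None
        u = random.choice(graph[u])            # continue to a random direct neighbor
        k += 1
    return None                                # walked too far: give up
```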
Selecting φ_{u,i,k}
φ_{u,i,k} gives the probability of staying at u to select one of its items at step k, while we are looking for a prediction on target item i
This probability should be related to the similarities of the items rated by u and the target item i; consider the maximum similarity
The deeper we go into the network, the probability of continuing the random walk should decrease, so φ_{u,i,k} should increase with k
φ_{u,i,k} = max_{j∈RI_u} sim(i, j) × 1 / (1 + e^{−k/2})
where RI_u denotes the set of items rated by user u
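The stopping probability combines the best item similarity with a sigmoid in the step count; a sketch, with `sim` assumed to be supplied:

```python
import math

def stop_probability(i, rated_items, k, sim):
    """phi_{u,i,k} = max_{j in RI_u} sim(i, j) * 1 / (1 + exp(-k/2)).
    rated_items: the items RI_u rated by the current user u."""
    if not rated_items:
        return 0.0
    sigmoid = 1.0 / (1.0 + math.exp(-k / 2.0))  # grows with depth k
    return max(sim(i, j) for j in rated_items) * sigmoid

# With a perfectly similar item and k = 0 the sigmoid factor is exactly 1/2:
print(stop_probability("i", ["j"], 0, lambda a, b: 1.0))  # 0.5
```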
Selecting φ_{u,i,k}
Selecting sim(i, j)
Let UC_{i,j} be the set of common users who have rated both items i and j. We can define the correlation between items i and j as:
corr(i, j) = ∑_{u∈UC_{i,j}} (r_{u,i} − r̄_u)(r_{u,j} − r̄_u) / ( √(∑_{u∈UC_{i,j}} (r_{u,i} − r̄_u)²) √(∑_{u∈UC_{i,j}} (r_{u,j} − r̄_u)²) )
Taking the effect of common users: the size of the set of common users is also important. For the same value of corr(i, j), if the number of common users, |UC_{i,j}|, is higher, the similarity should be higher
sim(i, j) = 1 / (1 + e^{−|UC_{i,j}|/2}) × corr(i, j)
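Putting the two formulas together, sim(i, j) is the Pearson correlation damped by a sigmoid in the number of common raters; a sketch (the per-item rating dicts and the user-mean dict are assumed inputs):

```python
import math

def item_similarity(ratings_i, ratings_j, user_mean):
    """sim(i,j) = corr(i,j) / (1 + exp(-|UC|/2)) over the common raters UC.
    ratings_i / ratings_j: {user: rating}; user_mean: {user: mean rating}."""
    common = set(ratings_i) & set(ratings_j)            # UC_{i,j}
    num = sum((ratings_i[u] - user_mean[u]) * (ratings_j[u] - user_mean[u])
              for u in common)
    den_i = math.sqrt(sum((ratings_i[u] - user_mean[u]) ** 2 for u in common))
    den_j = math.sqrt(sum((ratings_j[u] - user_mean[u]) ** 2 for u in common))
    if den_i == 0.0 or den_j == 0.0:
        return 0.0
    corr = num / (den_i * den_j)
    return corr / (1.0 + math.exp(-len(common) / 2.0))  # damp by |UC_{i,j}|
```

For perfectly correlated items with two common raters, the result is 1/(1 + e^{-1}) ≈ 0.73 rather than 1: the damping reflects the limited evidence.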
When does a random walk terminate?
Three alternatives:
Reaching a node which has expressed a rating on the target item i
At some user node u, deciding to stay at the node, selecting one of the items rated by u and returning the rating for that item as the result of the random walk
The random walk might continue forever, so terminate when it is very far (k > max-depth). What value of k? "Six degrees of separation"
How to recommend a rating?
Perform several random walks, as described before; the aggregation of all ratings returned by the different random walks is taken as the predicted rating r̂_{u_0,i}.
Estimated rating for source user u_0 on target item i:
r̂_{u_0,i} = ∑_{(v,j) | R_{v,j}} P(XY_{u_0,i} = (v, j)) r_{v,j}
XY_{u_0,i} is the random variable for stopping the random walk at node v and selecting item j rated by v
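The expectation above can be approximated by simply averaging many independent walks; a self-contained Monte-Carlo sketch (data layouts, walk count, and depth cap are assumptions as before):

```python
import random

def predict_rating(graph, ratings, u0, i, phi, n_walks=2000, max_depth=6, seed=0):
    """Estimate r_hat(u0, i) as the average of the ratings returned by many
    random walks, approximating sum_{(v,j)} P(XY = (v,j)) * r(v,j)."""
    rng = random.Random(seed)
    returned = []
    for _ in range(n_walks):
        u, k = u0, 0
        while k <= max_depth:
            if i in ratings.get(u, {}):                # v has rated i: stop here
                returned.append(ratings[u][i])
                break
            if ratings.get(u) and rng.random() < phi(u, i, k):
                j = rng.choice(list(ratings[u]))       # stay and pick a rated item j
                returned.append(ratings[u][j])
                break
            if not graph.get(u):                       # dead end: walk returns nothing
                break
            u = rng.choice(graph[u])
            k += 1
    return sum(returned) / len(returned) if returned else None
```

With φ ≡ 0 and a single path to a rater of i, every walk returns that rater's value, so the estimate is exact.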
Social Matrix Factorization
Intuition: can we incorporate the social information in matrix factorization methods?
Recollect the matrix factorization problem
min_{p*,q*} ∑_{(u,i)∈K} (r_{ui} − r̂_{ui})² + λ(||q_i||² + ||p_u||²)
where r_{ui} is the actual rating given by user u to item i and r̂_{ui} approximates user u's rating of item i, the simplest expression being q_i^T p_u, though other biases can also be incorporated.
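A stochastic-gradient sketch of this objective with r̂_{ui} = q_i^T p_u (the dict-of-vectors layout is an assumption; a real implementation would use factor matrices):

```python
import numpy as np

def sgd_epoch(p, q, ratings, lr=0.01, lam=0.1):
    """One SGD pass over the observed ratings K for
    sum_{(u,i) in K} (r_ui - q_i . p_u)^2 + lam * (||q_i||^2 + ||p_u||^2)."""
    for (u, i), r in ratings.items():
        pu, qi = p[u], q[i]
        err = r - qi @ pu
        p[u] = pu + lr * (err * qi - lam * pu)   # gradient step for the user factor
        q[i] = qi + lr * (err * pu - lam * qi)   # gradient step for the item factor

# Tiny demo: one user, one item, one observed rating.
rng = np.random.default_rng(0)
p = {0: rng.normal(size=2)}
q = {0: rng.normal(size=2)}
ratings = {(0, 0): 4.0}
before = (4.0 - q[0] @ p[0]) ** 2
for _ in range(200):
    sgd_epoch(p, q, ratings)
after = (4.0 - q[0] @ p[0]) ** 2   # the squared error shrinks with training
```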
Social Matrix Factorization
Basic Idea: neighbors in the social network may have similar interests.
Incorporating social factors
Let the social network information be represented by a matrix S ∈ R^{u_0×u_0}, where u_0 is the number of users.
S_{u,v} ∈ (0, 1] denotes the directed and weighted social relationship of user u with user v
Each row of the social matrix S is normalized to 1, resulting in the new matrix S*, such that ∑_v S*_{u,v} = 1 for each user u
Modified objective function
min_{p*,q*} ∑_{(u,i)∈K} (r_{ui} − r̂_{ui})² + β ∑_u ||p_u − ∑_v S*_{u,v} p_v||² + λ(||q_i||² + ||p_u||²)
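The social term penalizes the distance between each user's factor vector and the trust-weighted average of their neighbors' factors. A NumPy sketch (a dense S is used for illustration, and every user is assumed to have at least one neighbor so that row sums are nonzero):

```python
import numpy as np

def social_regularizer(P, S):
    """beta-term: sum_u || p_u - sum_v S*[u,v] p_v ||^2, where S* is S with
    each row normalized to sum to 1. P: (n_users, k) user-factor matrix."""
    S_star = S / S.sum(axis=1, keepdims=True)   # row-normalize the trust matrix
    D = P - S_star @ P                          # deviation from neighbors' weighted mean
    return float((D * D).sum())

P = np.array([[1.0, 0.0], [0.0, 1.0]])
S = np.array([[0.0, 1.0], [1.0, 0.0]])          # each user fully trusts the other
print(social_regularizer(P, S))                 # 4.0: the two factor vectors disagree
```

When the two users' factors are identical the term vanishes, which is exactly the behavior the regularizer is meant to encourage.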
Circle-based Social Recommendation
Basic Idea: a user may trust different subsets of friends regarding different domains.
Inferring circles based on categories
Circle-based Social Recommendation
v is in the inferred circle c of u iff u connects to v and both are interested in the category c.
Example Categories:
Videos and DVDs
Books
Music
Toys
Software
Cars
...
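Circle inference follows directly from the definition above; a sketch, where the edge list and per-user interest sets are hypothetical inputs:

```python
def infer_circles(edges, interests):
    """v is in circle c of u iff (u, v) is a directed edge and both u and v
    are interested in category c. Returns {(u, c): set of circle members}."""
    circles = {}
    for u, v in edges:
        # categories shared by both endpoints define the circles v joins
        for c in interests.get(u, set()) & interests.get(v, set()):
            circles.setdefault((u, c), set()).add(v)
    return circles

edges = [("u", "v"), ("u", "w")]
interests = {"u": {"Books", "Music"}, "v": {"Books"}, "w": {"Books", "Music"}}
circles = infer_circles(edges, interests)
# circles[("u", "Books")] contains both v and w; circles[("u", "Music")] only w
```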
Circle-based Social Recommendation
Using the normalized trust matrix S^{(c)*}, a separate matrix factorization model is trained for each category c.
Modified objective function
L^{(c)}(r^{(c)}, q^{(c)}, p^{(c)}, S^{(c)}) = min_{p*,q*} ∑_{(u,i)∈K} (r^{(c)}_{ui} − r̂^{(c)}_{ui})² + β ∑_u ||p^{(c)}_u − ∑_v S^{(c)*}_{u,v} p^{(c)}_v||² + λ(||q^{(c)}_i||² + ||p^{(c)}_u||²)
Class Problem
Consider the following ratings provided by 5 users, Alice and User1–User4, to 5 items, Item1 to Item5.
Assume that there is an underlying social network between these 5 users, given by the following adjacency list. The network is directed.
(Alice, User1), (Alice, User2), (Alice, User3)
(User1, User3), (User1, User4)
(User2, User3), (User2, User1)
(User3, User4), (User3, User2)
(User4, User3)
Also, assume that the ratings given by the users to the various items are the same as in the above matrix, except that we no longer have the ratings provided by User1 and User2 to Item5. Suppose you are using the TrustWalker method to predict the rating of Item5 by the user 'Alice'. Assuming that at each step you can choose any of the direct neighbors with equal probability, find out the probability that the random walk will continue for more than 1 step.