
Lecture 5: How to make the Social Web Personalized? (VU Amsterdam Social Web Course)

Transcript

Social Web 2015

Lecture 5: Personalization on the Social Web(some slides adopted from Fabian Abel)

Lora Aroyo
The Network Institute

VU University Amsterdam

theory & techniques for how to design & evaluate recommenders & user models to use in Social Web applications


Fig. 1 Functional model of tasks and sub-tasks specifically suited for SASs (Ilaria Torre, 2009)


Kevin Kelly

How to infer & represent user information that supports a given application or context?

User Modeling


•  Application has to obtain, understand & exploit information about the user

•  Information (need & context) about user

•  Inferring information about user & representing it so that it can be consumed by the application

•  Data relevant for inferring information about user

User Modeling Challenge


•  People leave traces on the Web and on their computers:
  •  Usage data, e.g., query logs, click-through data
  •  Social data, e.g., tags, (micro-)blog posts, comments, bookmarks, friend connections
  •  Documents, e.g., pictures, videos
  •  Personal data, e.g., affiliations, locations
  •  Products, applications, services: bought, used, installed

•  Not only a user's behavior, but also interactions of other users:
  •  "people can make statements about me"
  •  "people who are similar to me can reveal information about me"
  •  "social learning" → collaborative recommender systems

User & Usage Data is Everywhere


•  User Profile = data structure = a characterization of a user at a particular moment; it represents what, from a given (system) perspective, there is to know about a user. The data in the profile can be explicitly given by the user or derived by the system

•  User Model = the definitions & rules for interpreting observations about the user & for translating that interpretation into the characteristics in a user profile; the user model is the recipe for obtaining & interpreting user profiles

•  User Modeling = the process of representing the user

UM: Basic Concepts
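To make the profile/model distinction concrete, here is a minimal hypothetical sketch (not from the slides): the profile is plain data, while the model is the recipe that turns observations into that data.

```python
from collections import Counter

# Hypothetical sketch: the user PROFILE is plain data (interest -> weight),
# while the user MODEL is the recipe that turns observations into that data.

def apply_user_model(observations):
    """A trivial 'user model': interpret raw observations (e.g. tags the user
    clicked or posted) and translate them into a weighted user profile."""
    counts = Counter(obs.lower() for obs in observations)
    total = sum(counts.values())
    # The resulting profile characterizes the user at this particular moment.
    return {tag: round(n / total, 2) for tag, n in counts.items()}

observed_tags = ["Python", "recsys", "python", "travel", "recsys", "recsys"]
profile = apply_user_model(observed_tags)
print(profile)   # e.g. {'python': 0.33, 'recsys': 0.5, 'travel': 0.17}
```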


•  Overlay User Modeling: describe user characteristics, e.g. “knowledge of a user”, “interests of a user” with respect to “ideal” characteristics

•  Customizing: user explicitly provides & adjusts elements of the user profile

•  User model elicitation: ask & observe the user; learn & improve the user profile successively ("interactive user modeling")

•  Stereotyping: stereotypical characteristics to describe a user

•  User Relevance Modeling: learn/infer probabilities that a given item or concept is relevant for a user

Related scientific conference: http://umap2011.org/
Related journal: http://umuai.org/

User Modeling Approaches


http://farm7.staticflickr.com/6240/6346803873_e756dd9bae_b.jpg

Which approach best suits the conditions of the application?

•  among the oldest user models

•  used for modeling student knowledge

•  the user is typically characterized in terms of domain concepts & hypotheses of the user’s knowledge about these concepts in relation to an (ideal) expert’s knowledge

•  concept-value pairs

Overlay User Models
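As a minimal sketch of the concept-value idea with invented domain concepts (not from the lecture), the user's estimated knowledge can be stored per concept and compared against an "ideal" expert overlay:

```python
# Hypothetical overlay model: per domain concept, an estimate (0..1) of how much
# of the ideal expert knowledge the user has mastered.
expert_overlay = {"recursion": 1.0, "sorting": 1.0, "graphs": 1.0}
user_overlay   = {"recursion": 0.8, "sorting": 0.4, "graphs": 0.0}

# Concepts the system should address next: largest gap to the expert level.
gaps = sorted(expert_overlay,
              key=lambda c: expert_overlay[c] - user_overlay.get(c, 0.0),
              reverse=True)
print(gaps)  # ['graphs', 'sorting', 'recursion']
```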


•  Ask the user explicitly → learn
  •  NLP, intelligent dialogues
  •  Bayesian networks, Hidden Markov Models

•  Observe the user → learn
  •  Logs, machine learning
  •  Clustering, classification, data mining

•  Interactive user modeling: mixture of direct inputs of a user, observations and inferences

User Model Elicitation
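A rough, hypothetical illustration of such interactive user modeling: explicit statements and observed behaviour are blended, with made-up weights, into one successively updated profile.

```python
# Hypothetical sketch: blend explicit user input with observed behaviour.
# The mixing weight is an illustrative choice, not from the lecture.
EXPLICIT_WEIGHT = 0.7   # trust direct statements more than observations

def update_profile(profile, interest, evidence, source):
    w = EXPLICIT_WEIGHT if source == "explicit" else 1.0 - EXPLICIT_WEIGHT
    profile[interest] = profile.get(interest, 0.0) + w * evidence  # successive refinement
    return profile

profile = {}
update_profile(profile, "jazz", 1.0, source="explicit")   # the user says so
update_profile(profile, "jazz", 0.5, source="observed")   # clicks on jazz items
update_profile(profile, "metal", 0.5, source="observed")
print(profile)  # {'jazz': 0.85, 'metal': 0.15}
```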


http://hunch.com

http://farm1.staticflickr.com/155/413650229_31ef379b0b_b.jpg

•  set of characteristics (e.g. attribute-value pairs) that describe a group of users.

•  user is not assigned to a single stereotype - user profile can feature characteristics of several different stereotypes

User Stereotypes
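A toy sketch of stereotype-based modeling with invented stereotypes: the user is scored against several stereotypes and can inherit characteristics from more than one of them.

```python
# Hypothetical stereotypes: each is a set of characteristics describing a group.
stereotypes = {
    "tech_enthusiast": {"smartphone", "laptop", "programming"},
    "foodie":          {"restaurants", "cooking", "wine"},
}

user_traits = {"programming", "cooking", "laptop"}

# A user profile may feature characteristics of several stereotypes:
# score each stereotype by the fraction of its characteristics the user shows.
membership = {name: len(traits & user_traits) / len(traits)
              for name, traits in stereotypes.items()}
print(membership)  # roughly {'tech_enthusiast': 0.67, 'foodie': 0.33}
```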

based on slides from Fabian Abel

Can we infer a Twitter-based User Profile?

User Modeling Building Blocks

based on slides from Fabian Abel

Observations
•  Profile characteristics:
  •  Semantic enrichment solves sparsity problems
  •  Profiles change over time: recent profiles better reflect current user demands
  •  Temporal patterns: weekend profiles differ significantly from weekday profiles

•  Impact on recommendations:
  •  The more fine-grained the concepts, the better the recommendation performance: entity-based > topic-based > hashtag-based
  •  Semantic enrichment improves recommendation quality
  •  Time-sensitivity (adapting to trends) improves performance
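As a rough illustration of the hashtag-based, time-sensitive profiles discussed above (the tweets and the decay constant are made up), a profile could weight each hashtag by recency:

```python
from datetime import datetime, timezone

# Hypothetical tweets: (timestamp, hashtags). A hashtag-based profile weights
# each tag by recency, so it reflects current rather than old user demands.
tweets = [
    (datetime(2015, 2, 1, tzinfo=timezone.utc), ["python", "recsys"]),
    (datetime(2015, 2, 20, tzinfo=timezone.utc), ["recsys"]),
    (datetime(2015, 3, 1, tzinfo=timezone.utc), ["worldcup"]),
]
now = datetime(2015, 3, 2, tzinfo=timezone.utc)
HALF_LIFE_DAYS = 14.0   # illustrative choice

profile = {}
for created_at, hashtags in tweets:
    age_days = (now - created_at).total_seconds() / 86400.0
    weight = 0.5 ** (age_days / HALF_LIFE_DAYS)   # exponential time decay
    for tag in hashtags:
        profile[tag] = profile.get(tag, 0.0) + weight

print(sorted(profile.items(), key=lambda kv: -kv[1]))  # most recent interests first
```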


User Modeling: it is not about putting everything in a user profile, it is about making the right choices


User Adaptation

Knowing the user to adapt a system or interface to improve the system functionality and user experience


A. Jameson. Adaptive interfaces and agents. The HCI handbook: fundamentals, evolving technologies and emerging applications, pp. 305–330, 2003.

User-Adaptive Systems


based on slides from Fabian Abel

Last.fm adapts to your music taste

•  Overfitting, "bubble effects", loss of serendipity problem:
  •  systems may adapt too strongly to the user's interests/behavior
  •  e.g., an adaptive radio station may always play the same or very similar songs
  •  we search for the right balance between novelty and relevance for the user

•  "Lost in Hyperspace" problem:
  •  when adapting the navigation, i.e. the links on which users can click to find/access information
  •  e.g., re-ordering/hiding of menu items may lead to confusion

Issues in User-Adaptive Systems


What is good user modelling & personalisation?

http://www.flickr.com/photos/bellarosebyliz/4729613108

From the consumer perspective of an adaptive system:

From the provider perspective of an adaptive system:

Success Perspectives


•  User studies: ask/observe (selected) people whether you did a good job

•  Log analysis: analyze (click) data and infer whether you did a good job

•  Evaluation of user modeling:

•  measure quality of profiles directly, e.g. measure overlap with existing (true) profiles, or let people judge the quality of the generated user profiles

•  measure quality of application that exploits the user profile, e.g., apply user modeling strategies in a recommender system

Evaluation Strategies
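For measuring profile quality directly (the first of the two options above), a simple overlap measure such as Jaccard similarity between the inferred profile and a known "true" profile could look like this; the profiles are hypothetical:

```python
def jaccard(inferred, true):
    """Overlap between an inferred profile and a known 'true' profile,
    both given as sets of interest concepts (illustrative measure)."""
    if not inferred and not true:
        return 1.0
    return len(inferred & true) / len(inferred | true)

inferred = {"jazz", "python", "travel"}
true     = {"jazz", "python", "photography"}
print(jaccard(inferred, true))   # 2 shared of 4 distinct concepts -> 0.5
```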


Evaluating User Modeling in RecSys


Possible Metrics
•  The usual IR metrics:
  •  Precision: fraction of retrieved items that are relevant
  •  Recall: fraction of relevant items that have been retrieved
  •  F-Measure: (harmonic) mean of precision and recall

•  Metrics for evaluating recommendations (rankings):
  •  Mean Reciprocal Rank (MRR) of the first relevant item
  •  Success@k: probability that a relevant item occurs within the top k
  •  If a true ranking is given: rank correlations
  •  Precision@k, Recall@k & F-Measure@k

•  Metrics for evaluating prediction of user preferences:
  •  MAE = Mean Absolute Error
  •  True/False Positives/Negatives
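A minimal sketch of a few of these metrics on a toy ranked recommendation list (the data is invented):

```python
def precision_at_k(ranking, relevant, k):
    # fraction of the top-k recommended items that are relevant
    return sum(1 for item in ranking[:k] if item in relevant) / k

def recall_at_k(ranking, relevant, k):
    # fraction of all relevant items that show up in the top k
    return sum(1 for item in ranking[:k] if item in relevant) / len(relevant)

def reciprocal_rank(ranking, relevant):
    # 1/rank of the first relevant item (averaged over users this gives MRR)
    for rank, item in enumerate(ranking, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

ranking  = ["a", "b", "c", "d", "e"]   # recommended items, best first
relevant = {"b", "e", "f"}             # ground truth for this user
print(precision_at_k(ranking, relevant, 5))  # 2/5 = 0.4
print(recall_at_k(ranking, relevant, 5))     # 2/3 ~ 0.67
print(reciprocal_rank(ranking, relevant))    # first hit at rank 2 -> 0.5
```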


•  [Rae et al.] a typical example of how to investigate and evaluate a proposal for improving (tag) recommendations (using social networks)

•  Task: test how well the different strategies (different tag contexts) can be used for tag prediction/recommendation

•  Steps:

1. Gather a dataset of tag data, part of which can be used as input, with the aim to test the recommendations on the remaining tag data

2. Use the input data and calculate the predictions for the different strategies

3. Measure the performance using standard (IR) metrics: Precision of the top 5 recommended tags (P@5), Mean Reciprocal Rank (MRR), Mean Average Precision (MAP)

4. Test the results for statistical significance using a t-test, relative to the baseline (e.g. an existing or competitive approach)

[Rae et al. Improving Tag Recommendations Using Social Networks, RIAO'10]

Example Evaluation
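Step 4 of such an evaluation, testing a strategy against the baseline, could be sketched with scipy's paired t-test; the per-user P@5 scores below are invented for illustration:

```python
from scipy import stats

# Hypothetical per-user P@5 scores for a baseline and for the strategy under test
# (in a real evaluation these come from step 3, computed per test user).
baseline_p5 = [0.2, 0.4, 0.2, 0.6, 0.4, 0.2, 0.4, 0.2]
strategy_p5 = [0.4, 0.4, 0.6, 0.6, 0.6, 0.4, 0.4, 0.4]

t_stat, p_value = stats.ttest_rel(strategy_p5, baseline_p5)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the improvement over the baseline is unlikely to be chance.
```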


•  [Guy et al.] another example of a similar evaluation approach

•  The different strategies differ in the way people & tags are used: with tag-based systems, there are complex relationships between users, tags and items, and strategies aim to find the relevant aspects of these relationships for modeling and recommendation

•  The baseline is the 'most popular' tags: the tags predicted by a particular personalization strategy are compared against the most popular tags, investigating whether the personalization is worth the effort and is able to outperform this easily available baseline. [Guy et al. Social Media Recommendation based on People and Tags, SIGIR'10]

Example Evaluation


Predict relevant/useful/interesting items for a given user (in a given context): it is often a ranking task!

Recommendation Systems

http://www.wired.com/magazine/2011/11/mf_artsy/all/1

Collaborative Filtering
•  Memory-based: User-Item matrix: ratings/preferences of users => compute similarity between users & recommend items of similar users

•  Model-based: Item-Item matrix: similarity (e.g. based on user ratings) between items => recommend items that are similar to the ones the user likes

•  Model-based: Clustering: cluster users according to their preferences => recommend items of users that belong to the same cluster

•  Model-based: Bayesian networks: P(u likes item B | u likes item A) = how likely is it that a user who likes item A will also like item B => learn probabilities from user ratings/preferences

•  Others: rule-based, other data mining techniques
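A compact sketch of the memory-based variant with a made-up user-item rating matrix: compute user-user cosine similarity, then score unseen items by the similarity-weighted ratings of the other users.

```python
import numpy as np

# Hypothetical user-item rating matrix (rows = users, columns = items, 0 = unrated).
R = np.array([
    [5, 4, 0, 0],
    [4, 5, 0, 5],
    [1, 0, 5, 4],
], dtype=float)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, R, top_n=2):
    # Memory-based: use the full matrix to find users similar to `user` ...
    sims = np.array([cosine(R[user], R[v]) if v != user else 0.0
                     for v in range(R.shape[0])])
    # ... and score items by the similarity-weighted ratings of those users.
    scores = sims @ R
    scores[R[user] > 0] = -np.inf          # do not re-recommend rated items
    return np.argsort(-scores)[:top_n]

print(recommend(0, R))   # items predicted to interest user 0, best first
```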


Memory-based:
•  complete input data is required
•  pre-computation not possible
•  does not scale well
•  high quality of recommendations

Model-based:
•  abstraction (model) of input data
•  pre-computation (partially) possible (model has to be re-built from time to time)
•  scales better
•  abstraction may reduce recommendation quality

Memory vs. Model-based


•  collaborative filtering: ‘neighborhoods’ of people with similar interest & recommending items based on likings in neighborhood

•  limitations: next to 'cold start' and 'sparsity', the lack of control (over one's neighborhood) is also a problem, i.e. one cannot add 'trusted' people, nor exclude 'strange' ones

•  therefore, interest in 'social recommenders', where the presence of social connections defines the similarity in interests (e.g. the social tagging site CiteULike):
  •  does a social connection indicate user interest similarity?
  •  how much does users' interest similarity depend on the strength of their connection?
  •  is it feasible to use a social network as a source of personalized recommendations?

[Lin & Brusilovsky, Social Networks and Interest Similarity: The Case of CiteULike, HT’10]

Social Networks & Interest Similarity

•  unilaterally connected pairs have more common items/metadata/tags than non-connected pairs

•  highest similarity for direct connections, decreasing as the distance between users in the SN increases

•  users in a reciprocal relationship show significantly larger similarity than users in a unidirectional relationship

•  traditional item-level similarity may be less reliable to find similar users in social bookmarking systems

•  peers connected by self-defined social connections could be a useful source for cross-recommendation!

Conclusions


•  Input: characteristics of items & the interests of a user expressed in terms of item characteristics => recommend items that feature characteristics which meet the user's interests

•  Techniques:
  •  Data mining methods: cluster items based on their characteristics => infer users' interests in the clusters
  •  IR methods: represent items & users as term vectors => compute similarity between the user profile vector and the item vectors (see the sketch below)
  •  Utility-based methods: a utility function that takes an item as input; the parameters of the utility function are customized via the preferences of a user

Content-based Recommendations
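The IR-style technique referenced above can be illustrated with scikit-learn's TF-IDF vectorizer on made-up item descriptions; the items and the liked item are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical item descriptions (the "content features").
items = {
    "item1": "jazz concert live saxophone improvisation",
    "item2": "rock festival guitars camping",
    "item3": "late night jazz club saxophone",
    "item4": "street food festival wine tasting",
}
names = list(items)
liked = "item1"   # user profile: here simply the vector of one liked item
                  # (with several liked items one would average their vectors)

vec = TfidfVectorizer()
X = vec.fit_transform(items.values())     # item term vectors
profile = X[names.index(liked)]           # user profile as a term vector

scores = cosine_similarity(profile, X)[0] # similarity of profile vs. all items
ranked = sorted((n for n in names if n != liked),
                key=lambda n: -scores[names.index(n)])
print(ranked)   # item3 (also jazz / saxophone) should come first
```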


Content Features

User Model

Recommendations

based on slides from Fabian Abel

RecSys Issues

•  Cold-start problem (new user problem): no/little data available to infer the preferences of new users

•  Changing User Preferences: user interests may change over time

•  Sparsity problem (new item problem): item descriptions are sparse, e.g. not many users rated or tagged an item

•  Lack of Diversity (overfitting): when adapting too strongly to the preferences of users, they might keep seeing the same/similar recommendations

•  Use the right context: users do things which might not be relevant for their user model, e.g. try out things, do stuff for other people

•  Research challenge: the right balance between serendipity & personalization

•  Research challenge: the right way to use the influence of recommendations on users' behavior


image source: http://www.flickr.com/photos/bionicteaching/1375254387/

Hands-on Teaser

•  Your Facebook Friends' popularity in a spreadsheet
•  Locations of your Facebook Friends
•  Tag Cloud of your wall posts

