Interactive Recommender Systems: Bridging the gap between predictive algorithms and interactive user interfaces
Denis Parra, Ph.D. in Information Sciences
Assistant Professor, CS Department, School of Engineering, Pontificia Universidad Católica de Chile
UFMG, March 29th 2017
Outline
• Brief Personal Introduction
• Computer Science at PUC Chile
• Projects at SocVis Lab
• Overview of Recommender Systems
• Interactive Recommender Systems
• Summary, Current & Future Work
March 29th, 2017 D.Parra ~ UFMG – Invited Talk 2
1-slide Geography Class: Chile
• One third of the 16 million Chileans live in Santiago, the capital
• But Chile is a looong country (4,000 km): the north is hot and dry, the south (Patagonia) is very cold.
[Map labels: Very Hot! (north), Very Cold! (south), Valdivia (my hometown), Santiago (PUC Chile)]
Personal Introduction 1/3
• B.Eng. and Engineering in Informatics from Universidad Austral de Chile (2004), Valdivia, Chile
• Ph.D. in Information Sciences at University of Pittsburgh (2008-2013), Pittsburgh, PA, USA
Personal Introduction 2/3
• In 2009 I did an internship at Trinity College Dublin, with researcher Alexander Troussov (IBM)
• In 2010 I did another internship at Telefonica I+D, Barcelona, with Xavier Amatriain (now VP at Quora)
Personal Introduction 3/3
• 2013: Moved back to Santiago, Chile
• Department of CS, School of Engineering, PUC
DCC, Engineering, PUC Chile
• DCC: Departamento de Ciencia de la Computación
• Programs: BEng, Engineering title, Master's, PhD
• Research Areas:
– Databases and Semantic Web
– Information Technologies
– Machine Learning and Computer Vision (GRIMA)
– Software Engineering
– Educational Technologies, MOOCs
http://dcc.ing.puc.cl
Academic activities (2017)
• Research topics: Recommender Systems/Personalization, Visualization, SNA.
• Teaching: Data Mining, Recommender Systems, Information Visualization, SNA.
• Leading the Social Computing and Visualization (SocVis) Lab.
SocVis Lab
http://www.socvis.cl
People, Publications, News (ND)
Projects at SocVis
• Mood-based music artist recommendation
– Collaboration with J. O'Donovan (UCSB)
– Student: Raimundo Herrera
• IR for evidence-based Medicine
– Help doctors answer clinical questions
– Student: I. Donoso; collaboration with Epistemonikos
• Artwork Recommendation
– Collaboration with the online artwork store UGallery
– Students: P. Messina & V. Dominguez
Recommender Systems Class
• Recommender Systems at PUC Chile
http://web.ing.puc.cl/~dparra/classes/recsys-2016-2/
INTRODUCTION TO RECSYS
Recommender Systems
* Danboard (Danbo): Amazon's cardboard robot; in these slides it represents a recommender system
Recommender Systems (RecSys)
Systems that help (groups of) people find relevant items in a crowded item or information space (McNee et al., 2006)
Why do we care about RecSys?
• RecSys have gained popularity due to several domains & applications that require people to make decisions among a large set of items.
A lil' bit of History
• The first recommender systems were built in the early '90s (Tapestry, GroupLens, Ringo)
• Online contests, such as the Netflix Prize (2006-2009), drew attention to recommender systems beyond Computer Science
The Recommendation Problem
• The most popular way of presenting the recommendation problem is rating prediction:
• How good is my prediction?
         Item 1   Item 2   …   Item m
User 1     1        5      …     4
User 2     5        1      …     ?
…
User n     2        5      …     ?
Predict the missing "?" entries!
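One common answer to "how good is my prediction?" is an error metric such as RMSE over held-out ratings. A minimal sketch with made-up numbers (not from the slides):

```python
import math

def rmse(predicted, actual):
    """Root Mean Squared Error between predicted and held-out true ratings."""
    errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical predictions vs. the true ratings they try to recover
print(round(rmse([4.2, 1.5, 3.8], [5, 1, 4]), 3))  # → 0.557
```

Lower is better; the Netflix Prize mentioned below was won by reducing exactly this metric.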
Recommendation Methods
• Without covering all possible methods, the two most typical classifications of recommender algorithms are:

Classification 1:
- Collaborative Filtering
- Content-based Filtering
- Hybrid

Classification 2:
- Memory-based
- Model-based
Collaborative Filtering (User-based KNN)
• Step 1: Finding Similar Users (Pearson Corr.)
[Figure: toy rating matrix comparing the active user with User_1, User_2, and User_3]
Collaborative Filtering (User-based KNN)
• Step 1: Finding Similar Users (Pearson Corr.)
[Figure: the same toy rating matrix]

$$\mathrm{sim}(u,n)=\frac{\sum_{i\in CR_{un}}(r_{u,i}-\bar{r}_u)(r_{n,i}-\bar{r}_n)}{\sqrt{\sum_{i\in CR_{un}}(r_{u,i}-\bar{r}_u)^2}\sqrt{\sum_{i\in CR_{un}}(r_{n,i}-\bar{r}_n)^2}}$$

Similarity to the active user:
user_1: 0.4472136
user_2: 0.49236596
user_3: -0.91520863
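Step 1 is the standard Pearson correlation over co-rated items. A sketch in Python with toy ratings (not the matrix from the slide; here the means are taken over co-rated items, one common variant):

```python
def pearson_sim(ratings_u, ratings_n):
    """Pearson correlation between two users over their co-rated items."""
    common = set(ratings_u) & set(ratings_n)
    if not common:
        return 0.0
    mean_u = sum(ratings_u[i] for i in common) / len(common)
    mean_n = sum(ratings_n[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mean_u) * (ratings_n[i] - mean_n) for i in common)
    den = (sum((ratings_u[i] - mean_u) ** 2 for i in common)
           * sum((ratings_n[i] - mean_n) ** 2 for i in common)) ** 0.5
    return num / den if den else 0.0

active = {"item1": 5, "item2": 3, "item3": 4}
user_1 = {"item1": 4, "item2": 2, "item3": 5}
print(round(pearson_sim(active, user_1), 3))  # → 0.655
```

Values near +1 mean similar taste; negative values (like user_3 in the slide) mean opposite taste.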
Collaborative Filtering (User-based KNN)
• Step 2: Ranking the items to recommend
[Figure: toy ratings of the active user, User_1, and User_2 over candidate items Item 1 and Item 2]
Collaborative Filtering (User-based KNN)
• Step 2: Ranking the items to recommend
[Figure: the same toy ratings, with predicted scores for Item 1–Item 3]

$$\mathrm{pred}(u,i)=\bar{r}_u+\frac{\sum_{n\in \mathrm{neighbors}(u)}\mathrm{userSim}(u,n)\,(r_{n,i}-\bar{r}_n)}{\sum_{n\in \mathrm{neighbors}(u)}\mathrm{userSim}(u,n)}$$
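Step 2's prediction, the active user's mean plus a similarity-weighted average of the neighbors' mean-centered ratings, can be sketched as follows (toy values, not the slide's):

```python
def predict(active_ratings, neighbors, item):
    """pred(u, i) = mean(u) + sum(sim * (r_ni - mean(n))) / sum(sim) over neighbors."""
    mean_u = sum(active_ratings.values()) / len(active_ratings)
    num = den = 0.0
    for ratings_n, sim in neighbors:
        if item not in ratings_n:
            continue  # neighbor has no opinion on this item
        mean_n = sum(ratings_n.values()) / len(ratings_n)
        num += sim * (ratings_n[item] - mean_n)
        den += sim
    return mean_u + num / den if den else mean_u

active = {"item1": 5, "item2": 3}
neighbors = [({"item1": 4, "item2": 2, "item3": 5}, 0.8),
             ({"item1": 5, "item2": 1, "item3": 4}, 0.5)]
print(round(predict(active, neighbors, "item3"), 2))  # → 5.08
```

Mean-centering compensates for users who rate systematically high or low; candidate items are then ranked by their predicted score.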
Pros/Cons of CF
PROS:
• Very simple to implement
• Content-agnostic
• More accurate than other techniques such as content-based filtering. There is also Item-based KNN.
CONS:
• Sparsity
• Cold-start
• New items
Content-Based Filtering
• Can be traced back to techniques from IR, where the user profile represents a query.

user_profile = {w_1, w_2, …, w_n}, using TF-IDF weighting
Doc_1 = {w_1, w_2, …, w_n}
Doc_2 = {w_1, w_2, …, w_n}
Doc_3 = {w_1, w_2, …, w_n}
Doc_n = {w_1, w_2, …, w_n}
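The IR analogy can be sketched with plain TF-IDF and cosine similarity: the user profile is a term-weight vector, and documents are ranked by their similarity to it (a toy corpus, purely illustrative):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF term-weight vector for each tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = (math.sqrt(sum(w * w for w in a.values()))
            * math.sqrt(sum(w * w for w in b.values())))
    return dot / norm if norm else 0.0

docs = [["music", "mood", "play"], ["music", "jazz"], ["sports", "news"]]
vecs = tfidf_vectors(docs)
profile = vecs[0]  # pretend the user liked the first document
scores = [cosine(profile, v) for v in vecs[1:]]
# The music document scores higher against this profile than the sports one
```

In practice the profile aggregates many liked documents, but the matching step is the same.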
PROS/CONS of Content-Based Filtering
PROS:
• New items can be matched without previous feedback
• It can also exploit techniques such as LSA or LDA
• It can use semantic data (ConceptNet, WordNet, etc.)
CONS:
• Less accurate than collaborative filtering
• Tends toward overspecialization
Hybridization
• Combine the previous methods to overcome their weaknesses (Burke, 2002)
C2. Model/Memory Classification
• Memory-based methods use the whole dataset for training and prediction. User- and Item-based CF are examples.
• Model-based methods build a model during training and use only that model during prediction. This makes prediction much faster and more scalable.
Model-based: Matrix Factorization
[Figure: rating matrix factorized into a latent vector of the user and a latent vector of the item]
SVD: Singular Value Decomposition
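Matrix factorization learns a latent vector p_u per user and q_i per item so that their dot product approximates the observed rating. A minimal stochastic-gradient-descent sketch (toy data and hyperparameters; real systems use tuned libraries, not this):

```python
import random

def factorize(ratings, k=2, epochs=5000, lr=0.02, reg=0.02):
    """Learn latent vectors P[u], Q[i] with P[u]·Q[i] ≈ r_ui via SGD."""
    rng = random.Random(0)
    P = {u: [rng.uniform(0, 0.1) for _ in range(k)] for u, _, _ in ratings}
    Q = {i: [rng.uniform(0, 0.1) for _ in range(k)] for _, i, _ in ratings}
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(p * q for p, q in zip(P[u], Q[i]))
            for f in range(k):
                p, q = P[u][f], Q[i][f]
                P[u][f] += lr * (err * q - reg * p)  # gradient step on user factor
                Q[i][f] += lr * (err * p - reg * q)  # gradient step on item factor
    return P, Q

ratings = [("u1", "i1", 5), ("u1", "i2", 1), ("u2", "i1", 4),
           ("u2", "i2", 1), ("u3", "i1", 1), ("u3", "i2", 5)]
P, Q = factorize(ratings)
pred = sum(p * q for p, q in zip(P["u1"], Q["i1"]))  # ≈ 5
```

The regularization term (reg) keeps the factors small so the model does not just memorize the training ratings.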
PROS/CONS of MF and latent factor models
PROS:
• So far, state-of-the-art in terms of accuracy (these methods won the Netflix Prize)
• Performance-wise, the best option nowadays: slow at training time, O((m+n)^3), compared to correlation-based methods, O(m^2 n), but linear at prediction time, O(m+n)
CONS:
• Recommendations are opaque: how do you explain that certain "latent factors" produced a recommendation?
Other paradigms and techniques
• Recommendation as a graph problem:
– Model the problem as diffusion or link prediction
– Personalized PageRank (Kamvar et al., 2010; Santos et al., 2016)
• Recommendation as a ranking problem:
– Rather than predicting ratings, predict a Top-N list
– Learning-to-rank approaches developed in the IR community
– Karatzoglou et al. (2013), Shi et al. (2014), Macedo et al. (2015)
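The graph paradigm above can be illustrated with personalized PageRank (a random walk that restarts at the target user) on a user-item bipartite graph: items the walker visits often, but that the user has not interacted with, become recommendations. A toy sketch on a hypothetical graph (not from the cited papers):

```python
def personalized_pagerank(graph, source, alpha=0.85, iters=50):
    """Power-iteration PageRank with all teleport mass returning to `source`."""
    rank = {n: 0.0 for n in graph}
    rank[source] = 1.0
    for _ in range(iters):
        nxt = {n: 0.0 for n in graph}
        for n, score in rank.items():
            out = graph[n]
            if out:
                for m in out:
                    nxt[m] += alpha * score / len(out)
            else:
                nxt[source] += alpha * score  # dangling node: send mass home
        nxt[source] += 1 - alpha  # restart probability
        rank = nxt
    return rank

# Bipartite user-item graph: u1 liked i1 and i2, u2 liked i1 and i3
graph = {"u1": ["i1", "i2"], "u2": ["i1", "i3"],
         "i1": ["u1", "u2"], "i2": ["u1"], "i3": ["u2"]}
scores = personalized_pagerank(graph, "u1")
# i3 gets a nonzero score through the shared item i1, so it can be recommended to u1
```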
(Important) RecSys Topics Not Covered in this Presentation
• Learning to rank
• Graph-based methods
• Context-aware recommenders
• The recommendation problem as next-item prediction in a sequence
• User-centric evaluation frameworks
• Multi-armed Bandits
• Reinforcement Learning
• … You need to take Professor Santos' course :)
Rethinking the Recommendation Problem
• User feedback is scarce: we need to exploit different sources of user preference and context
Rethinking the Recommendation Problem
• Ratings are scarce: we need to exploit other sources of user preference
• User-centric recommendation takes the problem beyond ratings and ranked lists: evaluate user engagement and satisfaction, not only RMSE/MAP
• Several other dimensions to consider in the evaluation: novelty of the results, diversity, coverage (user and catalog), trust
• Study the effect of interface characteristics: controllability, transparency, explainability
My Take on RecSys Research (2009 ~)
My Work on RecSys
• In my research I have contributed to RecSys by:
– Utilizing other sources of user preference (social tags)
– Exploiting implicit feedback for recommendation and for mapping explicit feedback
– Studying interactive interfaces: the effect of visualizations and user interaction on user satisfaction, perception of trust, and accuracy
• Nowadays: focus on interactive exploratory interfaces for recommender systems
This is not only my work :)
• Dr. Peter Brusilovsky
University of Pittsburgh, PA, USA
• Dr. Alexander Troussov
IBM Dublin and TCD, Ireland
• Dr. Xavier Amatriain
TID / Netflix / Quora
• Dr. Christoph Trattner
NTNU, Norway
• Dr. Katrien Verbert
KU Leuven, Belgium
• Dr. Leandro Balby-Marinho
UFCG, Brasil
VISUALIZATION + USER CONTROLLABILITY
Part of this work with Katrien Verbert
Human Factors in RecSys
• Transparency and Explainability: Konstan et al. (2000), Tintarev and Masthoff (2010)
• Frameworks to evaluate RecSys user studies: ResQue (Pu et al., 2010), Knijnenburg et al. (2012)
• Controllability and Inspectability: O'Donovan (2008), Knijnenburg et al. (2010, 2012), Hijikata (2012), Ekstrand et al. (2015)
• Visualization and Interfaces: O'Donovan (2008 onwards), Verbert et al. (2013), Parra et al. (2014), Loepp et al. (2014, 2017)
Visualization & User Controllability
• Motivation: Can user controllability and explainability improve user engagement and satisfaction with a recommender system?
• Specific research question: How can intersections of contexts of relevance (from recommendation algorithms) be better represented to improve the user experience with the recommender?
Traditional RecSys Interface
MovieLens: example of a traditional recommendation list
Explanations and Control
GoodReads: Book recommender system
[Screenshot labels: options for user control; explainability; recommendations of books]
PeerChooser (2008): Controllability in CF
O’Donovan et al. “PeerChooser: Visual Interactive Recommendation” (2008)
SmallWorlds: Expanded Explainability
Gretarsson et al. “SmallWorlds: Visualizing social recommendations” (2010)
TasteWeights: Hybrid Control and Inspect
Bostandjev et al. “TasteWeights: A Visual Interactive Hybrid Recommender System” (2012)
Controllability: Sliders that let users control the importance of preferences and contexts
Inspectability: lines that connect recommended items with contexts and user preferences
IUI 2017
• Loepp et al. (2017)
More Details? Check our survey
He, C., Parra, D., & Verbert, K. (2016). Interactive recommender systems: a survey of the state of the art and future research challenges and opportunities. Expert Systems with Applications, 56, 9-27.
Visualization & User Controllability
• Motivation: Can user controllability and explainability improve user engagement and satisfaction with a recommender system?
• Specific research question: How can overlapping contexts of relevance (from recommendation algorithms) be better represented to improve the user experience with the recommender?
• Our scenario: conference articles
Research Platform
• The studies were conducted using Conference Navigator, a Conference Support System
• Our goal was recommending conference talks
[Screenshot: CN menu with Program, Proceedings, Author List, and Recommendations]
http://halley.exp.sis.pitt.edu/cn3/
TalkExplorer – IUI 2013
• Adaptation of the Aduna visualization to CN
• Main research question: Does fusion (intersection) of contexts of relevance improve user experience?
TalkExplorer - I
Entities: Tags, Recommender Agents, Users
TalkExplorer - II
[Screenshot labels: recommender agents; a cluster with an intersection of entities; a cluster (of talks) associated with only one entity; a user]
• Canvas Area: intersections of different entities
TalkExplorer - III
Items: Talks explored by the user
Our Assumptions
• Items that are relevant in more than one aspect could be more valuable to users
• Displaying multiple aspects of relevance visually is important for users during item exploration
TalkExplorer Studies I & II
• Study I
– Controlled experiment: users were asked to discover relevant talks by exploring the three types of entities: tags, recommender agents, and users.
– Conducted at Hypertext and UMAP 2012 (21 users)
– Subjects familiar with visualizations and RecSys
• Study II
– Field study: users were left free to explore the interface.
– Conducted at LAK 2012 and ECTEL 2013 (18 users)
– Subjects familiar with visualizations, but not much with RecSys
Evaluation: Intersections & Effectiveness
• What do we call an "intersection"?
• We used the number of explorations of intersections and their effectiveness, defined as:
Effectiveness = [definition shown as a figure in the original slide]
Results of Studies I & II
• Effectiveness increases with intersections of more entities
• Effectiveness wasn’t affected in the field study (study 2)
• … but exploration distribution was affected
More Details About TalkExplorer• Verbert, K., Parra, D., Brusilovsky, P., & Duval, E. (2013).
Visualizing recommendations to support exploration, transparency and controllability. In Proceedings of the 2013 international conference on Intelligent user interfaces (pp. 351-362). ACM.
• Verbert, K., Parra, D., & Brusilovsky, P. (2016). Agents Vs. Users: Visual Recommendation of Research Talks with Multiple Dimension of Relevance. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(2), 11.
SETFUSION: VENN DIAGRAM FOR USER-CONTROLLABLE INTERFACE
SetFusion – IUI 2014
SetFusion I
Traditional Ranked List
Papers sorted by relevance; it combines 3 recommendation approaches.
SetFusion - II
Sliders: allow the user to control the importance of each data source or recommendation method
Interactive Venn Diagram: allows the user to inspect and filter the recommended papers. Actions available:
- Filter the item list by clicking on an area
- Highlight a paper by mousing over a circle
- Scroll to a paper by clicking on a circle
- Indicate bookmarked papers
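The slider mechanic amounts to a weighted fusion of the per-method relevance scores, with the slider positions as weights. A sketch with hypothetical scores (not SetFusion's actual code):

```python
def fuse(method_scores, weights):
    """Rank items by the slider-weighted sum of each method's relevance scores."""
    fused = {}
    for scores, w in zip(method_scores, weights):
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical scores from three recommendation methods (the three Venn sets)
content = {"p1": 0.9, "p2": 0.4}
collab = {"p2": 0.8, "p3": 0.6}
tags = {"p1": 0.2, "p3": 0.7}
print(fuse([content, collab, tags], [0.5, 1.0, 0.2]))  # → ['p2', 'p3', 'p1']
```

Moving a slider changes one weight and instantly reorders the list, which is what makes the control feel direct.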
Study: iConference
• A laboratory within-subjects study with 40 subjects.
• In the preference elicitation phase, there was no limit on the number of papers; under the RecSys interfaces, the minimum was 15.
• In bookmarking, subjects could pick items relevant to a) themselves, b) themselves and others, and c) only others.
Compensation: $12/hour; average session: 1.5 hours
Study: Population and General Stats
                          Non-controllable   Controllable
# Total bookmarks         638                625
# Average bookmarks/user  15.95              15.63
# Average rating          2.48±0.089         2.46±0.076

Gender: Female 17, Male 23
Age: 31.75±6.5
Native speaker: Yes 10, No 30
Subject occupation: Information Sc. (16), Library Sc. (9), Comp. Sc. (6), Telecomm (3), (+6)

PCA on the 15 pre-questionnaire questions yielded 4 factors (user characteristics):
• Expertise in domain
• Engaged with iSchools
• Trusting propensity
• Experience w/ RecSys
Dropped: Experience w/ CN
Study 2: Results (1)

User Engagement (significant): talks explored, clicks (number of actions), and time spent on task were all significantly higher in the controllable interface.

User Experience (significant): MAP was significantly higher in the controllable interface.

User Characteristics (significant):
- Trusting propensity: increases use of the Venn diagram and MAP
- Native speaker: decreases time spent on task
- Gender: being male increases use of the sliders
- Age: each additional year decreases use of the sliders

Trusting propensity confirms results of previous studies.
Rating per method – Effect of Visuals
Gender Differences on SetFusion?
Study: Results (2)

Post-session surveys                                      Controllable    Non-controllable
Understandability                                         4.05±0.09***    2.95±0.16
Satisfaction with interface                               4.28±0.09***    3.4±0.16
Confidence of not missing relevant talks                  3.9±0.11***     3.13±0.15
Intention: I would use it again                           4.23±0.09***    3.45±0.15
Intention: I would recommend the system to colleagues     4.28±0.09***    3.48±0.16
The Venn diagram was useful to identify talks recommended
by a specific method or a combination of methods          4.35±0.11       --
The Venn diagram supported explainability                 4.08±0.13       --
Satisfaction due to ability to control                    4.05±0.12       --
Perception of control with sliders                        4.03±0.13       --
Study: Results (3)

Which interface did you prefer?
  Non-controllable: 0, Controllable: 36, Both: 4, None: 0
Which interface would you suggest to implement permanently in CN?
  Non-controllable: 1, Controllable: 33, None: 1, Both: 5
“I like the Venn diagram especially because most papers I was interested in fell in the same intersections, so it was pretty easy to find and bookmark”
“I thought the controllable one adds unnecessary complication if the list is not very long”
“I prefer the sliders (over Venn diagram) because I have used a system before to control search results with a similar widget, so it was more familiar to me.”
Study Takeaways
• User Engagement: the controllable interface drives significantly more user engagement (objective and subjective metrics)
• User Experience: the controllable interface improves user experience by letting the user interactively control the ranking (MAP) and by improving explainability.
• User characteristics: trusting propensity positively affects engagement and experience, while engagement with iSchools shows the opposite. Males tend to prefer the sliders over the Venn diagram for control and filtering.
More Details on SetFusion?
• Effect of other variables: gender, age, experience in the domain, familiarity with the system
• Check our paper in the IJHCS “User-controllable Personalization: A Case Study with SetFusion”: Controlled Laboratory study with SetFusion versus traditional ranked list
Study 2 – UMAP 2013
• Field Study: let users freely explore the interface
- ~50% (50 users) tried the SetFusion recommender
- 28% (14 users) bookmarked at least one paper
- Users explored 14.9 talks and bookmarked 7.36 talks on average.
Distribution of bookmarks per method or combination of methods:
A: 15 (16%), AB: 7 (7%), ABC: 9 (9%), AC: 26 (27%), B: 18 (19%), BC: 4 (4%), C: 17 (18%)
Hybrid RecSys: Visualizing Intersections
Clustermap Venn diagram
• Clustermap vs. Venn Diagram
TalkExplorer vs. SetFusion
• Comparing distributions of explorations
In studies 1 and 2 over TalkExplorer we observed an important change in the distribution of explorations.
TalkExplorer vs. SetFusion
• Comparing distributions of explorations
Comparing the field studies:
- In TalkExplorer, 84% of the explorations of intersections were performed on clusters of 1 item
- In SetFusion, it was only 52%, versus 48% (18% + 30%) on multiple intersections; the difference was not statistically significant
Summary & Conclusions
• We showed that intersections of several contexts of relevance help users discover relevant items
• The visual paradigm used can have a strong effect on user behavior: we need to keep working on visual representations that promote exploration without increasing users' cognitive load
Limitations & Future Work
• Apply our approach to other domains (fusion of data sources or recommendation algorithms)
• For SetFusion, find alternatives that scale the approach to more than 3 sets; potential alternatives:
– Clustering
– Radial sets
• Consider other factors that interact with user satisfaction:
– Controllability by itself vs. a minimum level of accuracy
Current Work on Interfaces
• MoodPlay– With Ivana Andjelkovic & John O’Donovan (UCSB)
Andjelkovic, I., Parra, D., & O'Donovan, J. (2016, July). Moodplay: Interactive Mood-based Music Discovery and Recommendation. In Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization (pp. 275-279). ACM.
MoodPlay
• https://www.youtube.com/watch?v=eEdo32oOmcE
Emotion Models
• Russell's model of emotions (1980)
• GEMS (2008)
Moods and Music: the GEMS model
System Architecture
Hybrid Recommendation Approach
User Study
• Conducted on Mechanical Turk, 4 conditions
Interactions
Interaction Stats
Diversity Consumption
User Prior Mood and Artist Mood
Post-Study Survey
Accuracy
Post-Study Survey
Diversity
Post-Study Survey
Confusing Interface
Post-Study Survey
Easy to use
CONCLUSIONS (& CURRENT) & FUTURE WORK
Challenges in Interactive RecSys
• Objectives
• Controllability
• Context-aware recommendations
• Privacy
• Visualization Techniques
• Interaction Techniques
• Conversational Interfaces
• Evaluation Methodology
Future Work
• Opportunities for using new devices (sensors on smartphones, EEG)
• Although new devices can capture many new types of data, there is still a lot to be done with data we already produce but haven't consumed (user logs on social web sites, etc.)
3/29/17 D. Parra, FuturePD talk, UMAP 2016 95
MoodPlay in the Chilean news
Moodplay as therapy?
• S. Koelsch. A neuroscientific perspective on music therapy. Annals of the New York Academy of Sciences, 1169(1):374–384, 2009.
• Music can help modulate certain mental states.
Previous work: MIT Mood Meter
• http://moodmeter.media.mit.edu/
Input Data: from Social Networks?
• Michelle Zhou’s personality profile
Visual emotion detection
• https://github.com/auduno/clmtrackr
Using EEG (BCI)
EMOTIV: http://emotiv.com/epoc/
NEUROSKY: http://neurosky.com/biosensors/
Heatmaps to Moodplay
EpistAid
• Epistemonikos: Evidence-based Medicine
• Physicians answer clinical questions
EpistAid 2
• Process of building evidence matrices is really slow
EpistAid: IUI to support physicians
• Study 1: Relevance Feedback to find missing papers faster, off-line evaluation
• Study 2: Study with physicians at PUC