Prof. Panos Ipeirotis
Search and the New Economy
Session 5
Mining User-Generated Content
Today’s Objectives
• Tracking preferences using social networks
  – Facebook API
  – Trend tracking using Facebook
• Mining positive and negative opinions
  – Sentiment classification for product reviews
  – Feature-specific opinion tracking
• Economic-aware opinion mining
  – Reputation systems in marketplaces
  – Quantifying sentiment using econometrics
Top-10, Zeitgeist, Pulse, …
• Tracking top preferences has been around forever
Online Social Networking Sites
• Preferences listed and easily accessible
Facebook API
• Content easily extractable
• Easy to "slice and dice" (see the sketch below):
  – List the top-5 books for 30-year-old New Yorkers
  – List the book with the highest increase across the female population last week
  – …
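A minimal sketch of this kind of slicing, assuming the preference data has already been collected (e.g., through the Facebook API); the profile records and field names here are hypothetical:

from collections import Counter

# Hypothetical profile records, e.g., harvested via the Facebook API
profiles = [
    {"age": 30, "city": "New York", "books": ["Freakonomics", "The Kite Runner"]},
    {"age": 30, "city": "New York", "books": ["Freakonomics"]},
    {"age": 42, "city": "Boston", "books": ["Moneyball"]},
]

def top_books(profiles, age, city, k=5):
    """Top-k favorite books among users in a given age/city slice."""
    counts = Counter()
    for p in profiles:
        if p["age"] == age and p["city"] == city:
            counts.update(p["books"])
    return counts.most_common(k)

print(top_books(profiles, age=30, city="New York"))
# [('Freakonomics', 2), ('The Kite Runner', 1)]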
Demo
Today’s Objectives
• Tracking preferences using social networks
  – Facebook API
  – Trend tracking using Facebook
• Mining positive and negative opinions
  – Sentiment classification for product reviews
  – Feature-specific opinion tracking
• Economic-aware opinion mining
  – Reputation systems in marketplaces
  – Quantifying sentiment using econometrics
Customer-generated Reviews
• Amazon.com started with books
• Today there are review sites for almost everything
• In contrast to "favorites" lists, reviews give us information even for less popular products
Questions
• Are reviews representative?
• How do people express sentiment?
[Screenshot of a product review: the star rating (1-5 stars), the helpfulness votes by other customers, and the review text]
Do People Trust Reviews?
• Law of large numbers: a single review, no; multiple reviews, yes
• Peer feedback: number of "useful" votes
• Perceived usefulness is affected by:
  – Identity disclosure: users trust real people
  – Mixture of objective and subjective elements
  – Readability, grammaticality
• Negative reviews that are useful may increase sales! (Why?)
Are Reviews Representative?
[Four candidate histograms of rating counts by star level (1-5). Guess which shape is the real one?]
What is the Shape of the Distribution of Number of Stars?
Observation 1: Reporting Bias
[Histogram of observed rating counts by star level (1-5)]
Why?
Implications for word-of-mouth (WOM) strategy?
Possible Reasons for Biases
• People don’t like to be critical
• People do not post if they do not feel strongly about the product (positively or negatively)
Observation 2: The SpongeBob Effect
SpongeBob SquarePants versus the Oscar winners:
• Oscar winners 2000-2005: average rating 3.7 stars
• SpongeBob DVDs: average rating 4.1 stars
And the winner is… SpongeBob!
If the SpongeBob effect is common, then ratings do not accurately signal the quality of a resource
What is Happening Here?
• People choose movies they think they will like, and they are often right
  – Ratings only tell us that "fans of SpongeBob like SpongeBob"
  – Self-selection
• Oscar winners draw a wider audience
  – Their ratings are much more representative of the general population
• When SpongeBob gets a wider audience, his ratings drop
Title                  | # Ratings | Avg. Rating
SpongeBob Season 2     | 3,047     | 4.12
Tide and Seek          | 3,114     | 4.05
SpongeBob the Movie    | 21,918    | 3.49
Home Sweet Pineapple   | 2,007     | 4.10
Fear of a Krabby Patty | 1,641     | 4.06
Effect of Self-Selection: Example
• 10 people see SpongeBob's 4-star ratings:
  – 3 are already SpongeBob fans: they rent the movie and award 5 stars
  – 6 already know they don't like SpongeBob and do not see the movie
  – The last person doesn't know SpongeBob, is impressed by the high ratings, rents the movie, and rates it 1 star
Result (simulated below):
• The average rating remains unchanged: (5+5+5+1)/4 = 4 stars
• 9 of the 10 consumers did not really need the rating system
• The only consumer who actually used the rating system was misled
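The arithmetic generalizes; a minimal simulation of the example above:

# Self-selection: fans rate, skeptics abstain, and the average stays high
fans = 3        # already like SpongeBob: rent the movie, award 5 stars
skeptics = 6    # already dislike it: never rent, never rate
uninformed = 1  # trusts the 4-star average, rents, rates 1 star

ratings = [5] * fans + [1] * uninformed  # skeptics contribute no ratings
print(sum(ratings) / len(ratings))       # 4.0: unchanged, yet 7 of 10 people
                                         # would not have enjoyed the movie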
Bias-Resistant Reputation System
• We want P(S), but we collect data on P(S|R)
  – S = satisfied with the resource
  – R = resource selected (and reviewed)
• However, P(S|E) ≈ P(S|E,R), where E = expects to like the resource
  – The likelihood of satisfaction depends primarily on the expectation of satisfaction, not on the selection decision
  – If we can collect the prior expectation, the gap between the evaluation group and the feedback group disappears
  – Whether you select the resource or not doesn't matter
Bias-Resistant Reputation System
Before viewing: "I think I will:"
  – Love this movie
  – Like this movie
  – It will be just OK
  – Somewhat dislike this movie
  – Hate this movie
After viewing: "I liked this movie:"
  – Much more than expected
  – More than expected
  – About the same as I expected
  – Less than I expected
  – Much less than I expected
The before-viewing answer segments respondents into big fans, everyone else, and skeptics.
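A minimal sketch of how such responses could be aggregated; the answer wording and the answer-to-score mapping are assumptions, not part of the original design:

# Score each review by its surprise relative to the stated expectation,
# putting big fans and skeptics on a common scale
SURPRISE = {"much more": +2, "more": +1, "about the same": 0,
            "less": -1, "much less": -2}

# (prior expectation, post-viewing answer) pairs
responses = [("love", "about the same"), ("hate", "much more"), ("like", "less")]

score = sum(SURPRISE[post] for _prior, post in responses) / len(responses)
print(score)  # 0.33...: on average, slightly better than viewers expected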
Conclusions
1. Reporting bias and self-selection bias exist in most cases of consumer choice
2. Bias means that user ratings do not reflect the distribution of satisfaction in the evaluation group
   – Consumers have no idea what "discount" to apply to ratings to get a true idea of quality
3. Many current rating systems may be self-defeating
   – Accurate ratings promote self-selection, which leads to inaccurate ratings
4. Collecting prior expectations may help address this problem
OK, we know the biases
• Can we get more knowledge?
• Can we dig deeper than the numeric ratings?
  – "Read the reviews!"
  – "There are too many!"
Independent Sentiment Analysis
• Often we need to analyze opinions
  – Can we provide review summaries?
  – What should the summary be?
Basic Sentiment classification
• Classify full documents (e.g., reviews, blog postings) based on the overall sentiment
  – Positive, negative, and (possibly) neutral
• Similar to, but also different from, topic-based text classification
  – In topic-based classification, topic words are important:
    • diabetes, cholesterol → health
    • election, votes → politics
  – In sentiment classification, sentiment words are more important, e.g., great, excellent, horrible, bad, worst, etc.
  – Sentiment words are usually adjectives or adverbs, or specific expressions ("it rocks", "it sucks", etc.)
• Useful when doing aggregate analysis (see the sketch below)
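A minimal document-level sentiment classifier, sketched with scikit-learn on made-up reviews (any real system needs far more training data):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["great camera, excellent pictures", "horrible battery, the worst",
           "it rocks, absolutely great", "bad zoom and terrible screen"]
labels = ["pos", "neg", "pos", "neg"]

# Bag-of-words (with bigrams, to catch expressions like "it rocks") + Naive Bayes
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["excellent zoom but bad battery"]))  # overall sentiment guess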
Can we go further?
• Sentiment classification is useful, but it does not find what the reviewer liked and disliked
  – Negative sentiment does not mean that the reviewer dislikes everything about the object
  – Positive sentiment does not mean that the reviewer likes everything
• Go to the sentence level and the feature level
Extraction of features
• Two types of features: explicit and implicit
• Explicit features are mentioned and evaluated directly
  – "The pictures are very clear."
  – Explicit feature: picture
• Implicit features are evaluated but not mentioned
  – "It is small enough to fit easily in a coat pocket or purse."
  – Implicit feature: size
• Extraction: frequency-based approach (sketched below)
  – Focus on frequent features (main features)
  – Infrequent features can be listed as well
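A minimal sketch of the frequency-based approach, treating frequently mentioned nouns as candidate explicit features (NLTK's tokenizer and tagger models must be downloaded first):

from collections import Counter
import nltk  # requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

reviews = ["The pictures are very clear.", "Great pictures, poor battery.",
           "Battery life is too short."]

nouns = Counter()
for review in reviews:
    for word, tag in nltk.pos_tag(nltk.word_tokenize(review.lower())):
        if tag.startswith("NN"):  # singular and plural nouns
            nouns.update([word])

print(nouns.most_common(3))  # frequent nouns ~ candidate features,
                             # e.g., [('pictures', 2), ('battery', 2), ('life', 1)]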
Identify opinion orientation of features
• Use sentiment words and phrases
  – Identify words that are often used to express positive or negative sentiments
  – There are many ways (dictionaries, WordNet, collocation with known adjectives, …)
• Use the orientation of the opinion words as the sentence orientation, e.g., as a sum (see the sketch below):
  – a negative word near the feature counts -1
  – a positive word near the feature counts +1
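A minimal sketch of this sum, with toy word lists standing in for a real sentiment lexicon:

POSITIVE = {"great", "excellent", "clear", "amazing"}
NEGATIVE = {"bad", "poor", "horrible", "blurry"}

def feature_orientation(sentence, feature, window=3):
    """Sum +1/-1 for opinion words within `window` words of the feature."""
    words = sentence.lower().replace(".", "").split()
    if feature not in words:
        return 0
    i = words.index(feature)
    nearby = words[max(0, i - window): i + window + 1]
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in nearby)

print(feature_orientation("The pictures are very clear", "pictures"))  # +1
print(feature_orientation("Poor battery but great price", "battery"))  # 0: +1 and -1 cancel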
Two types of evaluations
• Direct opinions: sentiment expressions on some objects/entities (products, events, topics, individuals, organizations, etc.)
  – E.g., "the picture quality of this camera is great"
  – Subjective
• Comparisons: relations expressing similarities, differences, or the ordering of more than one object
  – E.g., "car X is cheaper than car Y"
  – Objective or subjective
  – Compare feature quality
  – Compare feature existence
Visual Summarization & Comparison
[Visual summary: for each feature (picture, battery, size, weight, zoom), bars of positive (+) and negative (-) opinions for digital camera 1; a comparison chart contrasts digital camera 1 against digital camera 2]
Example: iPod vs. Zune
Today’s Objectives
• Tracking preferences using social networks
  – Facebook API
  – Trend tracking using Facebook
• Mining positive and negative opinions
  – Sentiment classification for product reviews
  – Feature-specific opinion tracking
• Economic-aware opinion mining
  – Reputation systems in marketplaces
  – Quantifying sentiment using econometrics
Comparative Shopping in e-Marketplaces
Customers Rarely Buy Cheapest Item
Are Customers Irrational?
[Screenshot: BuyDig.com earns a price premium of $11.04; customers pay more than the minimum price]
Price Premiums @ Amazon
[Histogram: number of transactions (y-axis, 0 to 10,000) versus price premium (x-axis, -100 to +100). Are customers irrational?]
Why Not Buy the Cheapest?
• You buy more than a product
• Customers do not pay only for the product
• Customers also pay for a set of fulfillment characteristics
  – Delivery
  – Packaging
  – Responsiveness
  – …
• Customers care about the reputation of sellers!
Reputation Systems are Review Systems for Humans
Example of a reputation profile
Basic idea
• Conjecture: price premiums measure reputation
• Reputation is captured in the text feedback
• Examine how text affects price premiums (and get sentiment analysis as a side effect)
Outline
• How we capture price premiums
• How we structure text feedback
• How we connect price premiums and text
Data
Overview
• Panel of 280 software products sold on Amazon.com × 180 days
• Data from the "used goods" market
• Amazon Web Services facilitate capturing transactions
• No need for any proprietary Amazon data
Data: Secondary Marketplace
Data: Capturing Transactions
[Timeline: Jan 1 through Jan 8]
• We repeatedly "crawl" the marketplace using Amazon Web Services
• While the listing appears, the item is still available → no sale
Data: Capturing Transactions
[Timeline: Jan 1 through Jan 10]
• We repeatedly "crawl" the marketplace using Amazon Web Services
• When the listing disappears → item sold
Capturing transactions and “price premiums”
Data: Transactions
• When an item is sold, its listing disappears
[Timeline: Jan 1 through Jan 10; item sold on 1/9]
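A minimal sketch of the detection logic, assuming each crawl yields the set of listing ids currently visible (the ids are hypothetical; for simplicity this ignores sellers who simply withdraw an item):

snapshots = {
    "Jan 8": {"listing-17", "listing-23", "listing-41"},
    "Jan 9": {"listing-17", "listing-41"},
}

def infer_sales(day_before, day_after):
    """Listings present one day and gone the next are treated as sold."""
    return snapshots[day_before] - snapshots[day_after]

print(infer_sales("Jan 8", "Jan 9"))  # {'listing-23'}: item sold on Jan 9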
Data: Variables of Interest
• Price premium: the price charged by a seller minus the listed price of a competitor
  Price Premium = (Seller Price - Competitor Price)
• Calculated for each seller-competitor pair, for each transaction
• Each transaction generates M observations (M = number of competing sellers)
• Alternative definitions (see the sketch below):
  – Average price premium (one per transaction)
  – Relative price premium (relative to the seller price)
  – Average relative price premium (combination of the above)
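A minimal sketch of these definitions for a single transaction with M competing listings:

def price_premiums(seller_price, competitor_prices):
    premiums = [seller_price - c for c in competitor_prices]  # one per competitor
    avg_premium = sum(premiums) / len(premiums)               # one per transaction
    rel_premiums = [p / seller_price for p in premiums]       # relative to seller price
    avg_rel_premium = sum(rel_premiums) / len(rel_premiums)   # combination of the above
    return premiums, avg_premium, rel_premiums, avg_rel_premium

premiums, avg_p, rel_p, avg_rel_p = price_premiums(50.0, [45.0, 48.0, 55.0])
print(premiums, round(avg_p, 2))  # [5.0, 2.0, -5.0] 0.67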
Price premiums @ Amazon
[Histogram: number of transactions (0 to 10,000) versus price premium (-100 to +100)]
Average price premiums @ Amazon
[Histogram: number of transactions (0 to 1,200) versus average price premium (-100 to +100)]
Relative Price Premiums
[Histogram: counts (0 to 20,000) of relative price premiums, in bins of width 0.1 from -1.0 to +1.0]
Average Relative Price Premiums
[Histogram: counts (0 to 2,500) of average relative price premiums, in bins of width 0.1 from -1.0 to +0.9]
Outline
• How we capture price premiums
• How we structure text feedback
• How we connect price premiums and text
Decomposing Reputation
• Is reputation just a scalar metric?
  – Many studies assumed a "monolithic" reputation
  – Instead, break reputation down into individual components
• Sellers are characterized by a set of fulfillment characteristics (packaging, delivery, and so on)
  – What are the characteristics valued by consumers?
• We think of each characteristic as a dimension, represented by a noun, noun phrase, verb, or verb phrase ("shipping", "packaging", "delivery", "arrived")
• Use (simple) natural language processing tools
• Scan the textual feedback to discover these dimensions
Decomposing and Scoring Reputation
• We think of each characteristic as a dimension, represented by a noun or verb phrase ("shipping", "packaging", "delivery", "arrived")
• Buyers rate sellers on these dimensions using modifiers (adjectives or adverbs), not numerical scores:
  – "Fast shipping!"
  – "Great packaging"
  – "Awesome unresponsiveness"
  – "Unbelievable delays"
  – "Unbelievable price"
• How can we find out the meaning of these adjectives?
Structuring Feedback Text: Example
Parsing the feedback (see the sketch below):
• P1: "I was impressed by the speedy delivery! Great service!"
• P2: "The item arrived in awful packaging, but the delivery was speedy"
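A minimal sketch of the parsing step, using a crude adjacent modifier + noun pattern over part-of-speech tags (a full parser would also catch "the delivery was speedy" in P2):

import nltk  # requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

feedback = ["I was impressed by the speedy delivery! Great service!",
            "The item arrived in awful packaging, but the delivery was speedy"]

pairs = []
for posting in feedback:
    tagged = nltk.pos_tag(nltk.word_tokenize(posting.lower()))
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if t1.startswith(("JJ", "RB")) and t2.startswith("NN"):  # modifier + dimension
            pairs.append((w1, w2))

print(pairs)  # e.g., [('speedy', 'delivery'), ('great', 'service'), ('awful', 'packaging')]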
Deriving reputation score
• We assume that a modifier assigns a "score" to a dimension:
  – α(μ, k): the score assigned when modifier μ evaluates the k-th dimension
  – w(k): the weight of the k-th dimension
• The overall (text) reputation score Π(i) is thus a sum; for the two postings above:

  Π(i) = 2·α(speedy, delivery)·w(delivery)
       + 1·α(great, service)·w(service)
       + 1·α(awful, packaging)·w(packaging)

• Both the α scores and the weights w(k) are unknown; how can we estimate them?
Outline
• How we capture price premiums
• How we structure text feedback
• How we connect price premiums and text
Sentiment Scoring with Regressions
Scoring the dimensions
• Use price premiums as the "true" reputation score Π(i)
• Use regression to estimate the scores (coefficients)
Regressions
• Control for all other variables that affect price premiums
• Control for all numeric scores of reputation
• Examine the effect of text, everything else being equal: e.g., if a seller with "fast delivery" commands a $10 premium over a seller with "slow delivery", then "fast delivery" is $10 better than "slow delivery"
• The products α(μ, k)·w(k) appear as the estimated coefficients in the regression (sketched below):

  Price Premium = Π(i) = 2·α(speedy, delivery)·w(delivery)
                       + 1·α(great, service)·w(service)
                       + 1·α(awful, packaging)·w(packaging)
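A minimal sketch of the regression on toy data; a real specification would also include the controls listed above (prices, product effects, numeric reputation):

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LinearRegression

# Each transaction: counts of (modifier, dimension) pairs + observed price premium
transactions = [
    ({"speedy delivery": 2, "great service": 1}, 12.0),
    ({"awful packaging": 1, "speedy delivery": 1}, 3.0),
    ({"awful packaging": 2}, -8.0),
]

X_dicts, y = zip(*transactions)
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(X_dicts)

model = LinearRegression().fit(X, y)
for phrase, coef in zip(vec.get_feature_names_out(), model.coef_):
    print(f"{phrase}: ${coef:+.2f}")  # estimated dollar value of each phrase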
Some Indicative Dollar Values
[Table of indicative dollar values for positive and negative (modifier, dimension) pairs; e.g., "good packaging" ≈ -$0.56]
• A natural method for extracting sentiment strength and polarity
• Naturally captures the pragmatic meaning within the given context: is "good packaging" positive or negative? The estimate (-$0.56) suggests buyers read "good" as faint praise
• Captures misspellings as well
Results
Some dimensions that matter:
• Delivery and contract fulfillment (extent and speed)
• Product quality and appropriate description
• Packaging
• Customer service
• Price (!)
• Responsiveness/communication (speed and quality)
• Overall feeling (transaction)
More Results
Further evidence: who will make the sale?
• A classifier that predicts the sale, given a set of sellers
  – Binary decision between seller and competitor
  – Used decision trees (for interpretability; sketch below)
  – Trained on data from Oct-Jan, tested on data from Feb-Mar
• Accuracy:
  – Prices and product characteristics only: 55%
  – + numeric reputation (stars), lifetime: 74%
  – + encoded textual information: 89%
  – Text only: 87%
• Text carries more information than the numeric metrics
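A minimal sketch of such a classifier; the difference features and the toy data are hypothetical:

from sklearn.tree import DecisionTreeClassifier, export_text

# Per seller-vs-competitor pair: [price diff, star diff, "fast delivery" mention diff]
X = [[5.0, 0.5, 3], [-2.0, -1.0, -4], [10.0, 1.0, 5], [1.0, -0.5, -2]]
y = [1, 0, 1, 0]  # 1 = the seller wins the sale despite the higher price

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["d_price", "d_stars", "d_fast_delivery"]))
print(tree.predict([[3.0, 0.2, 1]]))  # who wins a new pairing?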
Other applications
• Summarize and query reputation data
  – "Give me all merchants that deliver fast"
      SELECT merchant FROM reputation
      WHERE delivery > 'fast'
  – Summarize the reputation of seller XYZ Inc.:
      Delivery: 3.8/5
      Responsiveness: 4.8/5
      Packaging: 4.9/5
• Pricing reputation
  – "Given the competition, merchant XYZ can charge $20 more and still make the sale (confidence: 83%)"
Reputation Pricing Tool for Sellers
[Mock-up: pricing dashboard for seller uCameraSite.com]
• Your last 5 transactions in Cameras:
  1. Canon PowerShot x300
  2. Kodak EasyShare 5.0MP
  3. Nikon Coolpix 5.1MP
  4. Fuji FinePix 5.1
  5. Canon PowerShot x900
• Your competitive landscape for the Canon PowerShot x300 (price and reputation):
  Seller 1: $431 (4.8)
  Seller 2: $409 (4.65)
  You: $399 (4.7)
  Seller 3: $382 (3.9)
  Seller 4: $379 (3.6)
  Seller 5: $376 (3.4)
• Your price: $399. Your reputation price: $419. Your reputation premium: $20 (5%) left on the table.
Tool for Seller Reputation Management
Quantitatively understand and manage seller reputation.
[Mock-up: seller reputation dashboard]
• How your customers see you relative to other sellers (percentile of all merchants): Service 35%, Packaging 69%, Delivery 89%, Overall 82%, Quality 95%
• Dimensions of your reputation and their relative importance to your customers: Service 25%, Packaging 14%, Delivery 7%, Quality 45%, Other 9%
• RSI products automatically identify the dimensions of reputation from textual feedback
• Dimensions are quantified relative to other sellers and relative to buyer importance
• Sellers can understand their key dimensions of reputation and manage them over time
• Arms sellers with vital info to compete on reputation dimensions other than low price
Tool for Buyers: Marketplace Search
[Mock-up: used-goods marketplace search (e.g., Amazon) for a Canon PS SD700, price range $250-$300]
• Results can be sorted by price, service, delivery, or other reputation dimensions
• Dimension comparison: Sellers 1-7 compared side by side on price, service, packaging, and delivery
Summary
• User feedback defines reputation → price premiums
• Generalize: user-generated content affects "markets"
  – Reviews and product sales
  – News/blogs and elections
• Examine changes in demand and estimate the weights of features and the strength of evaluations
Product Reviews and Product Sales
"poor lenses" → +3%      "excellent lenses" → -1%
"poor photos" → +6%      "excellent photos" → -2%

• The feature "photos" is twice as important as "lenses"
• "Excellent" is positive, "poor" is negative
• "Excellent" is three times stronger than "poor"
Question: Reviews and Ads
• Given product review summaries (potentially with economic impact), can we improve ad generation?
• How?
• Is your strategy incentive-compatible?
Sentiment & Presidential Election
Political News and Prediction Markets
[Chart: prediction-market prices for Hillary Clinton]
[Chart: prediction-market prices for Hillary Clinton, Feb 2nd]
[Chart: prediction-market prices for Mitt Romney]
[Chart: prediction-market prices for Mitt Romney, Feb 2nd]
Summary
• We can quantify unstructured, qualitative data. We need:
  – A context in which content is influential and not redundant (e.g., experiential content)
  – A measurable economic variable: price (premium), demand, cost, customer satisfaction, process cycle time
  – Methods for structuring unstructured content
  – Methods for aggregating the variables in a business-context-aware manner
Question:
• What needs to be done for other types of UGC?
  – Structuring: opinions are expressed in many ways
  – Independent summaries: not all scenarios have associated economic outcomes, or the outcomes are difficult to measure (e.g., discussion about a product pre-announcement)
  – Personalization: the weight of each person's opinion varies (interesting future direction!)
  – Data collection: evaluations are rarely all in one place