Don’t be Deceived: Using Linguistic Analysis to Learn How to Discern Online Review Authenticity

Snehasish Banerjee*

Wee Kim Wee School of Communication and Information, Nanyang Technological University,

31 Nanyang Link, Singapore 637718. E-mail: [email protected]

Alton Y. K. Chua

Wee Kim Wee School of Communication and Information, Nanyang Technological University,

31 Nanyang Link, Singapore 637718. E-mail: [email protected]

Jung-Jae Kim

Institute for Infocomm Research, #21-01 Connexis (South Tower), 1 Fusionopolis Way,

Singapore 138632. E-mail: [email protected]

This author was based at the School of Computer Engineering, Nanyang Technological

University when the research was conducted.

* Corresponding author


Abstract

This paper uses linguistic analysis to help users discern the authenticity of online reviews.

Two related studies were conducted using hotel reviews as the test case for investigation.

The first study analyzed 1,800 authentic and fictitious reviews based on the linguistic cues

of comprehensibility, specificity, exaggeration and negligence. The analysis involved

classification algorithms followed by feature selection and statistical tests. A filtered set of

variables that helped discern review authenticity was identified. The second study

incorporated these variables to develop a guideline that aimed to inform humans how to

distinguish between authentic and fictitious reviews. The guideline was used as an

intervention in an experimental setup that involved 240 participants. The intervention

improved human ability to identify fictitious reviews amid authentic ones.


Introduction

User-generated online reviews are commonplace yet their authenticity cannot be blindly

assumed. This is due to opinion spamming—posting fictitious reviews that resemble authentic

ones (Ott, Choi, Cardie, & Hancock, 2011). As users struggle to discern review authenticity, they

run the risk of being misled. Although users are often warned by review websites about the

prevalence of fictitious entries, they receive little guidance on ways to identify fictitious reviews

amid authentic ones.

This problem has motivated two disparate research strands. The first distinguishes

between authentic and fictitious reviews using classification algorithms but ignores users’

perceptions (Jindal & Liu, 2008; Ott et al., 2011). The second strand examines users’ perceptions

such as perceived review credibility through user studies but ignores whether users are able to

distinguish between authentic and fictitious reviews (Park, Lee, & Han, 2007; Sidali, Schulze, &

Spiller, 2009).

However, these two research strands are yet to converge. As a consequence, scholars have seldom attempted to shed light on both the actual and the user-perceived differences between authentic and fictitious reviews concurrently. Furthermore, the potential of combining methodological approaches from both strands, namely classification algorithms and user studies, has hardly been exploited.


To bridge this chasm in the extant literature, the objective of this paper is to examine the

potential of linguistic analysis to help users distinguish between authentic and fictitious reviews.

Authentic reviews are defined as those written based on post-purchase experiences, while fictitious reviews are those written based on imagination.

To achieve this objective, this paper conducts two related studies (Figure 1). The first

study (henceforth, Linguistic Study) analyzes linguistic differences between authentic and

fictitious reviews. After all, the use of language could offer clues to discern review authenticity

(Heydari, Tavakoli, Salim & Heydari, 2015; Johnson & Raye, 1981). A theoretically-informed

linguistic framework is proposed. Using a dataset of 1,800 hotel reviews (900 authentic + 900

fictitious), the linguistic cues in the framework were measured through 83 variables. The

study employed classification algorithms, feature selection, and statistical analyses to identify a

filtered set of linguistic variables that helped distinguish between authentic and fictitious

reviews.

The second study (henceforth, User Study) extends the Linguistic Study by examining

the extent to which users’ perceptions of linguistic differences between authentic and fictitious

reviews help them discern authenticity. Informed by the filtered set of linguistic variables

identified in the Linguistic Study, a guideline was developed to show users how to distinguish between authentic and fictitious reviews. This is necessary because guidance on ways to evaluate information improves users’ information-processing strategies (Argelagós & Pifarré, 2012). After

pre-tests, the guideline was used as an intervention in an experimental setup. The efficacy of the

intervention was examined using 240 annotators (120 with intervention + 120 without

intervention).

Insert Figure 1 here


FIG. 1. Two related studies conducted to address the objective.

The paper proceeds as follows. The next section reviews the literature, which culminates

in the linguistic framework to distinguish between authentic and fictitious reviews. The

Linguistic Study and the User Study are presented thereafter. Following that, the major findings

are discussed. The paper concludes by highlighting its contributions and limitations.

Literature Review

There are at least four major approaches to distinguish between authentic and fictitious reviews. One approach relies on websites’ metadata, which include review-related, reviewer-related and product-related information (Jindal & Liu, 2008; Li et al., 2011). Such an approach is useful because it leverages information that is automatically captured on websites. However, it cannot be employed on reviews collected from websites that do not display metadata liberally.

A second approach involves rule-based detection. Some works develop rules based on

patterns of incoming reviews for a product or a service (Feng, Xing, Gogar, & Choi, 2012; Wu,

Greene, & Cunningham, 2010) while others rely on behavioral footprints of individuals who

contribute reviews (Jindal, Liu, & Lim, 2010; Mukherjee et al., 2013). This approach enables flagging manipulated products or services and spammers. However, since it relies on macro trends and patterns, its focus is not to ascertain the authenticity of individual reviews.

A third approach involves duplicate detection. It allows identifying duplicates from

different user ids on the same product, from different user ids on different products, from the

same user id on the same product, and from the same user id on different products (Gera &

Singh, 2015; Jindal & Liu, 2008). However, this approach implicitly treats non-duplicate reviews as authentic. Yet even non-duplicate reviews could be fictitious, because spammers need not always make blatant copies of existing entries.

Another approach involves language-based detection. It hinges on the long-known

premise that authentic texts written based on experiences differ linguistically from fictitious texts

concocted out of imagination (Johnson & Raye, 1981; Rubin & Lukoianova, 2015). Although the

language of authentic and fictitious reviews could appear similar to the naked eye (DePaulo et

al., 2003), linguistic nuances continue to be heralded as “the first thing to be considered” to

automatically distinguish between authentic and fictitious reviews (Heydari et al., 2015, p.

3635). Siding with this stream of literature (Ott et al., 2011; Yoo & Gretzel, 2009), the paper


argues that the lack of first-hand experiences in fictitious reviews would make their language

subtly different from authentic entries.

Such a language-based approach is more viable than the metadata-based approach because it can be employed on reviews collected from any website regardless of the availability of metadata. It deviates from the rule-based detection approach by ascertaining the authenticity of individual reviews rather than relying on macro trends and patterns. It is superior to the duplicate detection approach because it allows identification of non-duplicate fictitious

reviews. Therefore, this paper illustrates the possibility of using language to discern review

authenticity.

Concurrently, interventions on critical evaluation of information are known to inform

humans’ information-processing strategies (Argelagós & Pifarré, 2012; Kammerer, Amann, &

Gerjets, 2015; Munzel, 2015). For example, interventions in the form of a one-hour instructional

unit improved human ability to differentiate between reliable and unreliable health information

(Wiley et al., 2009). Perhaps, such long interventions to discern review authenticity would be

cognitively onerous. Nonetheless, exploring if short interventions improve human ability to

distinguish between authentic and fictitious reviews is worthwhile.

Recognizing the importance of linguistic cues in developing such an intervention, this

paper proposes a framework to distinguish between authentic and fictitious reviews. The

framework specifically identifies four linguistic cues that include comprehensibility, specificity,

exaggeration and negligence (DePaulo et al., 2003; Ekman & Friesen, 1969; Johnson & Raye,

1981; McCornack, 1992).

Comprehensibility


Comprehensibility is the extent to which reviews are easy to understand. Differences in

comprehensibility between authentic and fictitious reviews stem from the information

manipulation theory (McCornack, 1992), as well as the self-presentational perspective (DePaulo

et al., 2003). The information manipulation theory expects authentic and fictitious reviews to

differ in quantity and clarity, both of which shape comprehensibility. The self-presentational

perspective suggests that authentic reviews could be easier to understand vis-à-vis fictitious ones.

Unlike authentic reviews, fictitious entries could contain grandiloquent language to compensate

for the lack of real experience (Ghose & Ipeirotis 2011).

In this vein, two competing views exist. The first holds authentic reviews to be less

comprehensible than fictitious ones. This is based on the assumption that individuals writing

authentic reviews have lower cognitive load, and hence, greater bandwidth to craft sophisticated

sentences (Burgoon & Qin, 2006; Johnson & Raye, 1981; Newman, Pennebaker, Berry, &

Richards, 2003). Such sophistication makes authentic reviews difficult to comprehend. The

second view however posits that authentic reviews would be more comprehensible than fictitious

ones. The assumption is that unlike authentic reviews, fictitious ones are written by individuals

who are overly eager to use sophisticated language in order to sound credible (Ghose &

Ipeirotis, 2011; Yoo & Gretzel, 2009). To examine which view prevails, investigating

differences in comprehensibility between authentic and fictitious reviews is needed.

Comprehensibility is commonly conceptualized as three sub-dimensions—readability,

word familiarity and structural features. Readability denotes the expertise required to grasp the

meaning of reviews (Zakaluk & Samuels, 1998). Word familiarity is the degree to which reviews

contain easily-recognizable words (Chall & Dale, 1995). Structural features include superficial

characteristics such as number of characters per word (Cao, Duan & Gan, 2011). Reviews that


are readable, use familiar words, but avoid long words are deemed comprehensible (Burgoon et

al., 2016; Ghose & Ipeirotis, 2011).

Specificity

Specificity is the extent to which reviews are detailed. Differences in specificity between

authentic and fictitious reviews stem from the information manipulation theory (McCornack,

1992), as well as the reality monitoring theory (Johnson & Raye, 1981). The information

manipulation theory suggests that authentic reviews could be more specific than fictitious ones.

The reality monitoring theory expects authentic reviews to contain more perceptual and

contextual details vis-à-vis fictitious ones.

Authentic reviews are generally expected to be more specific than fictitious ones, which

describe events that did not occur or attitudes that did not exist (Newman et al., 2003). However,

fictitious reviews could also be more specific than authentic ones. Since the former is written

based on imagination, the lack of experience could be over-compensated through concocted

specificity (Hancock, Curry, Goorha, & Woodworth, 2005). The lack of consensus suggests

merit in examining differences in specificity between authentic and fictitious reviews.

Specificity is commonly conceptualized as three sub-dimensions—informativeness,

perceptual details, and contextual details. Informativeness refers to the content-richness of

reviews (Ott et al., 2011). Perceptual details indicate the use of sensory perceptions while

contextual details include spatio-temporal references (Hancock et al., 2005). Informative reviews

containing perceptual and contextual details are deemed specific (Johnson & Raye, 1981;

Rayson, Wilson, & Leech, 2001).


Exaggeration

Exaggeration is the extent to which reviews attempt to sound convincing. Differences in

exaggeration between authentic and fictitious reviews stem from the self-presentational

perspective (DePaulo et al., 2003). The theory suggests that rhetorical strategies could be

overdone in fictitious reviews to sound convincing. Fictitious reviews might use over-the-top

superlatives while authentic entries could sound innocuous.

Authentic reviews are generally expected to be less exaggerated than fictitious ones

(Maurer & Schaich, 2011). However, recent literature finds evidence of spammers becoming

smarter to blur the lines between authentic and fictitious entries (Abulaish & Bhat, 2015). To

deliberately mimic authentic entries, fictitious reviews might not be overly exaggerated. To catch

up with the growing skills of spammers, analyzing the level of exaggeration in authentic and

fictitious reviews is a timely undertaking.

Exaggeration is commonly conceptualized as four sub-dimensions—affectiveness, tenses,

emphases and syntactic features. Affectiveness refers to the use of positive or negative emotion

words that create a lasting impact (Maurer & Schaich, 2011). Tenses indicate the chronological

focus of attention in reviews (Tausczik & Pennebaker, 2010). Emphases refer to the use of

hyperbolic expressions (Yoo & Gretzel, 2009). Syntactic features represent writing style at the

sentence-level through the use of punctuations and function words (Afroz, Brennan, &

Greenstadt, 2012; Shojaee, Murad, Azman, Sharef, & Nadali, 2013). Affective, hyperbolic

reviews with a temporal focus on the present or future, containing many punctuation marks but few

function words are deemed exaggerated (Maurer & Schaich, 2011; Tausczik & Pennebaker,

2010).


Negligence

Negligence is the extent to which reviews inadvertently leak out cues for deception

detection. Differences in negligence between authentic and fictitious reviews stem from the

leakage theory (Ekman & Friesen, 1969), as well as the reality monitoring theory (Johnson &

Raye, 1981). The leakage theory suggests that the lack of conscientiousness in writing fictitious

reviews makes the task challenging, thereby leaking inadvertent signals for deception detection.

The reality monitoring theory suggests that authors of fictitious reviews inadvertently use more

cognitive words than those writing authentic entries.

Individuals engaged in fictitious behavior could feel the pangs of conscience for using

underhanded tactics, or having their credibility questioned. This could result in leakage of

negligence cues in fictitious reviews. However, if spammers get increasingly adept, they might

not necessarily feel guilty while writing fictitious reviews. Rather, they could be enthused by the

opportunity to mislead others (Vartapetiance & Gillam, 2012). This creates an interesting context

to examine the level of negligence in authentic and fictitious reviews.

Negligence is commonly conceptualized as three sub-dimensions—self-references,

uncertainty words, and cognitive words. The use of self-references in reviews indicates the extent

to which authors take ownership of the entries (Mehrabian, 1967). Uncertainty words express

authors’ non-commitment toward what is being written (Burgoon et al., 2016). Cognitive words

connote authors’ psychological processing in reviews (Tausczik & Pennebaker, 2010). Reviews

with few self-references but rich in uncertainty and cognitive words are deemed to reflect

negligence (Mehrabian, 1967; Pasupathi, 2007). The proposed framework is summarized in

Table 1.

Insert Table 1 here


TABLE 1. Linguistic framework of cues to distinguish between authentic and fictitious reviews.

Linguistic cues | Underpinning theories | Sub-dimensions | References
Comprehensibility | Information manipulation theory; Self-presentational perspective | Readability | Zakaluk & Samuels (1998)
 | | Word familiarity | Chall & Dale (1995)
 | | Structural features | Cao et al. (2011)
Specificity | Information manipulation theory; Reality monitoring theory | Informativeness | Ott et al. (2011)
 | | Perceptual details | Hancock et al. (2005)
 | | Contextual details |
Exaggeration | Self-presentational perspective | Affectiveness | Maurer & Schaich (2011)
 | | Tenses | Tausczik & Pennebaker (2010)
 | | Emphases | Yoo & Gretzel (2009)
 | | Syntactic features | Shojaee et al. (2013)
Negligence | Leakage theory; Reality monitoring theory | Self-references | Mehrabian (1967)
 | | Uncertainty words | Burgoon et al. (2016)
 | | Cognitive words | Tausczik & Pennebaker (2010)

Linguistic Study

This study linguistically analyzed a dataset of 1,800 hotel reviews (900 authentic + 900

fictitious), which were measured based on the proposed framework through 83 variables. The

analysis involved classification algorithms followed by feature selection and statistical tests. A

filtered set of linguistic variables that helped distinguish between authentic and fictitious reviews

was identified.

Data Collection

Three authenticated review websites—Agoda.com, Expedia.com and Hotels.com—were

chosen to collect authentic reviews. They solicit reviews comprising titles and descriptions only

from bona fide travelers (Gössling et al., in press).

Fifteen hotels in Asia that had attracted more than 1,000 reviews across the chosen websites were identified. To enhance variability, the chosen hotels were spread uniformly across three categories: luxury, budget and mid-range. Hotel categories were ascertained by checking the consistency of hotels’ website-assigned star ratings across the three portals.

For each hotel, 60 authentic reviews were randomly collected to yield 900 entries (15

hotels x 60 reviews). To enhance variability, reviews were collected to uniformly straddle across

three sentiments (300 positive + 300 negative + 300 mixed). Sentiment was ascertained based on

polarity of the user-assigned review ratings (Gerdes, Stringam, & Brookshire, 2008). All reviews

were in English, contained meaningful titles, and meaningful descriptions of minimally 150

characters.

For each hotel, at least 60 fictitious reviews were collected, cumulatively from some 400 participants. Since authentic reviews contained titles and descriptions, fictitious entries were solicited in a similar format.

To solicit fictitious reviews, participants were identified using convenience sampling and

snowballing. They were allowed to participate upon meeting four eligibility criteria. First, they had to be aged 45 years or below. This was necessary because reviews are mostly written by young

individuals aged 45 years or below (Gretzel et al., 2007; Ip, Lee, & Law, 2012; Ratchford et al.,

2003). Second, they must have completed secondary/high school education. After all, reviews

are mostly written by educated individuals who have minimally completed secondary/high

school (Gretzel et al., 2007; Ip et al., 2012; Rong et al., 2012). Third, they must have had travel

experiences in the previous year, and read or contributed reviews regularly. This meant that they

were appropriate for the task. Fourth, they must not have stayed in the hotel for which a fictitious

review was sought. This ensured that all fictitious reviews were written based on imagination

without any post-purchase experiences.


Informed by prior studies (Ott et al., 2011; Yoo & Gretzel, 2009), participants were

instructed to write fictitious reviews—either positive, or negative, or mixed—for at most six

different hotels. They were also given the website of the hotel for which fictitious reviews were

sought.

Eventually, 900 fictitious reviews (300 positive + 300 negative + 300 mixed) written by

284 participants were admitted for analysis. All entries were in English, contained meaningful

titles, and meaningful descriptions of minimally 150 characters. The corpora of 900 authentic

reviews and 900 fictitious reviews (1,800 reviews altogether) were used for analysis. Table 2

shows an authentic review and a fictitious review in the dataset.

Insert Table 2 here

TABLE 2. Example of an authentic review and a fictitious review in the dataset.

Authentic review
Title: Newly renovated hotel
Description: Nice hotel. I like the people in this hotel very accommodating and friendly. Since the hotel is newly renovated, most of the amenities, rooms, corridors are new and beautiful. Housekeeping is also a plus. They clean the room very well. A buffet resto is near the hotel.

Fictitious review
Title: Excellent staff and service
Description: From start to finish, I was treated by courteous and professional staff. The hotel is a symbol of hospitality and my first experience has been top class. I booked a standard king room and was upgraded complimentarily to a room with a cute balcony and great view. I was told it was a deluxe club room and it was simply amazing. Every part of my stay at this hotel was made memorable and the credit goes to the staff and their service.

Measurements

In terms of the linguistic cue comprehensibility, readability was measured as the mean of

commonly used metrics such as Automated-Readability Index and Coleman-Liau Index

(Korfiatis, García-Bariocanal, & Sánchez-Alonso, 2012; Zakaluk & Samuels, 1998). Word

familiarity was calculated as the proportion of words in reviews available in the Dale-Chall

lexicon of familiar words (Chall & Dale, 1995). Structural features included number of words,


characters per word, words per sentence, and fraction of long words with 10 or more characters

(Cao et al., 2011).

In terms of the linguistic cue specificity, informativeness was ascertained based on the

proportion of eight parts-of-speech (POS)—nouns, adjectives, prepositions, articles,

conjunctions, verbs, adverbs, pronouns—and lexical diversity. Apart from being lexically diverse

(Shojaee et al., 2013), informative texts are generally rich in the first four POS categories yet scanty in the rest (Ott et al., 2011; Rayson et al., 2001; Tausczik & Pennebaker, 2010).

Perceptual details included the proportion of visual (e.g., see), aural (e.g., hear), and feeling (e.g.,

touch) words (Hancock et al., 2005; Johnson & Raye, 1981). Contextual details entailed the

fraction of spatial (e.g., around) and temporal (e.g. until) words (Bond & Lee, 2005; Johnson &

Raye, 1981).
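
Most of these proportions are reported directly by LIWC2007; nouns and adjectives came from the Stanford POS tagger, and lexical diversity was computed separately. As an illustration, the sketch below derives noun and adjective proportions and a type-token measure of lexical diversity from already-tagged tokens; the Penn Treebank tag prefixes and the type/token definition of lexical diversity are assumptions, not details given in the paper.

```java
// Illustrative sketch of noun/adjective proportions and lexical diversity from
// tagged tokens. Penn Treebank tags (NN*, JJ*) are assumed to come from the
// Stanford POS tagger; the type/token ratio used here is an assumption.
import java.util.*;

public class InformativenessFeatures {

    public static void main(String[] args) {
        // word -> Penn Treebank tag, e.g. as produced by a POS tagger
        List<String[]> tagged = Arrays.asList(
                new String[]{"the", "DT"}, new String[]{"room", "NN"},
                new String[]{"was", "VBD"}, new String[]{"clean", "JJ"},
                new String[]{"and", "CC"}, new String[]{"quiet", "JJ"});

        System.out.printf("Nouns             : %.2f%%%n", tagProportion(tagged, "NN") * 100);
        System.out.printf("Adjectives        : %.2f%%%n", tagProportion(tagged, "JJ") * 100);
        System.out.printf("Lexical diversity : %.2f%n", lexicalDiversity(tagged));
    }

    // Share of tokens whose tag starts with the given prefix (NN covers NN, NNS, NNP, ...).
    static double tagProportion(List<String[]> tagged, String tagPrefix) {
        long hits = tagged.stream().filter(t -> t[1].startsWith(tagPrefix)).count();
        return (double) hits / tagged.size();
    }

    // Ratio of distinct word types to total tokens.
    static double lexicalDiversity(List<String[]> tagged) {
        Set<String> types = new HashSet<>();
        for (String[] t : tagged) types.add(t[0].toLowerCase());
        return (double) types.size() / tagged.size();
    }
}
```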

In terms of the linguistic cue exaggeration, affectiveness was measured as the fraction of

positive and negative emotion words, as well as emotiveness—the ratio of adjectives and adverbs

to nouns and verbs (Burgoon et al., 2016; Maurer & Schaich, 2011; Missen & Boughanem,

2009). Tenses included the proportion of past, present and future tense words (Gunsch,

Brownlow, Haynes, & Mabe, 2000; Tausczik & Pennebaker, 2010). Emphases were measured as

the fraction of firm words (e.g., never), upper case characters, and references to hotel names

(Pasupathi, 2007; Tsur, Davidov, & Rappoport, 2010; Yoo & Gretzel, 2009). Syntactic features

were measured as the proportion of question marks, exclamation marks, ellipses, and all

punctuations in general (Afroz et al., 2012; Keshtkar & Inkpen, 2012; Zhou, Shi, & Zhang,

2008), as well as the fraction of function words (Tausczik & Pennebaker, 2010).

In terms of the linguistic cue negligence, self-references entailed the proportion of both

first person singular (e.g., I), and plural (e.g. we) words (Mehrabian, 1967; Tausczik &


Pennebaker, 2010). Uncertainty words included the proportion of modal verbs (e.g., could), filler

(e.g., I mean), and tentative (e.g., perhaps) words (Pasupathi, 2007; Tausczik & Pennebaker,

2010). Cognitive words were measured as the fraction of causal (e.g., hence), insight (e.g.,

think), motion (e.g., go), and exclusion (e.g., except) words (Boals & Klein, 2005; Newman et

al., 2003; Tausczik & Pennebaker, 2010).

The four linguistic cues were operationalized as 43 variables (Table 3). Most of these

were measured using the Linguistic Inquiry and Word Count (LIWC2007) tool (Pennebaker,

Booth, & Francis, 2007). However, the following 10 variables are not reported by LIWC2007:

mean readability index, word familiarity using the Dale-Chall lexicon, characters per word, long

words, nouns, adjectives, upper case characters, hotel names, ellipses, and emoticons. To

compute the proportions of nouns and adjectives, Stanford Parser’s POS tagger was utilized

(Klein & Manning, 2003). The remaining eight variables were computed using custom-

developed Java programs.

All the variables were measured separately for titles and descriptions of reviews. For

titles, however, only 40 of the 43 variables were used. Mean readability (variable #1) and words per sentence (variable #5), which depend on sentence count, were ignored because titles rarely

contain sentences. Additionally, the use of ellipses in titles (variable #32) was ignored due to few

occurrences in the dataset. Thus, each review was represented as a vector of 83 variables (40 for

titles + 43 for descriptions).

Insert Table 3 here


TABLE 3. Operationalization of the linguistic cues.

Linguistic cues | Sub-dimensions | Variables | References
Comprehensibility | Readability | (1) Mean readability# | Korfiatis et al. (2012)
 | Word familiarity | (2) Familiar words | Chall & Dale (1995)
 | Structural features | (3) Words, (4) Characters per word, (5) Words per sentence#, (6) Long words | Cao et al. (2011)
Specificity | Informativeness | (7) Nouns, (8) Adjectives, (9) Prepositions, (10) Articles, (11) Conjunctions, (12) Verbs, (13) Adverbs, (14) Pronouns, (15) Lexical diversity | Ott et al. (2011); Rayson et al. (2001); Shojaee et al. (2013); Tausczik & Pennebaker (2010)
 | Perceptual details | (16) Visual words, (17) Aural words, (18) Feeling words | Hancock et al. (2005); Johnson & Raye (1981)
 | Contextual details | (19) Spatial words, (20) Temporal words | Bond & Lee (2005); Johnson & Raye (1981)
Exaggeration | Affectiveness | (21) Positive emotion words, (22) Negative emotion words, (23) Emotiveness | Burgoon et al. (2016); Maurer & Schaich (2011); Missen & Boughanem (2009)
 | Tenses | (24) Past tense words, (25) Present tense words, (26) Future tense words | Gunsch et al. (2000); Tausczik & Pennebaker (2010)
 | Emphases | (27) Firm words, (28) Upper case characters, (29) Hotel names | Pasupathi (2007); Tsur et al. (2010); Yoo & Gretzel (2009)
 | Syntactic features | (30) Question marks, (31) Exclamation marks, (32) Ellipses#, (33) All punctuations, (34) Function words | Afroz et al. (2012); Keshtkar & Inkpen (2012); Tausczik & Pennebaker (2010); Zhou et al. (2008)
Negligence | Self-references | (35) First person singular words, (36) First person plural words | Mehrabian (1967); Tausczik & Pennebaker (2010)
 | Uncertainty words | (37) Modal verbs, (38) Filler words, (39) Tentative words | Pasupathi (2007); Tausczik & Pennebaker (2010)
 | Cognitive words | (40) Causal words, (41) Insight words, (42) Motion words, (43) Exclusion words | Boals & Klein (2005); Newman et al. (2003); Tausczik & Pennebaker (2010)

# Variables that were measured only for review descriptions but not for review titles

Analysis

The analysis adopted a two-step approach. First, authentic and fictitious reviews were classified using average probability voting among five commonly used supervised learning algorithms: C4.5, JRip, logistic regression, random forest, and support vector machine (Ghose & Ipeirotis, 2011; Ott et al., 2011; Zhou, Burgoon, Twitchell, Qin, & Nunamaker Jr., 2004). The initial pool of 83 variables was filtered through information gain (IG) and chi-squared (χ2) feature selection techniques. Only those variables with non-zero IG and non-zero χ2 values were selected for further investigation (Banerjee, Chua, & Kim, 2015; Forman, 2003; O’Mahony & Smyth, 2009).1
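
For concreteness, the following sketch shows what average probability voting amounts to: each trained base classifier returns a probability that a review is fictitious, the probabilities are averaged, and the class with the higher mean probability is predicted. The base classifiers are abstracted behind an interface here, so this illustrates only the combination rule, not the authors’ implementation of the five algorithms.

```java
// Minimal sketch of average probability voting over an ensemble.
// The five base classifiers (C4.5, JRip, logistic regression, random forest, SVM)
// are abstracted behind a functional interface; stand-in outputs are used below.
import java.util.Arrays;
import java.util.List;

public class AverageProbabilityVoting {

    /** A trained base classifier: returns P(review is fictitious) for a feature vector. */
    interface Classifier {
        double probabilityFictitious(double[] features);
    }

    /** Averages the class probabilities of all base classifiers and thresholds at 0.5. */
    static boolean predictFictitious(List<Classifier> ensemble, double[] features) {
        double meanProbability = ensemble.stream()
                .mapToDouble(c -> c.probabilityFictitious(features))
                .average()
                .orElse(0.5);
        return meanProbability > 0.5;
    }

    public static void main(String[] args) {
        // Stand-in classifiers with fixed outputs, purely to exercise the voting rule.
        List<Classifier> ensemble = Arrays.<Classifier>asList(
                f -> 0.62, f -> 0.48, f -> 0.71, f -> 0.55, f -> 0.40);
        double[] reviewFeatures = new double[83];   // one vector of the 83 linguistic variables
        System.out.println("Fictitious? " + predictFictitious(ensemble, reviewFeatures));
    }
}
```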

Second, the feature-selected variables were further tested using independent samples t-

tests to identify a filtered set of linguistic variables that differed between authentic and fictitious

reviews. When t-tests emerged statistically significant (|t|>1.96; p<0.05), the results were

enriched with qualitative insights from the dataset using the ‘Difference Between Proportions

Method’ (DBPM) word-level analysis (Gerdes Jr. et al., 2008). It involved computing the

difference in word count frequency of each word between authentic reviews and fictitious

reviews. This difference for each word was standardized into z-scores, and examined for

statistical significance (|z|>1.96; p<0.05). This helped identify specific words that differed

significantly in their occurrences between authentic and fictitious reviews.
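
The standardization step of the DBPM can be read as a two-proportion z-test on a word’s relative frequency in the authentic corpus versus the fictitious corpus. The sketch below follows that reading; the pooled-proportion formula is the standard one, but whether the original method pools the proportions exactly this way is an assumption, and the word counts used are invented for illustration.

```java
// Sketch of the word-level difference-between-proportions (DBPM) check,
// interpreted as a standard two-proportion z statistic. Counts are invented.
public class DbpmWordTest {

    /**
     * @param countAuth  occurrences of the word in authentic reviews
     * @param tokensAuth total word tokens in authentic reviews
     * @param countFict  occurrences of the word in fictitious reviews
     * @param tokensFict total word tokens in fictitious reviews
     * @return z score of the difference in the word's relative frequency
     */
    static double zScore(long countAuth, long tokensAuth, long countFict, long tokensFict) {
        double pAuth = (double) countAuth / tokensAuth;
        double pFict = (double) countFict / tokensFict;
        double pooled = (double) (countAuth + countFict) / (tokensAuth + tokensFict);
        double standardError =
                Math.sqrt(pooled * (1 - pooled) * (1.0 / tokensAuth + 1.0 / tokensFict));
        return (pAuth - pFict) / standardError;
    }

    public static void main(String[] args) {
        double z = zScore(310, 90_000, 190, 90_000);  // hypothetical counts for one word
        boolean significant = Math.abs(z) > 1.96;      // two-tailed test at p < 0.05
        System.out.printf("z = %.2f, significant = %b%n", z, significant);
    }
}
```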

Results

The proposed linguistic framework performed reasonably well (Accuracy=77.28%, F1-

measure=0.77, AUC=0.85). It accurately classified 677 of the 900 authentic reviews, and 714 of

the 900 fictitious entries.

There were 41 variables with non-zero IG and non-zero χ2 values. Of these feature-

selected variables, 38 emerged significantly different between authentic and fictitious reviews

based on t-tests (the other three feature-selected variables included mean readability index of


review descriptions, articles in review titles, and upper case characters in review titles). The

analyses corresponding to these 38 variables, which constituted the filtered set of linguistic

variables, are presented as follows.

With respect to comprehensibility, titles of authentic and fictitious reviews significantly

differed in terms of two structural features: number of words, and fraction of long words.

Authentic reviews used longer titles [t(1798)=4.00] but with fewer long words [t(1719.98)=-

3.04] such as “experience” (z=-2.75) compared with fictitious entries.

Besides, descriptions of authentic and fictitious reviews significantly differed in terms of

three structural features: characters per word, words per sentence, and fraction of long words.

Authentic reviews contained fewer characters per word [t(1785.48)=-2.93], fewer long words

[t(1798)=-5.30] but longer sentences [t(1503.32)=2.80] vis-à-vis fictitious entries. Long words

such as “experience” (z=-6.32) were significantly fewer in the former. The differences in

comprehensibility between authentic and fictitious reviews are summarized in Table 4.

Insert Table 4 here

TABLE 4. Filtered set of linguistic differences based on comprehensibility.

Review part | Sub-dimensions | Variables | Authentic Reviews (Mean ± SD) | Fictitious Reviews (Mean ± SD)
Titles | Structural features | Words*** | 4.88 ± 2.34 | 4.44 ± 2.28
 | | Long words** | 0.07 ± 0.25 | 0.11 ± 0.31
Descriptions | Structural features | Characters per word** | 4.33 ± 0.37 | 4.39 ± 0.34
 | | Words per sentence* | 14.49 ± 7.93 | 13.62 ± 4.93
 | | Long words*** | 0.04 ± 0.03 | 0.05 ± 0.03

Statistical significance level of t-tests: *p<0.05, **p<0.01, ***p<0.001

With respect to specificity, titles of authentic and fictitious reviews significantly differed

in terms of three informativeness variables: nouns, conjunctions and pronouns. Authentic

reviews were richer in nouns [t(1770.16)=5.41] such as “room” (z=8.03), conjunctions

[t(1775.76)=3.72] such as “but” (z=9.34) yet scantier in pronouns [t(1726.88)=-3.19] compared


with fictitious entries. The DBPM analysis could not identify any specific significantly-differing

pronoun. In terms of contextual details, authentic reviews were significantly richer in spatial

words [t(1643.81)=6.48] such as “location” (z=21.03) than fictitious ones.

Besides, descriptions of authentic and fictitious reviews significantly differed in terms of

four informativeness variables: nouns, articles, verbs and pronouns. Authentic reviews were

significantly richer in nouns [t(1705.71)=6.96] such as “airport” (z=2.54) yet scantier in articles

[t(1761.26)=-4.23] such as “a” (z=-20.15), verbs [t(1779.94)=-3.85] such as “has” (z=-3.96), and

pronouns [t(1798)=-7.58] such as “my” (z=-21.83) vis-à-vis fictitious entries. In terms of

contextual details, authentic reviews were richer in spatial words [t(1774.89)=5.96] such as

“near” (z=5.44) than fictitious ones. The differences in specificity between authentic and

fictitious reviews are summarized in Table 5.

Insert Table 5 here

TABLE 5. Filtered set of linguistic differences based on specificity.

Review part | Sub-dimensions | Variables | Authentic Reviews (Mean ± SD) | Fictitious Reviews (Mean ± SD)
Titles | Informativeness | Nouns*** | 43.35 ± 22.52 | 37.21 ± 25.54
 | | Conjunctions*** | 4.93 ± 7.96 | 3.60 ± 7.09
 | | Pronouns** | 1.28 ± 5.05 | 2.13 ± 6.21
 | Contextual details | Spatial words*** | 8.24 ± 12.75 | 4.85 ± 9.29
Descriptions | Informativeness | Nouns*** | 28.05 ± 7.03 | 25.98 ± 5.55
 | | Articles*** | 8.28 ± 3.87 | 9.00 ± 3.35
 | | Verbs*** | 11.90 ± 3.92 | 12.59 ± 3.54
 | | Pronouns*** | 8.20 ± 4.62 | 9.83 ± 4.47
 | Contextual details | Spatial words*** | 10.27 ± 4.05 | 9.18 ± 3.62

Statistical significance level of t-tests: *p<0.05, **p<0.01, ***p<0.001

With respect to exaggeration, titles of authentic and fictitious reviews significantly

differed in affectiveness. Authentic reviews were richer in positive emotion words

[t(1798)=4.46] such as “lovely” (z=2.49) yet scantier in negative emotion words [t(1665.32)=-

4.29] such as “bad” (z=-2.25) vis-à-vis fictitious entries. Additionally, authentic reviews were

significantly less emotive [t(1720.91)=-3.86] with fewer adjectives such as “bad” (z=-2.25)


compared with fictitious entries. In terms of tenses, authentic reviews contained significantly

fewer future tense words vis-à-vis fictitious entries [t(1101.33)=-3.48]. In terms of emphases,

authentic reviews used significantly fewer firm words [t(1458.54)=-4.62] and hotel names

[t(1628.06)=-2.63] compared with fictitious entries. However, the DBPM analysis could not

identify any specific significantly-differing future tense word, firm word or hotel name. In terms

of syntactic features, authentic reviews contained fewer punctuation marks in general [t(1725.45)=-

6.62]—question marks [t(911.03)=-2.70] and exclamation marks [t(946.21)=-10.78] in particular

—as well as fewer function words [t(1784.34)=-2.00] such as “not” (z=-2.14) vis-à-vis fictitious

entries.

Besides, descriptions of authentic and fictitious reviews significantly differed in

affectiveness. Authentic reviews contained fewer negative emotion words [t(1728.38)=-3.90]

than fictitious entries did. The former was significantly less emotive [t(1798)=-3.42] with fewer

adverbs such as “really” (z=-1.97). In terms of tenses, authentic reviews were scantier in past

tense words [t(1798)=-7.00] such as “was” (z=-20.91) yet richer in future tense words

[t(1756.48)=3.20] such as “will” (z=6.82) vis-à-vis fictitious entries. Based on emphases,

authentic and fictitious reviews significantly differed in using firm words, upper case characters,

and hotel names. Authentic reviews were significantly scantier in firm words [t(1775.66)=-5.79],

upper case characters [t(1443.88)=-2.14], and hotel names [t(1687.65)=-4.59] than fictitious

entries. However, the DBPM analysis could not identify any specific instance of such

significantly-differing word. In terms of syntactic features, authentic reviews contained fewer

exclamation marks [t(1630.29)=-4.27], and function words [t(1750.84)=-6.78] such as “as” (z=-

7.16) vis-à-vis fictitious entries. The differences in exaggeration between authentic and fictitious

reviews are summarized in Table 6.


Insert Table 6 here

TABLE 6. Filtered set of linguistic differences based on exaggeration.

Review part | Sub-dimensions | Variables | Authentic Reviews (Mean ± SD) | Fictitious Reviews (Mean ± SD)
Titles | Affectiveness | Positive emotion words*** | 12.28 ± 14.42 | 9.36 ± 13.31
 | | Negative emotion words*** | 1.72 ± 7.08 | 3.42 ± 9.47
 | | Emotiveness*** | 4.28 ± 16.79 | 7.72 ± 20.81
 | Tenses | Future tense* | 0.05 ± 0.86 | 0.37 ± 2.55
 | Emphases | Firm words*** | 0.47 ± 3.16 | 1.42 ± 5.34
 | | Hotel names* | 0.02 ± 0.15 | 0.04 ± 0.21
 | Syntactic features | Exclamation marks*** | 0.89 ± 4.72 | 11.47 ± 29.09
 | | Question marks* | 0.01 ± 0.28 | 0.30 ± 3.25
 | | All punctuations*** | 18.93 ± 27.84 | 28.69 ± 34.30
 | | Function words* | 19.83 ± 17.84 | 21.60 ± 19.47
Descriptions | Affectiveness | Negative emotion words*** | 0.96 ± 1.44 | 1.26 ± 1.77
 | | Emotiveness* | 0.40 ± 0.16 | 0.44 ± 0.15
 | Tenses | Past tense*** | 3.90 ± 3.65 | 5.13 ± 3.78
 | | Future tense* | 0.72 ± 1.03 | 0.58 ± 0.89
 | Emphases | Firm words*** | 0.99 ± 1.38 | 1.39 ± 1.55
 | | Upper case characters* | 0.01 ± 0.06 | 0.03 ± 0.03
 | | Hotel names*** | 0.14 ± 0.40 | 0.24 ± 0.52
 | Syntactic features | Exclamation marks*** | 0.34 ± 1.21 | 0.65 ± 1.70
 | | Function words*** | 52.78 ± 7.21 | 54.91 ± 6.11

Statistical significance level of t-tests: *p<0.05, **p<0.01, ***p<0.001

With respect to negligence, titles of authentic and fictitious reviews significantly differed

in using self-references in the form of first person singular words. Authentic reviews contained

fewer first person singular words [t(1506.29)=-2.64] compared with fictitious entries. In terms of

uncertainty words, authentic reviews contained significantly fewer modal verbs [t(1487.09)=-

4.03] and filler words [t(1047.56)=-2.18] than fictitious entries did. However, the DBPM

analysis could not identify any specific significantly-differing first person singular word, modal

verb or filler word.

Besides, descriptions of authentic and fictitious reviews significantly differed in using

self-references in the form of first person singular words. In particular, authentic reviews

contained fewer first person singular words [t(1751.55)=-9.07] such as “I” (z=-32.91) vis-à-vis

fictitious entries. In terms of cognitive words, the former was richer in exclusion words


[t(1798)=4.59] such as “but” (z=15.06). The differences in negligence between authentic and

fictitious reviews are summarized in Table 7.

Insert Table 7 here

TABLE 7. Filtered set of linguistic differences based on negligence.

Review part | Sub-dimensions | Variables | Authentic Reviews (Mean ± SD) | Fictitious Reviews (Mean ± SD)
Titles | Self-references | First person singular words* | 0.24 ± 1.74 | 0.54 ± 2.82
 | Uncertainty words | Modal verbs*** | 0.36 ± 2.66 | 1.05 ± 4.37
 | | Filler words* | 0.02 ± 0.45 | 0.14 ± 1.57
Descriptions | Self-references | First person singular words*** | 1.87 ± 2.42 | 3.00 ± 2.85
 | Cognitive words | Exclusion words*** | 3.31 ± 2.49 | 2.79 ± 2.34

Statistical significance level of t-tests: *p<0.05, ***p<0.001

Thus, authentic and fictitious reviews seemed to exhibit disparate traits across the

different sub-dimensions of the four identified linguistic cues. In other words, there seems to be

no straightforward answer to the question of whether authentic reviews are more

comprehensible, specific, exaggerated and negligent than fictitious entries. Nonetheless, the

results demonstrate that exaggeration had the highest number of variables (19) that helped

distinguish between authentic and fictitious reviews, followed by specificity (9). However,

comprehensibility and negligence had fewer such variables (5 each). This indicates that

exaggeration offered maximal scope to identify fictitious reviews, followed by specificity. On

the other hand, comprehensibility and negligence offered relatively less opportunity to

distinguish between authentic and fictitious reviews. Hence, in order to discern review

authenticity, the cues in the proposed framework could be leveraged in the following order—

exaggeration, specificity, followed by comprehensibility or negligence. Maintaining this order,

the linguistic differences between authentic and fictitious reviews that were prominent across

both titles and descriptions are presented in Table 8.

Insert Table 8 here


TABLE 8. Linguistic differences prominent across both titles and descriptions of reviews.

Linguistic Cues | Differences
Exaggeration | Fictitious reviews were more likely to be emotive, containing more negative emotion words, firm words, hotel names, exclamation marks and function words vis-à-vis authentic entries.
Specificity | Authentic reviews were more likely to contain nouns and spatial words than fictitious entries. On the other hand, fictitious reviews were more likely to contain pronouns vis-à-vis authentic entries.
Comprehensibility | Fictitious reviews were more likely to contain long words vis-à-vis authentic entries.
Negligence | Fictitious reviews were more likely to contain first person singular words vis-à-vis authentic entries.

User Study

Informed by the results of the Linguistic Study, the User Study develops a guideline to

discern review authenticity. After pre-tests, the guideline was used as an intervention in a

between-participants experimental setup. The efficacy of the intervention was examined using

240 annotators (120 with intervention + 120 without intervention), each of whom annotated 54

reviews (27 authentic + 27 fictitious). The difference between the two groups in discerning

review authenticity was statistically analyzed.

Guideline Development

The Linguistic Study found that authentic and fictitious reviews could be distinguished

by leveraging their linguistic cues in the order presented in Table 8. Therefore, the User Study

develops a guideline that resembles a decision-tree with three decision-points (Figure 2). A

decision-tree was chosen over a linear list of cues because the former was unanimously found to

be cognitively more manageable by 10 participants, who were recruited for a pilot study. Their


feedback suggested that a decision-tree was more efficacious for discerning review authenticity

than a linear list of cues.

At the first decision-point of the decision-tree, the guideline required users to rely on

exaggeration to identify fictitious reviews. If exaggeration cues were unavailable, it required

users to check reviews’ specificity at the second decision-point to spot fictitious entries. If

specificity cues were unavailable, the guideline required users to examine reviews’

comprehensibility or negligence to find fictitious reviews. Authentic reviews were left to be

labelled by elimination. Put differently, the guideline prioritized accurate identification of

fictitious reviews over that of authentic entries. This was necessary to minimize the chances of

labelling fictitious reviews as authentic. After all, consequences are direr when users regard a fictitious review as authentic than when they regard an authentic review as fictitious (Chen & Lin, 2013).

Insert Figure 2 here

FIG. 2. Guideline to help users distinguish between authentic and fictitious reviews.


Each decision-point was presented as an instruction. The instruction for the first decision-

point required users to check if a review was rich in emotions, especially negative emotion words

such as “bad,” firm words such as “never,” hotel names, function words such as “are,” or

exclamation marks. If yes, it should be annotated as fictitious. Otherwise, users could proceed to

the next decision-point.

The instruction for the second decision-point required users to check if the review failed

to provide details through nouns such as “room” or spatial words such as “location,” and was

vague by describing personal experiences using pronouns such as “you.” If yes, it should be

annotated as fictitious. Otherwise, users could proceed to the next decision-point.

The instruction for the third decision-point required users to check if the review used long

words such as “claustrophobic,” or if it was rich in first person singular words such as “me.” If

yes, it should be annotated as fictitious. Otherwise, it could be labelled as authentic.
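
Although the guideline was designed for human readers, its three decision-points follow a simple cascading logic. The sketch below renders that logic in Java purely for clarity; the boolean inputs stand in for the judgments an annotator makes while reading a review and are not automated detectors from the paper.

```java
// Decision-tree logic of the guideline, expressed as cascading checks.
// Each boolean stands in for a human judgment about the review being read.
public class AuthenticityGuideline {

    enum Label { AUTHENTIC, FICTITIOUS }

    // Decision-point 1: exaggeration (emotive/negative emotion words, firm words,
    //                   hotel names, function words, exclamation marks).
    // Decision-point 2: lack of specificity (few nouns/spatial words, vague pronouns).
    // Decision-point 3: long words or many first person singular words.
    static Label apply(boolean looksExaggerated, boolean lacksSpecificity, boolean longOrSelfFocusedWording) {
        if (looksExaggerated) return Label.FICTITIOUS;
        if (lacksSpecificity) return Label.FICTITIOUS;
        if (longOrSelfFocusedWording) return Label.FICTITIOUS;
        return Label.AUTHENTIC;   // authentic by elimination
    }

    public static void main(String[] args) {
        System.out.println(apply(false, false, true));   // FICTITIOUS at the third decision-point
        System.out.println(apply(false, false, false));  // AUTHENTIC
    }
}
```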

Pre-tests

Before the guideline was used as the intervention, its instructions were pre-tested and

refined multiple times using separate batches of ten participants, whose profile was similar to

those recruited to write fictitious reviews in the Linguistic Study. In one-to-one meetings with

one of the authors, the participants were required to think aloud while going through the

instructions. They were asked to comment on its ease of understanding.

For the first round of pre-test, the instructions for the decision-points were corroborated

with several excerpts of authentic and fictitious reviews. The participants however unanimously

complained about information overload. Based on their feedback, excerpts were completely


removed. To make a trade-off between managing participants’ cognitive load and ensuring the

efficacy of the guideline, instructions for the decision points were revised to highlight only

selected word samples.

For the second round of pre-test without excerpts, the comments of the participants were

more favorable. However, two participants complained about confusions arising from

inconsistencies in the instructions. They pointed out that while some instructions were of the form,

“Check if the review is rich in…If yes, annotate it as fake.” other instructions stated, “Check if

the review lacks…If yes, annotate it as fake.” Therefore, the instructions were fine-tuned to

maintain a consistent tone with sentences of the form, “Check if the review is rich in…If yes,

annotate it as fake.” In this way, the instructions were now more understandable by consistently

asking participants to look for confirming evidence rather than a mixture of confirming and disconfirming evidence.2

For the third round of pre-test, all the participants were able to follow the fine-tuned

instructions without any ambiguity. The guideline with these instructions was finalized as the

intervention (see Appendix).

Reviews for the Experimental Setup

A set of 54 reviews (27 authentic + 27 fictitious) was identified for use in the

experimental setup. Selecting these reviews involved three steps. First, the total of 1,800 reviews

(900 authentic + 900 fictitious) collected for the Linguistic Study was filtered to identify only the

accurately classified authentic reviews, and the accurately classified fictitious reviews. This

ensured that the selection of reviews was informed by the results of the Linguistic Study. In

particular, 677 of the 900 authentic reviews, and 714 of the 900 fictitious reviews in the dataset


were accurately identified. Put differently, these 677 authentic reviews, and 714 fictitious entries

were largely consistent with the overall findings pertaining to the four linguistic cues—

comprehensibility, specificity, exaggeration and negligence. Hence, these 1,391 reviews (677

authentic + 714 fictitious) formed the initial pool from which reviews for the intervention were

selected.

Second, from the initial pool, reviews with specific location references (e.g., names of

streets), brand references (e.g., names of hotels and restaurants), or cultural references (e.g.,

“China travellers”) were manually identified and eliminated.3 Such entries might introduce

biases when read by annotators in the experimental setup. This step yielded a filtered pool of 985

reviews (518 authentic + 467 fictitious).

Third, from the filtered pool of reviews, stratified random sampling was done to identify

the final set of 54 reviews (27 authentic + 27 fictitious). Specifically, the sets of 518 authentic

reviews, and 467 fictitious reviews were stratified across the nine combinations crossing hotel

categories—luxury, budget and mid-range—with review sentiments—positive, negative and

mixed. This resulted in 18 strata (9 for authentic + 9 for fictitious). A set of three reviews was randomly admitted from each stratum, yielding 54 entries altogether (18 strata x 3 reviews).
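
The third step can be pictured as follows. The sketch assumes each review record carries its authenticity label, hotel category and sentiment; the record type, field names and grouping key are illustrative rather than taken from the study’s materials.

```java
// Sketch of stratified random sampling: 18 strata (2 authenticity labels x 3 hotel
// categories x 3 sentiments), with 3 reviews drawn at random from each stratum.
// The Review record and its fields are illustrative only.
import java.util.*;
import java.util.stream.Collectors;

public class StratifiedSampling {

    record Review(String text, boolean authentic, String hotelCategory, String sentiment) {}

    static List<Review> sample(List<Review> pool, int perStratum, Random random) {
        Map<String, List<Review>> strata = pool.stream()
                .collect(Collectors.groupingBy(
                        r -> r.authentic() + "|" + r.hotelCategory() + "|" + r.sentiment()));
        List<Review> selected = new ArrayList<>();
        for (List<Review> stratum : strata.values()) {
            List<Review> copy = new ArrayList<>(stratum);
            Collections.shuffle(copy, random);
            selected.addAll(copy.subList(0, Math.min(perStratum, copy.size())));
        }
        return selected;   // 18 strata x 3 reviews = 54 entries when every stratum is large enough
    }

    public static void main(String[] args) {
        List<Review> pool = List.of(
                new Review("Nice hotel ...", true, "budget", "positive"),
                new Review("From start to finish ...", false, "luxury", "positive"));
        System.out.println(sample(pool, 3, new Random(42)).size());
    }
}
```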

Procedure

A total of 240 annotators, who had neither written fictitious reviews for the Linguistic

Study nor participated in the intervention pre-tests for the User Study, were recruited. Their

profile was similar to those recruited to write fictitious reviews in the Linguistic Study. The

annotators were randomly assigned to one of the two between-participants conditions: without


intervention (henceforth, control group), or with intervention (henceforth, experimental group).

They had to annotate each of the 54 selected reviews as either authentic or fictitious.

Efforts were made to have several annotators labelling a manageable volume of reviews.

Related prior studies often required each annotator to label in excess of 100 reviews (Lau et al., 2011; Li, Huang, Yang, & Zhu, 2011). In contrast, this study required each annotator to label 54 reviews. This enhances the robustness of the results by lowering annotators’ cognitive load, thereby minimizing the chances of fatigue-induced errors.

Annotators in the control group were asked to heuristically determine if reviews were

authentic or fictitious. Those in the experimental group were asked to follow the instructions in

the intervention to discern review authenticity. The annotators were unaware that there were

equal numbers of authentic and fictitious reviews. Thus, they could not reverse-engineer to

complete the task. All annotators received $5 as a token of appreciation.

Analysis and Results

For all annotators, the accuracy in discerning the authenticity of the 54 reviews was

calculated. The difference between the experimental group and the control group was analyzed

using a t-test. The accuracy percentage of the former (68.94 ± 7.23) was significantly higher than

that of the latter (54.32 ± 7.98) [t(238)=-14.86, p<0.001].
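The group comparison reported above follows a standard independent-samples design, and the sketch below shows how such a comparison could be computed. The arrays control_acc and experimental_acc are hypothetical per-annotator accuracy percentages, not the study data, and the code is not the authors’ analysis script. The same routine applies to the per-class comparisons and the truth-bias check reported below.

import numpy as np
from scipy import stats

def compare_groups(control_acc, experimental_acc):
    """Independent-samples t-test on per-annotator accuracy percentages."""
    control_acc = np.asarray(control_acc, dtype=float)
    experimental_acc = np.asarray(experimental_acc, dtype=float)
    t_stat, p_value = stats.ttest_ind(control_acc, experimental_acc)
    return {
        'control': (control_acc.mean(), control_acc.std(ddof=1)),
        'experimental': (experimental_acc.mean(), experimental_acc.std(ddof=1)),
        't': t_stat,
        'p': p_value,
    }

# Example with placeholder values for 120 annotators per condition
# rng = np.random.default_rng(0)
# print(compare_groups(rng.normal(54, 8, 120), rng.normal(69, 7, 120)))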

To delve deeper, the fractions of accurately identified authentic reviews and accurately identified fictitious entries were also calculated. In identifying the 27 authentic reviews, the accuracy percentage of the control group (71.08 ± 17.28) exceeded that of the experimental group (67.04 ± 17.60), albeit non-significantly [t(238)=1.80, p=0.07]. However, in identifying the


27 fictitious reviews, the experimental group (70.83 ± 14.38) significantly outperformed the

control group (37.56 ± 14.50) [t(238)=-17.85, p<0.001].

The results demonstrate that the experimental group was significantly better than the control group at accurately identifying fictitious reviews. Interestingly, however, the former showed marginally and non-significantly lower accuracy in identifying authentic reviews. Perhaps, unlike the annotators in the experimental group, those in the control group were affected by truth bias (Vrij & Baxter, 1999)—the default tendency to consider reviews authentic. If they labelled most of the 54 reviews as authentic, they would conceivably perform well in accurately identifying authentic reviews.

To verify whether truth bias is a valid explanation, the volumes of reviews annotated as authentic by the two groups were examined. Of the 54 reviews, the control group (36.05 ± 7.46) labelled significantly more reviews as authentic compared with the experimental group (25.98 ± 7.75) [t(238)=10.26, p<0.001]. The control group apparently outperformed the experimental group in identifying authentic reviews due to their inherent truth bias. This in turn suggests that the intervention not only improved human ability to identify fictitious reviews but also made annotators relatively immune to truth bias.

Discussion

Two major findings emerge from this paper. First, the proposed framework performed reasonably well in distinguishing between authentic and fictitious reviews. Based on comprehensibility, fictitious reviews contained longer words vis-à-vis authentic entries. Long words might have been used in the former to make the entries grandiloquent (Yoo & Gretzel, 2009). Based on specificity, authentic reviews were rich in nouns and spatial words yet scanty in


terms of pronouns. Consistent with prior research (Johnson & Raye, 1981; McCornack, 1992;

Ott et al., 2011), authentic reviews appeared more specific than fictitious entries. Fictitious

reviews were more exaggerated than authentic ones—a finding consistent with prior studies

(DePaulo et al., 2003; Yoo & Gretzel, 2009). Even though spammers are growing smarter

(Abulaish & Bhat, 2015), they are not adept enough to blur the lines between authentic and

fictitious reviews based on exaggeration. Based on negligence, fictitious reviews were richer in

first person singular words than authentic entries. A possible explanation is that writing fictitious

reviews is cognitively challenging (Newman et al., 2003). When individuals perform a

challenging task, they tend to draw attention toward themselves by using first person singular

words (Rude, Gortner, & Pennebaker, 2004).

Although the framework yielded promising results, several findings contradicted its

underpinning theories. For example, contrary to the reality monitoring theory (Johnson & Raye,

1981), authentic and fictitious reviews were indistinguishable based on perceptual details and

temporal words. Again, contrary to the leakage theory (Ekman & Friesen, 1969), neither were

authentic reviews rich in self-references nor were fictitious entries rich in uncertainty or

cognitive words. A possible reason for the counter-intuitive findings is that these theories were

developed for spontaneous communication. However, fictitious reviews are never written

spontaneously. Rather, spammers could spend substantial time and effort to articulate fictitious

reviews to pass them off as authentic. As spammers strive to blur the lines between authentic and

fictitious reviews, they appear to play a cat-and-mouse game with scholars who strive to develop

approaches to discern review authenticity.

Second, the linguistic cue-based intervention improved human ability to identify

fictitious reviews. Compared with prior studies such as Wiley et al. (2009) that used a one-hour


long instructional unit as the intervention, the one used in this paper was much shorter (see Appendix). Moreover, its instructions could not incorporate all the linguistic differences that were consistently detected between authentic and fictitious reviews across titles as well as descriptions (see footnotes 2 and 3). Even then, it proved sufficiently effective to improve human ability to identify fictitious reviews. This encouraging finding lends support to the growing body of literature suggesting that interventions on the critical evaluation of information improve humans’ information-processing strategies (Argelagós & Pifarré, 2012; Kammerer et al., 2015).

Even though the intervention improved human ability to identify fictitious reviews, it could not improve the ability to identify authentic reviews. Such a finding was not too unexpected. This is because, as indicated earlier, the intervention was designed by prioritizing the accurate identification of fictitious reviews over that of authentic entries. After all, when users read reviews prior to making purchase decisions, regarding a fictitious review as authentic is direr than considering an authentic entry as fictitious (Chen & Lin, 2013). Given this design of the intervention, annotators in the experimental group labelled more reviews as fictitious and fewer as authentic compared with individuals in the control group. Put differently, the annotators in the experimental group were somewhat resistant to truth bias, which is one of the biggest impediments for humans in discerning the authenticity of information (Vrij & Baxter, 1999). Thus, interventions to critically evaluate information not only improve human ability to identify bogus entries but also make individuals more cautious and skeptical in their information-processing strategies. Given that such interventions are even known to bolster humans’ epistemic beliefs (Kammerer et al., 2015), it is high time to use similar training materials to develop individuals’ information literacy skills (Gross & Latham, 2012).


Conclusions

This paper used linguistic analysis to help users discern review authenticity. Two related

studies were conducted. In the Linguistic Study, authentic and fictitious reviews were

linguistically analyzed based on comprehensibility, specificity, exaggeration and negligence. A

filtered set of variables that helped discern review authenticity was identified. These variables

were used to develop a guideline in the User Study, which aimed to inform humans how to

distinguish between authentic and fictitious reviews. The guideline improved humans’ ability to

identify fictitious reviews.

This paper makes a three-fold contribution. First, it represents one of the earliest efforts to

bridge the chasm between two disparate research strands—one that distinguishes between

authentic and fictitious reviews ignoring users’ perceptions, and the other that examines users’

perceptions ignoring if users could discern review authenticity. Studies related to the first strand

are generally conducted by computer science scholars (e.g., Jindal & Liu, 2008) using

classification algorithms while those related to the second are mostly conducted by management

scholars (e.g., Tsang & Prendergast, 2009) through user studies. Given the dominant paradigms

in the two disciplines, a symbiosis of the methods had seldom been attempted. This paper

addresses the piecemeal scholarship by feeding the results of the linguistic analysis—obtained

using classification and statistical analyses—as inputs to develop the intervention that informs

human perceptions in a user study.

Second, this paper furthers the understanding of the role of language in online

deception as well as its detection. The paper demonstrates that the expected differences and the

observed differences between authentic and fictitious reviews are not always in sync. For


example, Burgoon et al. (2016) suggested that fictitious reviews would be richer in uncertainty

words vis-à-vis authentic entries. However, uncertainty words emerged as comparable in the descriptions of both authentic and fictitious reviews. When expected and observed differences diverge in this way, it is conceivably impossible for humans to discern review authenticity. To address the

root of the problem, this paper highlights the need to develop cyber laws so that submission of

fictitious reviews could be prevented. Additionally, it calls for honesty and netiquette among

users in posting user-generated content.

Third, this paper demonstrates the importance of training to help address the well-

recognized information-seeking problem of distinguishing between authentic and fictitious

information. Specifically, this paper suggests that a guideline could not only improve human ability to discern review authenticity but also enhance humans’ immunity against their inherent truth bias. Given that the credibility of online information is a growing concern, easy-to-use guidelines could be designed to sharpen the information-processing strategies of individuals, who could form pattern-

based heuristics to discern authenticity (Watson, 2014). Such guidelines could even be

incorporated as training materials in social media applications as well as websites to encourage

critical thinking among information-seekers.

This paper is constrained by three limitations. First, it examined the ways in which

authentic and fictitious reviews differed from one another in terms of only four linguistic cues—

comprehensibility, specificity, exaggeration and negligence. Taking into account other cues such

as believability, objectivity and timeliness might have resulted in a more holistic investigation

(Chen & Tseng, 2011). Second, this paper defined authentic reviews as those written with post-

purchase experiences, and fictitious reviews as those written based on imagination. Caution

should be exercised in generalizing the findings to fictitious reviews written by professional


spammers. Third, this paper examined users’ ability to discern review authenticity without

shedding light on the underlying mechanism of human decision-making. Individual differences

were also overlooked. In future, scholars specializing in areas such as computational linguistics,

cognitive psychology and management could collaborate to pick up from where we leave off and further expand this research landscape.

Footnotes

1. The comparison of the proposed classification approach with existing baselines, and the

detailed results of feature selection are reported in a conference paper presented by the

authors at the IEEE International Conference on Computing, Communications and

Networking Technologies (ICCCNT) 2015 (Banerjee et al., 2015). Those results are omitted

for brevity.

2. The linguistic differences in terms of nouns and spatial words were not included in the

guideline. Both were more abundant in authentic reviews than fictitious ones. Highlighting

these differences would have given rise to instructions asking annotators to look for

disconfirming evidence as in, “Check if the review lacks nouns and spatial words. If yes,

annotate it as fake.” In any case, reviews rich in nouns and spatial words could still be

identified as authentic by elimination.

3. Since reviews containing hotel names were avoided in the annotation process, the instruction

in the guideline corresponding to the use of hotel names was not included.


Acknowledgment

This work was supported by the Ministry of Education Research Grant AcRF Tier 2 (MOE2014-

T2-2-020).

References

Abulaish, M., & Bhat, S.Y. (2015). Classifier ensembles using structural features for spammer

detection in online social networks. Foundations of Computing and Decision Sciences,

40(2), 89-105.

Afroz, S., Brennan, M., & Greenstadt, R. (2012). Detecting hoaxes, frauds, and deception in

writing style online. Proceedings of the Security and Privacy Symposium (pp. 461-475).

IEEE.

Argelagós, E., & Pifarré, M. (2012). Improving information problem solving skills in secondary

education through embedded instruction. Computers in Human Behavior, 28, 515-526.

Banerjee, S., Chua, A. Y. K., & Kim, J. J. (2015). Distinguishing between authentic and

fictitious user-generated hotel reviews. Proceedings of the International Conference on

Computing, Communication and Networking Technologies (pp. 1-7). IEEE.

Boals, A., & Klein, K. (2005). Word use in emotional narratives about failed romantic

relationships and subsequent mental health. Journal of Language and Social Psychology,

24(3), 252-268.

Bond, G.D., & Lee, A.Y. (2005). Language of lies in prison: Linguistic classification of

prisoners’ truthful and deceptive natural language. Applied Cognitive Psychology, 19(3),

313-329.


Burgoon, J.K., & Qin, T. (2006). The dynamic nature of deceptive verbal communication.

Journal of Language and Social Psychology, 25(1), 76-96.

Burgoon, J., Mayew, W.J., Giboney, J.S., Elkins, A.C., Moffitt, K., Dorn, B.,... & Spitzley, L.

(2016). Which spoken language markers identify deception in high-stakes settings?

Evidence from earnings conference calls. Journal of Language and Social Psychology,

35(2), 123-157.

Cao, Q., Duan, W., & Gan, Q. (2011). Exploring determinants of voting for the “helpfulness” of

online user reviews: A text mining approach. Decision Support Systems, 50(2), 511-521.

Chall, J.S., & Dale, E. (1995). Readability revisited: The new Dale-Chall readability formula.

Cambridge: Brookline Books.

Chen, C.C., & Tseng, Y.D. (2011). Quality evaluation of product reviews using an information

quality framework. Decision Support Systems, 50(4), 755-768.

Chen, L.S., & Lin, J.Y. (2013). A study on review manipulation classification using decision

tree. Proceedings of the International Conference on Service Systems and Service

Management (pp. 680-685). IEEE.

DePaulo, B.M., Lindsay, J.J., Malone, B.E., Muhlenbruck, L., Charlton, K., & Cooper, H.

(2003). Cues to deception. Psychological Bulletin, 129(1), 74-118.

Ekman, P., & Friesen, W.V. (1969). Nonverbal leakage and clues to deception. Psychiatry,

32(1), 88-106.

Feng, S., Xing, L., Gogar, A., & Choi, Y. (2012). Distributional footprints of deceptive product

reviews. Proceedings of the International Conference on Weblogs and Social Media (pp.

98-105). AAAI.


Forman, G. (2003). An extensive empirical study of feature selection metrics for text

classification. Journal of Machine Learning Research, 3, 1289-1305.

Gera, T., & Singh, J. (2015). A parameterized approach to deal with sock puppets. Proceedings

of the International Conference on Computer, Communication, Control and Information

Technology (pp. 1-6). IEEE.

Gerdes Jr., J., Stringam, B.B., & Brookshire, R.G. (2008). An integrative approach to assess

qualitative and quantitative consumer feedback. Electronic Commerce Research, 8(4),

217-234.

Ghose, A., & Ipeirotis, P.G. (2011). Estimating the helpfulness and economic impact of product

reviews: Mining text and reviewer characteristics. IEEE Transactions on Knowledge and

Data Engineering, 23(10), 1498-1512.

Gössling, S., Hall, C.M., & Andersson, A.C. (in press). The manager’s dilemma: A

conceptualization of online review manipulation strategies. Current Issues in Tourism.

doi:10.1080/13683500.2015.1127337

Gross, M., & Latham, D. (2012). What's skill got to do with it?: Information literacy skills and

self‐views of ability among first‐year college students. Journal of the American Society

for Information Science and Technology, 63(3), 574-583.

Gunsch, M.A., Brownlow, S., Haynes, S.E., & Mabe, Z. (2000). Differential linguistic content of

various forms of political advertising. Journal of Broadcasting & Electronic Media, 44(1),

27-42.

Hancock, J.T., Curry, L., Goorha, S., & Woodworth, M. (2005). Automated linguistic analysis of

deceptive and truthful synchronous computer-mediated communication. Proceedings of

the Hawaii International Conference on System Sciences (pp. 1-10). IEEE.


Heydari, A., Tavakoli, M.A., Salim, N., & Heydari, Z. (2015). Detection of review spam: A

survey. Expert Systems with Applications, 42(7), 3634-3642.

Ip, C., Lee, H.A., & Law, R. (2012). Profiling the users of travel websites for planning and

online experience sharing. Journal of Hospitality & Tourism Research, 36(3), 418-426.

Jindal, N., & Liu, B. (2008). Opinion spam and analysis. Proceedings of the International

Conference on Web search and Web Data Mining (pp. 219-230). ACM.

Jindal, N., Liu, B., & Lim, E.P. (2010). Finding unusual review patterns using unexpected rules.

Proceedings of the International Conference on Information and Knowledge Management

(pp. 1549-1552). ACM.

Johnson, M.K., & Raye, C.L. (1981). Reality monitoring. Psychological Review, 88(1), 67-85.

Kammerer, Y., Amann, D.G., & Gerjets, P. (2015). When adults without university education

search the Internet for health information: The roles of Internet-specific epistemic beliefs

and a source evaluation intervention. Computers in Human Behavior, 48, 297-309.

Keshtkar, F., & Inkpen, D. (2012). A hierarchical approach to mood classification in blogs.

Natural Language Engineering, 18(01), 61-81.

Klein, D., & Manning, C.D. (2003). Accurate unlexicalized parsing. Proceedings of the Annual

Meeting on Association for Computational Linguistics (pp. 423-430).

Korfiatis, N., García-Bariocanal, E., & Sánchez-Alonso, S. (2012). Evaluating content quality

and helpfulness of online product reviews: The interplay of review helpfulness vs. review

content. Electronic Commerce Research and Applications, 11(3), 205-217.

Lau, R.Y., Liao, S.Y., Kwok, R.C.W., Xu, K., Xia, Y., & Li, Y. (2011). Text mining and

probabilistic language modeling for online review spam detection. ACM Transactions on

Management Information Systems, 2(4), 25:1-25:30.


Li, F., Huang, M., Yang, Y., & Zhu, X. (2011). Learning to identify review spam. Proceedings of

the International Joint Conference on Artificial Intelligence (pp. 2488-2493). AAAI.

Lim, E.P., Nguyen, V.A., Jindal, N., Liu, B., & Lauw, H.W. (2010). Detecting product review

spammers using rating behaviors. Proceedings of the International Conference on

Information and knowledge management (pp. 939-948). ACM.

Maurer, C., & Schaich, S. (2011). Online customer reviews used as complaint management tool.

In R. Law, M. Fuchs, F. Ricci (Eds.), Information and Communication Technologies in

Tourism (pp. 499-511). Springer.

McCornack, S.A. (1992). Information manipulation theory. Communications Monographs,

59(1), 1-16.

Mehrabian, A. (1967). Attitudes inferred from non-immediacy of verbal communications.

Journal of Verbal Learning and Verbal Behavior, 6(2), 294-295.

Missen, M.M.S., & Boughanem, M. (2009). Using wordnet’s semantic relations for opinion

detection in blogs. In M. Boughanem, C. Berrut, J. Mothe, & C. Soule-Dupuy (Eds.),

Advances in Information Retrieval (pp. 729-733). Springer.

Mukherjee, A., Kumar, A., Liu, B., Wang, J., Hsu, M., Castellanos, M., & Ghosh, R. (2013).

Spotting opinion spammers using behavioral footprints. Proceedings of the International

Conference on Knowledge Discovery and Data Mining (pp. 632-640). ACM.

Munzel, A. (2015). Malicious practice of fake reviews: Experimental insight into the potential of

contextual indicators in assisting consumers to detect deceptive opinion spam. Recherche

et Applications en Marketing (English Edition), 30(4), 24-50.


Newman, M.L., Pennebaker, J.W., Berry, D.S., & Richards, J.M. (2003). Lying words:

Predicting deception from linguistic styles. Personality and Social Psychology Bulletin,

29(5), 665-675.

O’Mahony, M.P., & Smyth, B. (2009). Learning to recommend helpful hotel reviews.

Proceedings of the Conference on Recommender systems (pp. 305-308). ACM.

Ott, M., Choi, Y., Cardie, C., & Hancock, J.T. (2011). Finding deceptive opinion spam by any

stretch of the imagination. Proceedings of the Association for Computational Linguistics:

Human Language Technologies (pp. 309-319).

Park, D., Lee, J., & Han, I. (2007). The effect of on-line consumer reviews on consumer

purchasing intention: The moderating role of involvement. International Journal of

Electronic Commerce, 11(4), 125-148.

Pasupathi, M. (2007). Telling and the remembered self: Linguistic differences in memories for

previously disclosed and previously undisclosed events. Memory, 15(3), 258-270.

Pennebaker, J.W., Booth, R.J., & Francis, M.E. (2007). Linguistic Inquiry and Word Count:

LIWC [Software]. Austin, TX: LIWC.net.

Ravid, D. (2005). Emergence of linguistic complexity in later language development: Evidence

from expository text construction. In D. Ravid & H. Shyldkrot (Eds.), Perspectives on

Language and Language Development (pp. 337-355). Springer.

Rayson, P., Wilson, A., & Leech, G. (2001). Grammatical word class variation within the British

National Corpus sampler. Language and Computers, 36(1), 295-306.

Rubin, V.L., & Lukoianova, T. (2015). Truth and deception at the rhetorical structure level.

Journal of the Association for Information Science and Technology, 66(5), 905-917.


Rude, S., Gortner, E.M., & Pennebaker, J. (2004). Language use of depressed and depression

vulnerable college students. Cognition & Emotion, 18(8), 1121-1133.

Shojaee, S., Murad, M.A.A., Azman, A.B., Sharef, N.M., & Nadali, S. (2013). Detecting

deceptive reviews using lexical and syntactic features. Proceedings of the International

Conference on Intelligent Systems Design and Applications (pp. 53-58). IEEE.

Sidali, K.L., Schulze, H., & Spiller, A. (2009). The impact of online reviews on the choice of

holiday accommodations. In W. Höpken, U. Gretzel, & R. Law (Eds.), Information and

Communication Technologies in Tourism (pp. 87-98). Springer.

Tausczik, Y.R., & Pennebaker, J.W. (2010). The psychological meaning of words: LIWC and

computerized text analysis methods. Journal of Language and Social Psychology, 29(1),

24-54.

Tsang, A.S., & Prendergast, G. (2009). Is a “star” worth a thousand words?: The interplay

between product-review texts and rating valences. European Journal of Marketing,

43(11/12), 1269-1280.

Tsur, O., Davidov, D., & Rappoport, A. (2010). A great catchy name: Semi-supervised

recognition of sarcastic sentences in online product reviews. Proceedings of the

International Conference on Weblogs and Social Media (pp. 162-169). AAAI.

Vartapetiance, A., & Gillam, L. (2012). ‘I don't know where he is not’: Does deception research

yet offer a basis for deception detectives? Proceedings of the Workshop on

Computational Approaches to Deception Detection (pp. 5-14). ACL.

Vrij, A., & Baxter, M. (1999). Accuracy and confidence in detecting truths and lies in

elaborations and denials: Truth bias, lie bias and individual differences. Expert Evidence,

7(1), 25-36.


Watson, C. (2014). An exploratory study of secondary students’ judgments of the relevance and

reliability of information. Journal of the Association for Information Science and

Technology, 65(7), 1385-1408.

Wiley, J., Goldman, S.R., Graesser, A.C., Sanchez, C.A., Ash, I.K., & Hemmerich, J.A. (2009).

Source evaluation, comprehension, and learning in Internet science inquiry tasks.

American Educational Research Journal, 46(4), 1060-1106.

Wu, G., Greene, D., & Cunningham, P. (2010). Merging multiple criteria to identify suspicious

reviews. Proceedings of the ACM Conference on Recommender systems (pp. 241-244).

ACM.

Yoo, K.H., & Gretzel, U. (2009). Comparison of deceptive and truthful travel reviews. In W.

Höpken, U. Gretzel, R. Law (Eds.), Information and Communication Technologies in

Tourism (pp. 37-47). Springer.

Zakaluk, B.L., & Samuels, S.J. (1988). Readability: Its past, present and future. Delaware:

International Reading Association.

Zhou, L., Burgoon, J.K., Twitchell, D.P., Qin, T., & Nunamaker Jr., J.F. (2004). A comparison

of classification methods for predicting deception in computer-mediated communication.

Journal of Management Information Systems, 20(4), 139-165.

Zhou, L., Shi, Y., & Zhang, D. (2008). A statistical language modeling approach to online

deception detection. IEEE Transactions on Knowledge and Data Engineering, 20(8),

1077-1081.


Appendix

For the User Study, the intervention asked the annotators in the experimental group to follow the

steps below, which were pictorially depicted to them in Figure A1.

Step 1: Check if the review is exaggerated with rich use of emotional expressions such as

“awesome” and “awful,” especially negative emotion words such as “bad,” firm words

such as “always,” “never” and “perfect,” function words such as “as” and “are,” and punctuation such as exclamation marks.

- If yes, it is fake. Else, go to Step 2.

Step 2: Check if the review—instead of describing specific hotel characteristics—is vague by

describing personal experiences with rich use of pronouns such as “our,” “we,” and

“you.”

- If yes, it is fake. Else, go to Step 3.

Step 3: Check if the review uses long words such as “claustrophobic” and “disappointing,” or if

it is rich in first person singular words such as “I” and “me.”

- If yes, it is fake. Else, it is authentic.

Insert Figure A1 here


FIG. A1. Pictorial representation of the intervention.
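For readers who wish to operationalize the guideline, the sketch below encodes the three steps as a simple rule-based function. The cue word lists and thresholds are illustrative placeholders rather than the LIWC categories and cut-offs used in the Linguistic Study, and the function is a reading aid, not a validated classifier.

# Illustrative cue lists; placeholders rather than the study's full word categories
EXAGGERATION_CUES = {"awesome", "awful", "bad", "always", "never", "perfect"}
PRONOUN_CUES = {"our", "we", "you"}
FIRST_PERSON_CUES = {"i", "me"}

def annotate(review: str, cue_threshold: int = 2, long_word_len: int = 12) -> str:
    """Label a review as 'fictitious' or 'authentic' by following Steps 1-3."""
    words = [w.strip('.,!?"').lower() for w in review.split()]

    # Step 1: exaggeration (emotional and firm words, exclamation marks)
    if sum(w in EXAGGERATION_CUES for w in words) + review.count("!") >= cue_threshold:
        return "fictitious"

    # Step 2: vagueness signalled by heavy use of pronouns
    if sum(w in PRONOUN_CUES for w in words) >= cue_threshold:
        return "fictitious"

    # Step 3: long words or first person singular words
    if any(len(w) >= long_word_len for w in words) or \
            sum(w in FIRST_PERSON_CUES for w in words) >= cue_threshold:
        return "fictitious"

    return "authentic"

# Example on a hypothetical review text
# print(annotate("We always had an awesome stay, the staff were perfect!"))  # fictitious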
