1
OTO:Online Trust Oracle for User-Centric Trust Establishment
Tiffany Hyun-Jin Kim, Jun Han, Emmanuel Owusu, Jason Hong, Adrian Perrig (Carnegie Mellon University)
Payas Gupta, Debin Gao (Singapore Management University)
19th ACM Conference on Computer and Communications Security (CCS), October 17, 2012
2
WHEN DOWNLOADING SOFTWARE…
Challenge: gauging the authenticity & legitimacy of software
Novice users
  Don’t understand the dangers
  Lack the ability to validate software
Security-conscious users
  Often frustrated by their inability to judge legitimacy
3
EXAMPLE OF SOFTWARE DOWNLOAD
4
TRUST INFO FROM THE INTERNET
Challenging for end users
  Cumbersome information gathering
  Being unaware of existing evidence
  Assessing the quality of evidence
  Handling contradictory evidence

Why not automate trust decisions for users?
  Delays in identifying new & evolving threats
  Malware authors can circumvent the automated system
  Users are still left alone to make trust decisions!
5
PROBLEM DEFINITION
Design a dialog box with robust trust-evidence indicators
  Help novice users make correct trust decisions
  Avoid malware, even if the underlying OS fails to correctly label legitimacy

Desired properties
  Correct: users can still make correct trust decisions given conflicting info
  Usable: indicators are useful to novice users, and indicators do not disturb users
6
ASSUMPTION
Malware cannot interfere with dialog box operations:
  Display of the dialog box
  Detection of software downloads
  Gathering of trust evidence

Adversary model: malware distributors manipulate trust evidence
  Provide falsified info
  Hide crucial info
7
DESIGN RATIONALE
Prevalent security threats
  85% of malware comes from the web: drive-by downloads, fake antivirus, keyloggers
  45% succeed through user actions

Common pitfalls
  Lack of security knowledge
  Visual deception
  Reliance on prior experience
  Bounded attention

Effective design principles
  Grayed-out background
  Mimicking the UI of the OS vendor
  Detailed explanation
  Non-uniform UIs
Suppose your friend is bored at home and wants to watch a movie.
He searches on Google for “batman begins.”
After looking through several options, he decides to watch this video and clicks on the link.
While waiting for the video to load, a dialog box appears.
Would you recommend that your friend continue?
12
AT THE END OF EACH SCENARIO
Questions
  Would you recommend that your friend proceed and download the software? [Yes/No/Not sure]
    [If Yes or No] Why?
    [If Not sure] What would you do to find out the legitimacy of this software?
  What evidence would you present to your friend to convince him/her of the legitimacy of this software?
  How well do you know this software? [1: don’t know at all – 5: know very well]
13
RESULTS OF EXPERTS’ USER STUDY (processing operation, with # of experts who mentioned it)

SOFTWARE REVIEW
  Are reviews available from reputable sources, experts, or friends? (9)
  Are the reviews good? (3)
HOSTING SITE
  Is the hosting site reputable? (8)
  What are the corporate parameters (e.g., # of employees, age of company)? (2)
USER INTENTION
  Did you search for that specific software? (1)
  Are you downloading from a pop-up? (1)
SECURING MACHINE
  Do you run an updated antivirus? (2)
  Is your machine trusted? (1)
14
OTO: ONLINE TRUST ORACLE
User interface displaying the safety of a file being downloaded
Summary & clickable link
15
3 COLOR MODES
Similar to the Windows User Account Control framework:
  Blue: highly likely to be legitimate
  Red: highly likely to be malicious
  Yellow: system cannot determine the legitimacy
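The three color modes above amount to a simple mapping from the system’s verdict to a dialog color. A minimal sketch of that mapping, assuming a three-valued verdict (the function and verdict names are illustrative, not from OTO’s implementation):

```python
# Hypothetical sketch of OTO's three color modes; names are illustrative.
def oto_color(system_verdict):
    """Map the system's legitimacy verdict to a dialog color,
    mirroring the UAC-style scheme described on the slide."""
    if system_verdict == "legitimate":
        return "blue"    # highly likely to be legitimate
    if system_verdict == "malicious":
        return "red"     # highly likely to be malicious
    return "yellow"      # system cannot determine the legitimacy
```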
16
EVALUATION
Experiment with 2 conditions
IE9 SmartScreen Filter (SSF): base condition
  Current state-of-the-art technology [1]
  Widely used in browsers
  Checks software against a known blacklist
    If flagged: red warning banner
    If no reputation: yellow warning banner

[1] M. Hachman. Microsoft’s IE9 Blocks Almost All Social Malware, Study Finds. http://www.pcmag.com/article2/0,2817,2391164,00.asp
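The blacklist-and-reputation check described above can be sketched as a two-way lookup. This is an illustrative sketch only, not SmartScreen’s actual logic; the file names and set contents are hypothetical:

```python
# Hypothetical known-bad and known-good lists (not real SmartScreen data).
BLACKLIST = {"evil_installer.exe"}
REPUTATION = {"putty.exe", "vlc_setup.exe"}

def ssf_banner(filename):
    """Return the warning banner color for a download, per the slide's rule."""
    if filename in BLACKLIST:
        return "red"     # flagged as known malicious
    if filename not in REPUTATION:
        return "yellow"  # no established reputation
    return None          # known good: no banner shown
```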
17
PROCEDURE
Same 10 scenarios as in the experts’ user study
End of each scenario: display the SSF or OTO warning dialog box

[Figure: 2×2 matrix of system detection outcome vs. ground truth.
  TN (legitimate software detected as legitimate): Kaspersky, SPAMfighter, Ahnlab, MindMaple, Adobe Flash
  FP (legitimate software detected as malicious): Rkill
  Malicious scenarios (ActiveX codec, Windows activation, privacy violation, HDD diagnostics) fill the FN and TP cells]
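The confusion-matrix figure classifies each scenario by ground truth vs. system verdict, where “positive” means malicious. A minimal sketch of that bucketing (the slide does not specify which malicious scenarios were detected, so no per-scenario labels are assumed here):

```python
# Sketch of the confusion-matrix bucketing used in the study figure.
# Convention from the slide: "positive" = malicious.
def bucket(ground_truth, detected):
    """Classify one scenario into TP/FP/TN/FN."""
    if ground_truth == "malicious":
        return "TP" if detected == "malicious" else "FN"
    return "FP" if detected == "malicious" else "TN"
```

For example, the Kaspersky scenario (legitimate, detected legitimate) lands in TN, while Rkill (legitimate, detected malicious) lands in FP.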
18
END OF EACH SCENARIO
While waiting for the video to load, a dialog box appears.
Your friend clicks the “Continue” button.
When he clicks “Continue,” your friend’s computer prevents him from proceeding and instead displays this interface.
Please help your friend make a decision.
22
EFFECTIVENESS OF OTO
Demographics
  58 participants: 30 male and 28 female
  Age 18–59
  Between-subjects study: 29 per condition

Compensation
  $15 for participating
  Additional $1 for each correct answer
  $25 max
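The compensation scheme above reduces to a simple capped formula; with 10 scenarios, the $25 cap is reached exactly at 10 correct answers. A sketch:

```python
# Payment rule from the slide: $15 base + $1 per correct answer, $25 max.
def payment(correct_answers):
    return min(15 + 1 * correct_answers, 25)
```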
23
RESULTS
Repeated-measures ANOVA: did participants answer each scenario correctly?
OTO helps people make more correct decisions than SSF does, regardless of gender, age, occupation, education level, or background security knowledge!
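The headline comparison boils down to per-participant correctness rates differing between the two conditions. A data-handling sketch with invented numbers (the paper’s actual analysis was the repeated-measures ANOVA above, and these values are purely illustrative):

```python
# Hypothetical per-participant counts of correct decisions (out of 10
# scenarios); invented for illustration, not the study's data.
ssf_correct = [6, 5, 7, 6, 5]
oto_correct = [9, 8, 9, 10, 8]

def mean(xs):
    """Arithmetic mean of a non-empty list."""
    return sum(xs) / len(xs)

# The slide's claim corresponds to mean(oto_correct) > mean(ssf_correct).
```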
24
TIMING ANALYSIS
N = 13 for SSF, N = 11 for OTO Overall, time(OTO) < time(SSF)
Participants relied on evidence to make trust decisions
25
WHAT IF THE OS MISCATEGORIZES?
OTO >> SSF
5-point Likert-scale questions:
  OTO is as useful as SSF
  OTO is more comfortable to use
[Figure repeated: system detection outcome vs. ground truth matrix]
26
SCOPE OF THIS PAPER
Main objective of this paper: does providing extra pieces of evidence help users?

Outside the scope of this paper:
  How each piece of evidence is gathered
  How each piece of evidence is authenticated
  How malware is prevented from interfering with OTO operations
  Existence of a system-level trusted path for input and output
  Helping people who don’t care about security
27
CONCLUSIONS
OTO: a download dialog box
  Displays robust & scalable trust evidence to users
  Based on interview results from security experts

Goal: do users find additional trust evidence useful?
  People actually read the evidence
  It empowers users to make better trust decisions, even if the underlying OS misdetects
29
BACKUP SLIDES
30
SCENARIOS FOR USER STUDY
31
PRE-STUDY QUESTIONS
32
RETRIEVING EVIDENCE
Robust & scalable evidence
33
DEMOGRAPHICS
34
MEAN & MAX TIME TAKEN (SEC)
N = 13 for SSF, N = 11 for OTO
35
SUMMARY OF ANOVA RESULTS
36
SECURITY ANALYSIS
Malware detection
  Zero-day: lacks enough evidence
  Well-known malware: likely to have more negative than positive evidence

False alarms
  Users examine and compare; the evidence is what users would have gathered from the Internet themselves

Manipulation attack
  Creating fake positive evidence
  OTO’s evidence is robust, e.g., by considering the temporal aspect; an attacker needs to forge multiple pieces of evidence

Hiding harmful evidence
  Challenging to prevent authoritative resources from serving negative evidence

Impersonation of legitimate software
  Can associate each piece of software with a cryptographic hash
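The hash-based defense against impersonation mentioned above can be sketched in a few lines: associate each piece of software with a known cryptographic hash and verify the downloaded bytes against it. The names and payload below are illustrative:

```python
import hashlib

# Sketch of the impersonation defense: verify downloaded bytes against a
# published SHA-256 hash obtained from a trusted source (illustrative names).
def verify_download(data: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Example: in practice the published hash would come from the vendor.
payload = b"installer bytes"
published = hashlib.sha256(payload).hexdigest()
```

Any tampering with the bytes changes the digest, so the check fails for modified downloads.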
37
USEFULNESS OF EVIDENCE
38
RELATED WORK
User mental models
  Responses to SSL warning messages [Sunshine et al., 2009]
  Psychological responses to warnings [Bravo-Lillo et al., 2011]
  Folk models of security threats [Wash, 2010]
  Information content for the Microsoft UAC warning [Motiee, 2011]
Habituation
  Effectiveness of browser warnings [Egelman et al., 2008]
  Polymorphic and audited dialogs [Brustoloni et al., 2007]
Assessing credibility online
  Augmenting search results with credibility visualizations [Schwarz and Morris, 2011]
  Prominence-Interpretation theory [Fogg et al., 2003]
39
RELATED WORK
User mental models
  Responses to SSL warning messages [Sunshine et al., 2009]
    Warnings in general do not prevent users from unsafe behavior
  Psychological responses to warnings [Bravo-Lillo et al., 2011]
    Users have the wrong mental model for computer warnings
    Most users don’t understand SSL warnings without background knowledge
    Warnings should not be the main line of defense
  Folk models of security threats [Wash, 2010]
    Security should focus on both actionable advice and potential threats
  Information content for the Microsoft UAC warning [Motiee, 2011]
    Let users assess risk and correctly respond to warnings
    Information can still be easily spoofed
40
RELATED WORK
Microsoft SmartScreen Filter
  Current state-of-the-art technology, widely used in browsers
  Checks the software against a known blacklist of malicious software
  If flagged, a red-banner warning appears, hiding the options that let users download

Information content for the Microsoft UAC warning [Motiee, 2011]
  Let users assess risk and correctly respond to warnings
  Information can still be easily spoofed

Psychological responses to warnings [Bravo-Lillo et al., 2011]
  Users have the wrong mental model for computer warnings
  Most users don’t understand SSL warnings without background knowledge
  Warnings should not be the main line of defense
41
DESIGN RATIONALE
Prevalent security threats
  85% of malware comes from the web: drive-by downloads, fake antivirus, keyloggers
  45% succeed through user actions

Common pitfalls
  Lack of security knowledge
  Visual deception
  Psychological pressure
  Reliance on prior experience
  Bounded attention

Effective design principles
  Grayed-out background
  Mimicking the UI of the OS vendor
  Detailed explanation
  Non-uniform UIs