
MPRI 2.3.2 - Foundations of privacy

Lecture 1

Kostas Chatzikokolakis

Sep 19, 2016

Plan of the course

Quantitative Information Flow

◮ Motivation, application examples

◮ Secrets and vulnerability

◮ Channels and leakage

◮ Multiplicative Bayes-Capacity

◮ Comparing systems, the lattice of information

◮ Applications and exercises

Protection of sensitive information

• Protecting the confidentiality of sensitive information is a fundamental issue in computer security

• Access control and encryption are not sufficient! Systems could leak secret information through correlated observables.

• The notion of “observable” depends on the adversary

• Often, secret-leaking observables are public, and therefore available to the adversary

Example record: Blood type: AB, Birth date: 9/5/46, HIV: positive

Leakage through correlated observables

Password checking

Election tabulation

Timings of decryptions


Quantitative Information Flow

Information Flow: Leakage of secret information via correlated observables

Ideally: No leak

• No interference [Goguen & Meseguer’82]

In practice: There is almost always some leak

• Intrinsic to the system (public observables, part of the design)

• Side channels

need quantitative ways to measure the leak

Password checker 1

Password: K1K2 . . .KN

Input by the user: x1x2 . . . xN

Output: out (Fail or OK)

Intrinsic leakage

By learning the result of the check the adversary learns something about the secret


Password checker 2

Password: K1K2 . . .KN

Input by the user: x1x2 . . . xN

Output: out (Fail or OK)

More efficient, but what about security?


Password checker 2

Password: K1K2 . . .KN

Input by the user: x1x2 . . . xN

Output: out (Fail or OK)

Side channel attack

If the adversary can measure the execution time, then he can also learn the longest correct prefix of the password
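To make the timing difference concrete, here is a small Python sketch (not part of the original slides): both checkers are instrumented to count character comparisons, and the count stands in for the execution time visible through the side channel.

```python
def check_all(key, attempt):
    """Checker 1: always compares every character (no early exit)."""
    comparisons, ok = 0, True
    for k, x in zip(key, attempt):
        comparisons += 1
        ok = ok and (k == x)
    return ("OK" if ok else "Fail"), comparisons

def check_early_exit(key, attempt):
    """Checker 2: stops at the first mismatch (more efficient, but leaks through time)."""
    comparisons = 0
    for k, x in zip(key, attempt):
        comparisons += 1
        if k != x:
            # The comparison count reveals the longest correct prefix of the attempt.
            return "Fail", comparisons
    return "OK", comparisons

# With key 110 and attempt 100: checker 1 always does 3 comparisons,
# checker 2 stops after 2, telling the adversary that the first character matched.
print(check_all("110", "100"))         # ('Fail', 3)
print(check_early_exit("110", "100"))  # ('Fail', 2)
```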



Example 2: Example of Anonymity Protocol: DC Nets [Chaum'88]

• A set of nodes with some communication channels (edges).

• One of the nodes (the source) wants to broadcast one bit b of information

• The source (broadcaster) must remain anonymous

Chaum's solution

• Associate to each edge a fair binary coin

• Toss the coins

• Each node computes the binary sum of its incident edges. The source adds b. They all broadcast their results

• Achievement of the goal: compute the total binary sum: it coincides with b

(Figure: a ring of nodes with b = 1, showing the coin values on the edges and the declared bits.)

Anonymity of DC Nets

Observables: an (external) attacker can only see the declarations of the nodes

Question: Does the protocol protect the anonymity of the source?

• If the graph is connected and the coins are fair, then for an external observer, the protocol satisfies strong anonymity:

the a posteriori probability that a certain node is the source is equal to its a priori probability

• A priori / a posteriori = before / after observing the declarations

Strong anonymity (Chaum)

(Figure: an example run with b = 1 and the resulting declarations.)


The Crowds protocol

◮ DC is not practical for a large number of users

◮ In practice we might want to trade anonymity for efficiency

◮ Crowds offers a weaker notion of anonymity called probable innocence

◮ Designed for anonymous web surfing

The Crowds protocol

The initiator:
◮ Forwards the message (to a randomly chosen user)

A forwarder:
◮ With probability pf, forwards (to a randomly chosen user)
◮ With probability 1 − pf, delivers (to the server)

◮ The path is used in the opposite direction for the reply

◮ The same path is used in future requests

The Crowds protocol: anonymity

◮ We consider sender anonymity

◮ Attacker model:
  ◮ Cannot see the whole network
  ◮ Only sees messages sent to him

◮ The server:
  ◮ only sees the last user
  ◮ Strong anonymity is satisfied

The Crowds protocol: anonymity

Corrupted users:

◮ They can see forwarding requests and "detect" a user i

◮ User i can still claim that he was forwarding the message for user j

◮ Is strong anonymity satisfied?

◮ Compare the probability to detect i:
  ◮ when i is the initiator
  ◮ when j is the initiator

◮ They are different: strong anonymity is violated


Location-Based Systems

A location-based system is a system that uses geographical information in order to provide a service.

‣ Retrieval of Points of Interest (POIs).

‣ Mapping Applications.

‣ Deals and discounts applications.

‣ Location-Aware Social Networks.

Location-Based Systems

‣ Location information is sensitive (it can be linked to home, work, religion, political views, etc).

‣ Ideally: we want to hide our true location.

‣ Reality: we need to disclose some information.

Example

‣ Find restaurants within 300 meters.

‣ Hide location, not identity.

‣ Provide approximate location.

Obfuscation

(Figure: the user's area of interest, the reported position chosen by the obfuscation mechanism, and the resulting area of retrieval around the reported position.)


Issues to study

How can we generate the noise?

What kind of formal privacy guarantees do we get?

Which mechanism gives optimal utility?

What if we use the service repeatedly?

Timing Attacks in Cryptosystems

Remote timing attack [BonehBrumley03]

1024-bit RSA key recovered in 2 hours from standard

OpenSSL implementation across LAN

Response time depends on the key!

Timing Attacks in Cryptosystems

What counter-measures can we use?

Make the decryption time constant

Too slow!

Force the set of possible decryption times to be small

Is it enough?

Must be combined with blinding

Careful analysis of the privacy guarantees is required

MPRI 2.3.2 - Foundations of privacy

Lecture 2

Kostas Chatzikokolakis

Sep 26, 2016


Plan of the course

Quantitative Information Flow

◮ Motivation, application examples

◮ Secrets and vulnerability

◮ Channels and leakage

◮ Multiplicative Bayes-Capacity

◮ Comparing systems, the lattice of information

◮ Applications and exercises

Quantitative Information Flow

Information Flow: Leakage of secret information via correlated observables

Ideally: No leak

• No interference [Goguen & Meseguer’82]

In practice: There is almost always some leak

• Intrinsic to the system (public observables, part of the design)

• Side channels

need quantitative ways to measure the leak

Location-Based Systems

A location-based system is a system that uses geographical information in order to provide a service.

‣ Retrieval of Points of Interest (POIs).

‣ Mapping Applications.

‣ Deals and discounts applications.

‣ Location-Aware Social Networks.

Location-Based Systems

‣ Location information is sensitive (it can be linked to home, work, religion, political views, etc).

‣ Ideally: we want to hide our true location.

‣ Reality: we need to disclose some information.


Example

‣ Find restaurants within 300 meters.

‣ Hide location, not identity.

‣ Provide approximate location.

Obfuscation

(Figure: the user's area of interest, the reported position chosen by the obfuscation mechanism, and the resulting area of retrieval around the reported position.)

Issues to study

How can we generate the noise?

What kind of formal privacy guarantees do we get?

Which mechanism gives optimal utility?

What if we use the service repeatedly?

Timing Attacks in Cryptosystems

Remote timing attack [BonehBrumley03]

1024-bit RSA key recovered in 2 hours from standard

OpenSSL implementation across LAN

Response time depends on the key!


Timing Attacks in Cryptosystems

What counter-measures can we use?

Make the decryption time constant

Too slow!

Force the set of possible decryption times to be small

Is it enough?

Must be combined with blinding

Careful analysis of the privacy guarantees is required

What is a secret?

My password: x = ”dme3@21!SDFm12”

What does it mean for this password to be secret?

The adversary should not know it, i.e. x comes from a set X

of possible passwords

Is x ′ = 123 an equally good password? Why?

Passwords are drawn randomly from a probability distribution

What is a secret?

A secret x is something about which the adversary knows

only a probability distribution π

π is called the adversary's prior knowledge
◮ π could be the distribution from which x is generated
◮ or it could model the adversary's knowledge of the population the user comes from

How vulnerable is x?

It’s a property of π, not of x

Vulnerability

How vulnerable is our secret under prior π?

The answer highly depends on the application

Eg: assume uniformly distributed secrets but the adversary

knows the first 4 bytes

(this can be expressed by a prior π)

Is this threat substantial?
◮ No: if the secrets are long passwords
◮ Yes: if the first 4 bytes are a credit card PIN


Vulnerability

To quantify the threat we need an operational scenario

What is the goal of the adversary?

How successful is the adversary in achieving this goal?

Vulnerability: measure of the adversary’s success

Uncertainty: measure of the adversary’s failure

First approach

Assume the adversary can ask for properties of x

i.e. questions of the form “is x ∈ P?”

Goal: completely reveal the secret as quickly as possible

Measure of success: expected number of steps

First approach

eg:

X = {heads, tails}, π = (1, 0)

X = {heads, tails}, π = (1/2, 1/2)

X = {a, b, c, d, e, f, g, h}, π = (1/8, . . . , 1/8)

X = {a, b, c, d, e, f, g, h}, π = (1/4, 1/4, 1/8, 1/8, 1/16, 1/16, 1/16, 1/16)

First approach

Best strategy: at each step split the search space in sets of equal

probability mass


First approach

At step i the total probability mass is 2^{-i}.

If the probability of x is π_x = 2^{-i}, then at step i the search space will only contain x. So it will take i = −log2 π_x steps to reveal x.

So the expected number of steps is:

∑_x −π_x log2 π_x

(if the probabilities in π are not powers of 2 this is a lower bound)

Shannon Entropy

This is the well known formula of Shannon Entropy:

H(π) = ∑_x −π_x log2 π_x

It's a measure of the adversary's uncertainty about x

Minimum value: H(π) = 0 iff π_x = 1 for some x

Maximum value: H(π) = log2 |X| iff π is uniform

Shannon Entropy

The binary case X = {0, 1}, π = (x , 1− x):

Shannon Entropy

Very widely used to measure information flow

Is it always a good uncertainty measure for privacy?

Example: X = {0, . . . , 2^{32}}, π = (1/8, (7/8)·2^{-32}, . . . , (7/8)·2^{-32})

H(π) = 28.543525

But the secret can be guessed with probability 1/8!

Undesired in many practical scenarios
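A small Python sketch (assumed, not from the slides) recomputing this example: Shannon entropy reports roughly 28.5 bits of uncertainty, yet a single guess succeeds with probability 1/8.

```python
import math

def shannon_entropy(weighted_probs):
    """Entropy of a distribution given as (probability, multiplicity) pairs."""
    return -sum(m * p * math.log2(p) for p, m in weighted_probs if p > 0)

# One secret has probability 1/8; the other 2**32 secrets each have (7/8) * 2**-32.
pi = [(1/8, 1), ((7/8) * 2**-32, 2**32)]

print(shannon_entropy(pi))    # ~28.54 bits of Shannon "uncertainty" ...
print(max(p for p, _ in pi))  # 0.125: ... yet one guess succeeds with probability 1/8
```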


Bayes Vulnerability

Adversary’s goal: correctly guess the secret in one try

Measure of success: probability of a correct guess

Optimal strategy: guess the x with the highest πx

Bayes Vulnerability:

V_b(π) = max_x π_x

Bayes Vulnerability

Maximum value: V_b(π) = 1 iff π_x = 1 for some x

Minimum value: V_b(π) = 1/|X| iff π is uniform

Min-entropy: −log2(V_b(π))

Bayes risk: 1 − V_b(π)

Previous example: π = (1/8, (7/8)·2^{-32}, . . . , (7/8)·2^{-32})

V_b(π) = 1/8

Bayes Vulnerability

The binary case X = {0, 1}:

Guessing Entropy

Adversary's goal: correctly guess the secret in many tries

Measure of success: expected number of tries

Optimal strategy: try secrets in decreasing order of probability

Guessing entropy:

G(π) = ∑_{i=1}^{|X|} i · π_{x_i}

x_i: indexing of X in decreasing probability order
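A hedged Python sketch of the two measures, evaluated on the example distributions used earlier (the uniform 8-element prior and the skewed prior):

```python
def bayes_vulnerability(pi):
    """V_b(pi): probability that a single best guess is correct."""
    return max(pi)

def guessing_entropy(pi):
    """G(pi): expected number of tries when guessing in decreasing probability order."""
    return sum(i * p for i, p in enumerate(sorted(pi, reverse=True), start=1))

uniform8 = [1/8] * 8
skewed   = [1/4, 1/4, 1/8, 1/8, 1/16, 1/16, 1/16, 1/16]

print(bayes_vulnerability(uniform8), guessing_entropy(uniform8))  # 0.125, 4.5 = (8+1)/2
print(bayes_vulnerability(skewed),   guessing_entropy(skewed))    # 0.25,  3.25
```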


Guessing Entropy

Minimum value: G(π) = 1 iff π_x = 1 for some x

Maximum value: G(π) = (|X| + 1)/2 iff π is uniform

The binary case:

Still not completely satisfied

What if the adversary wants to reveal part of a secret?

Or is satisfied with an approximate value?

Or we are interested in the probability of guessing after

multiple tries?

Example

Secret: database of 10-bit passwords for 1000 users:

pwd0, pwd1, . . . , pwd999

The adversary knows that the password of some user is z, but does not know which one (all are equally likely)

A1: guess the complete database
◮ V_b(π) = 2^{-9990}

A2: guess the password of a particular user i
◮ Create distribution π_i for that user
◮ V_b(π_i) = 0.001 · 1 + 0.999 · 2^{-10} ≈ 0.00198

A3: guess the password of any user
◮ intuitively, the secret is completely vulnerable
◮ how can we capture this vulnerability?

Abstract operational scenario

A makes a guess w ∈ W about the secret

The benefit provided by guessing w when the secret is x is given by a gain function:

g : W × X → R

Success measure: the expected gain of a best guess


g-vulnerability

Expected gain under guess w: ∑_x π_x g(w, x)

Choose the one that maximizes the gain

g-vulnerability:

V_g(π) = max_{w∈W} ∑_{x∈X} π_x g(w, x)

(sup if W is infinite)

l-uncertainty measures can also be defined using loss functions: U_l(π) = min_w ∑_x π_x l(w, x)
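A minimal Python sketch (not from the slides) of V_g, with secrets indexed 0..n−1; instantiating it with the identity gain recovers Bayes vulnerability, which is discussed below.

```python
def g_vulnerability(pi, guesses, gain):
    """V_g(pi) = max over guesses w of the expected gain sum_x pi[x] * gain(w, x)."""
    return max(sum(p * gain(w, x) for x, p in enumerate(pi)) for w in guesses)

# Identity gain: guess the exact secret, gain 1 iff correct.
gid = lambda w, x: 1 if w == x else 0
pi = [0.5, 0.3, 0.2]
print(g_vulnerability(pi, guesses=range(len(pi)), gain=gid))  # 0.5 = max_x pi[x]
```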

The power of gain functions

Guessing a secret approximately: g(w, x) = 1 − dist(w, x)
(Figure: a grid of locations; nearby guesses still earn some gain.)

Guessing a part of a secret: g(w, x) = Does w match the high-order bits of x?
Lab location: N 39.95185, W 75.18749

Guessing a property of a secret: g(w, x) = Is x of gender w?
(Figure: Ann, Sue, Paul, Bob, Tom.)

Guessing a secret in 3 tries: g3(w, x) = Is x an element of set w of size 3?
(Figure: a password and a dictionary: superman, apple-juice, johnsmith62, secret.flag, history123, . . .)

Password database example

Secret: database of 10-bit passwords for 1000 users:

pwd0, pwd1, . . . , pwd999

A3: guess the password of any user

W = {p | p ∈ {0 . . . 1023}}

g(p, x) = 1 if x[u] = p for some user u, and 0 otherwise.

V_g(π) = 1

Expressiveness of Vg

Can we express Bayes-vulnerability using g-vulnerability?

A guesses the exact secret x in one try

Guesses W = X

Gain function:

g_id(w, x) = 1 if w = x, and 0 if w ≠ x.

V_{g_id} coincides with V_b


Expressiveness of Vg

What about guessing entropy?

It's an uncertainty measure, so we need loss functions

Guesses W = permutations of X

eg w1 = (x3, x1, x2)

(think of w as the order of guesses)

Loss function: l_G(w, x) = i, where i is the position of x in the permutation w

U_{l_G}(π) coincides with G(π)
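A small verification sketch (assumed, not from the slides): enumerating all permutations as guesses, with the position of x as the loss, reproduces G(π) on a toy prior.

```python
from itertools import permutations

def guessing_entropy(pi):
    return sum(i * p for i, p in enumerate(sorted(pi, reverse=True), start=1))

def l_uncertainty(pi, guesses, loss):
    """U_l(pi) = min over guesses w of the expected loss sum_x pi[x] * loss(w, x)."""
    return min(sum(p * loss(w, x) for x, p in enumerate(pi)) for w in guesses)

pi = [0.5, 0.125, 0.25, 0.125]
orders = list(permutations(range(len(pi))))   # guesses = orderings of the secrets
l_G = lambda w, x: w.index(x) + 1             # loss = position of x in the ordering

print(l_uncertainty(pi, orders, l_G))  # 1.875
print(guessing_entropy(pi))            # 1.875: the two measures coincide
```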

Expressiveness of Vg

What about Shannon entropy?

Again we need loss functions, and infinitely many guesses

Guesses W = probability distributions over X

(think of w as a way to construct the search tree)

Loss function: l_S(w, x) = −log2 w_x (the number of questions to find x using this tree)

Because of Gibbs' inequality: U_{l_S}(π) = H(π)

We can restrict to countably many guesses

Expressiveness of Vg

What other measures can we express as Vg,Ul?

What is a reasonable uncertainty measure f?

Let’s fix some desired properties of f

Desired prop. of uncertainty measures

Domain and range: f : PX → [0, ∞)

Continuity: a small change in π should have a small effect on f(π)

Concavity:

We flip a coin and give the adversary the prior π1 with probability c and the prior π2 with probability 1 − c

His uncertainty on average is c·f(π1) + (1 − c)·f(π2)

If we give him the single prior π = c·π1 + (1 − c)·π2 his uncertainty should be at least as big:

f(∑_i c_i·π_i) ≥ ∑_i c_i·f(π_i)   where ∑_i c_i = 1


Desired prop. of uncertainty measures

Concavity implies continuity everywhere except on the boundary

Shannon-entropy, Bayes-vulnerability and Guessing-entropy satisfy these properties

Desired prop. of uncertainty measures

Let UX denote the set of all uncertainty measures

(i.e. non-negative continuous concave functions of PX )

Let LX denote the set of l-uncertainty functions Ul

What’s the relationship between UX and LX?

MPRI 2.3.2 - Foundations of privacy

Lecture 3

Kostas Chatzikokolakis

Oct 3, 2016

Plan of the course

Quantitative Information Flow

◮ Motivation, application examples

◮ Secrets and vulnerability

◮ Channels and leakage

◮ Multiplicative Bayes-Capacity

◮ Comparing systems, the lattice of information

◮ Applications and exercises


Expressiveness of Vg

Bayes-vulnerability: W = X, g_id(w, x) = 1 if w = x, and 0 if w ≠ x

Guessing-entropy: W = permutations of X, l_G(w, x) = the position of x in the permutation w

Shannon-entropy: W = PX, l_S(w, x) = −log2 w_x

Expressiveness of Vg

What other measures can we express as Vg,Ul?

What is a reasonable uncertainty measure f?

Let’s fix some desired properties of f

Desired prop. of uncertainty measures

Domain and range: f : PX → [0, ∞)

Continuity: a small change in π should have a small effect on f(π)

Concavity:

We flip a coin and give the adversary the prior π1 with probability c and the prior π2 with probability 1 − c

His uncertainty on average is c·f(π1) + (1 − c)·f(π2)

If we give him the single prior π = c·π1 + (1 − c)·π2 his uncertainty should be at least as big:

f(∑_i c_i·π_i) ≥ ∑_i c_i·f(π_i)   where ∑_i c_i = 1

Desired prop. of uncertainty measures

Concavity implies continuity everywhere except on the boundary

Shannon-entropy, Bayes-vulnerability and Guessing-entropy satisfy these properties


Desired prop. of uncertainty measures

Let UX denote the set of all uncertainty measures

(i.e. non-negative continuous concave functions of PX )

Let LX denote the set of l-uncertainty functions Ul

What’s the relationship between UX and LX?

Direction UX ⊇ LX

We might have an infinite number of guess vectors w

U_l(π) = inf_w l_w(π)   where   l_w(π) = ∑_x π_x l(w, x)

Is U_l(π) concave?
◮ Yes, as an inf of concave functions

Is U_l(π) continuous?
◮ Upper semi-continuous as an inf of continuous functions
◮ Lower semi-continuous due to the Gale-Klee-Rockafellar theorem
◮ So LX ⊆ UX

What about the converse?

Geometric view of U_l

A guess w can be thought of as a vector in R^n containing the loss for each secret x:

(l(w, x1), . . . , l(w, xn))

Loss for a fixed w:

l_w(π) = ∑_x π_x w_x = π · w

The graph of l_w is a hyperplane with parameters a = (−w, 1), b = 0.

Any hyperplane (a, 1) · (π, y) = b is the graph of l_w for the guess vector w = b·1 − a.

Direction UX ⊆ LX

Supporting hyperplane theorem

If S is a convex set and x is a point on the boundary of S, then there is a supporting hyperplane that contains x (i.e. all points of S lie on one side of the hyperplane and x lies on the hyperplane)


Direction UX ⊆ LX

Theorem
Let f ∈ UX. There exists a loss function l with a countable number of guesses s.t. f = U_l. Hence UX = LX.

One guess for each π in the interior of PX

Apply the separating hyperplane thm on the hypo-graph of f

From the hyperplane construct a guess vector w_π s.t.
w_π · π = f(π) for this particular π, and
w_π · π′ ≥ f(π′) for all (other) π′

Conclude that U_l = f

Restrict to π with rational coordinates

Plan of the course

Quantitative Information Flow

◮ Motivation, application examples

◮ Secrets and vulnerability

◮ Channels and leakage

◮ Multiplicative Bayes-Capacity

◮ Comparing systems, the lattice of information

◮ Applications and exercises

Channels

Basic model from information theory to capture the behaviour of a system

Inputs X: secret events

Outputs Y: observable events

(Figure: a channel maps a secret X to an observable Y; a program maps a high value X to a low value Y.)

Probabilistic systems are noisy channels: an output can correspond to different inputs, and an input can generate different outputs, according to a probability distribution

p(oj|si): the conditional probability to observe oj given the secret si

(Figure: secrets s1, . . . , sm mapped to observations o1, . . . , on with probabilities p(o1|s1), . . . , p(on|s1).)


Channel Matrix

       y1    · · ·  yn
  x1   C11   · · ·  C1n
  ...          . . .
  xm   Cm1   · · ·  Cmn

Rows are probability distributions over the observations Y

Prior π: distribution over the secrets X

Given π, C we can construct a joint distribution over X × Y:

p(x, y) = π_x C_xy

For this distribution we have:

p(x) = ∑_y p(x, y) = π_x

p(y|x) = p(x, y) / p(x) = C_xy

Particular case: deterministic systems

In these systems an input generates only one output. Still interesting: the problem is how to retrieve the input from the output.

The entries of the channel matrix can only be 0 or 1

(Figure: each secret s1, . . . , sm maps to exactly one observation among o1, . . . , on.)

Example: DC nets (ring of 3 nodes, b=1)

Secret information: which node (n0, n1 or n2) is the source. Observables: the declared bits of the three nodes; since their binary sum must equal b = 1, the possible observations are 001, 010, 100 and 111.

Channel matrix with fair coins (Pr(0) = Pr(1) = ½): every row is uniform, so we have strong anonymity.

        001  010  100  111
  n0    ¼    ¼    ¼    ¼
  n1    ¼    ¼    ¼    ¼
  n2    ¼    ¼    ¼    ¼

Channel matrix with biased coins (Pr(0) = ⅔, Pr(1) = ⅓): the source is more likely to declare 1 than 0, so the rows differ.

        001  010  100  111
  n0    ⅓    ²⁄₉  ²⁄₉  ²⁄₉
  n1    ²⁄₉  ⅓    ²⁄₉  ²⁄₉
  n2    ²⁄₉  ²⁄₉  ⅓    ²⁄₉
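A Python sketch (not from the slides) that derives these matrices by enumerating the three edge coins; the bit-ordering of the observation labels is an assumption chosen to match the columns above.

```python
from itertools import product
from fractions import Fraction

def dc_net_channel(p0):
    """Channel of a 3-node DC-net ring broadcasting b=1, with edge coins equal to 0 with probability p0."""
    obs = ["001", "010", "100", "111"]
    C = {src: {o: Fraction(0) for o in obs} for src in range(3)}
    for coins in product([0, 1], repeat=3):          # one coin per edge: (0,1), (0,2), (1,2)
        prob = Fraction(1)
        for c in coins:
            prob *= p0 if c == 0 else 1 - p0
        c01, c02, c12 = coins
        for src in range(3):
            d = [c01 ^ c02, c01 ^ c12, c02 ^ c12]    # each node XORs its two incident coins
            d[src] ^= 1                              # the source additionally XORs the bit b=1
            # read the declarations as the string d2 d1 d0 (assumed label convention)
            C[src]["".join(map(str, reversed(d)))] += prob
    return C

print(dc_net_channel(Fraction(1, 2)))  # fair coins: every entry 1/4 (strong anonymity)
print(dc_net_channel(Fraction(2, 3)))  # biased coins: 1/3 on the diagonal, 2/9 elsewhere
```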

Password-checker 1

Let us construct the channel matrix

Note: The string x1x2x3 typed by the user is a parameter, and K1K2K3 is the channel input

The standard view is that the input represents the secret. Hence we should take

K1K2K3 as the channel input


Password-checker 1

Let us construct the channel matrix

Input: K1K2K3 ∈ {000, 001, . . . , 111}

Output: out ∈ {OK, Fail}

Assume the user string is x1x2x3 = 110: then input 110 maps to OK and every other input maps to Fail.

Different values of x1x2x3 give different channel matrices, but they all have this kind of shape (seven inputs map to Fail, one maps to OK).

Password-checker 2

Let us construct the channel matrix

Assume the adversary can measure the execution time

Input: K1K2K3 ∈ {000, 001, . . . , 111}

Output: out ∈ {OK, (Fail, 1), (Fail, 2), (Fail, 3)}, where (Fail, i) means the check failed at the i-th character

Assume the user string is x1x2x3 = 110: then 000, 001, 010, 011 map to (Fail, 1); 100 and 101 map to (Fail, 2); 111 maps to (Fail, 3); and 110 maps to OK.

Posterior distributions

  C =    y1    y2
    x1   2/4   2/4
    x2   1/4   3/4

Each observation y provides evidence for some secret(s)

Starting from π, it gives a posterior distribution σ^y ∈ PX:

σ^y_x = p(x|y) = p(x, y) / p(y)   (Bayes theorem)

Eg. from π = (1/2, 1/2) we get:

σ^{y1} = (2/3, 1/3)

σ^{y2} = (2/5, 3/5)

Observables: prior ⇒ posterior

Prior p(n) over the secret (the source): p(n0) = ½, p(n1) = ¼, p(n2) = ¼.

Conditional probabilities p(o|n) (the channel, biased coins):

        001  010  100  111
  n0    ⅓    ²⁄₉  ²⁄₉  ²⁄₉
  n1    ²⁄₉  ⅓    ²⁄₉  ²⁄₉
  n2    ²⁄₉  ²⁄₉  ⅓    ²⁄₉

Joint probabilities p(n,o) = p(n) · p(o|n):

        001   010   100   111
  n0    1/6   1/9   1/9   1/9
  n1    1/18  1/12  1/18  1/18
  n2    1/18  1/18  1/12  1/18

Probability of each observation p(o), obtained as the column sums: 5/18, 1/4, 1/4, 2/9.

Posterior, by Bayes' theorem p(n|o) = p(n,o) / p(o); for instance p(n|001) = (3/5, 1/5, 1/5).
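The same computation in Python (a sketch, not from the slides), using exact fractions to reproduce the tables above.

```python
from fractions import Fraction as F

prior = {"n0": F(1, 2), "n1": F(1, 4), "n2": F(1, 4)}
obs = ["001", "010", "100", "111"]
C = {"n0": [F(1, 3), F(2, 9), F(2, 9), F(2, 9)],
     "n1": [F(2, 9), F(1, 3), F(2, 9), F(2, 9)],
     "n2": [F(2, 9), F(2, 9), F(1, 3), F(2, 9)]}

joint = {n: [prior[n] * p for p in row] for n, row in C.items()}   # p(n, o)
p_o = [sum(joint[n][j] for n in prior) for j in range(len(obs))]   # p(o) = (5/18, 1/4, 1/4, 2/9)
posterior_001 = {n: joint[n][0] / p_o[0] for n in prior}           # p(n | 001) = (3/5, 1/5, 1/5)
print(p_o, posterior_001)
```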


Hyper distributions

  π =          C =    y1    y2      joint =   y1    y2
    x1  1/2      x1   2/4   2/4       x1      2/8   2/8
    x2  1/2      x2   1/4   3/4       x2      1/8   3/8

Posteriors: σ^{y1} = (2/3, 1/3), σ^{y2} = (2/5, 3/5)

Output distribution δ ∈ PY:

δ_y = p(y) = ∑_x p(x, y) = ∑_x π_x C_xy, hence δ = πC = (3/8, 5/8)

Hyper distribution [π,C] ∈ P²X:

          3/8   5/8
    x1    2/3   2/5
    x2    1/3   3/5

Hyper distributions

[π,C]: hyper distribution obtained from π, C

δ ∈ PY: outer distribution (output)

σ^y ∈ PX: inner distributions (posteriors)

π can be obtained by averaging the inners:

π = ∑_y δ_y σ^y

Abstract channel C: a mapping PX → P²X:

C(π) = [π,C]
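A minimal sketch (assumed, not from the slides) of the map from a prior and a channel to the hyper [π,C], checking that averaging the inners gives back the prior.

```python
def hyper(pi, C):
    """[pi, C]: outer distribution over observations plus one posterior (inner) per observation."""
    n_y = len(C[0])
    outer = [sum(pi[x] * C[x][y] for x in range(len(pi))) for y in range(n_y)]
    inners = [[pi[x] * C[x][y] / outer[y] for x in range(len(pi))] for y in range(n_y)]
    return outer, inners

pi = [1/2, 1/2]
C = [[2/4, 2/4],
     [1/4, 3/4]]
outer, inners = hyper(pi, C)
print(outer)   # [0.375, 0.625], i.e. (3/8, 5/8)
print(inners)  # posteriors (2/3, 1/3) and (2/5, 3/5)
# Averaging the inners, weighted by the outer, recovers the prior:
print([sum(outer[y] * inners[y][x] for y in range(len(outer))) for x in range(len(pi))])  # [0.5, 0.5]
```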

MPRI 2.3.2 - Foundations of privacy

Lecture 4

Kostas Chatzikokolakis

Oct 10, 2016

Plan of the course

Quantitative Information Flow

◮ Motivation, application examples

◮ Secrets and vulnerability

◮ Channels and leakage

◮ Multiplicative Bayes-Capacity

◮ Comparing systems, the lattice of information

◮ Applications and exercises


Hyper distributions

  π =          C =    y1    y2      joint =   y1    y2
    x1  1/2      x1   2/4   2/4       x1      2/8   2/8
    x2  1/2      x2   1/4   3/4       x2      1/8   3/8

Posteriors: σ^{y1} = (2/3, 1/3), σ^{y2} = (2/5, 3/5)

Output distribution δ ∈ PY:

δ_y = p(y) = ∑_x p(x, y) = ∑_x π_x C_xy, hence δ = πC = (3/8, 5/8)

Hyper distribution [π,C] ∈ P²X:

          3/8   5/8
    x1    2/3   2/5
    x2    1/3   3/5

Hyper distributions

[π,C]: hyper distribution obtained from π, C

δ ∈ PY: outer distribution (output)

σ^y ∈ PX: inner distributions (posteriors)

π can be obtained by averaging the inners:

π = ∑_y δ_y σ^y

Abstract channel C: a mapping PX → P²X:

C(π) = [π,C]

Posterior vulnerability

After running C on π we get [π,C]

How vulnerable is [π,C]?

We have vulnerability measures on PX, we need to extend them to PPX

Natural choice: averaging:

V[π,C] = ∑_y δ_y V(σ^y)

Natural geometric view

Low probability observations are not considered important!

Posterior vulnerability V[π,C] = ∑_y δ_y V(σ^y)

Notation: [π] = the hyper assigning probability 1 to π

Then V[π] = V(π)

For Bayes-vulnerability:

V_b[π,C] = ∑_y max_x π_x C_xy

For g-vulnerability:

V_g[π,C] = ∑_y max_w ∑_x π_x C_xy g(w, x)


Leakage

V[π,C] might be high because of π, not because of C!

eg. easy to guess password

What we care about is how much more vulnerable our secret becomes because of C

So we compare V[π] and V[π,C]

Additive case:

L+(π,C) = V[π,C] − V[π]

Multiplicative case:

L×(π,C) = V[π,C] / V[π]

Example

  π =          C =    y1    y2      hyper =   3/8   5/8
    x1  1/2      x1   2/4   2/4       x1      2/3   2/5
    x2  1/2      x2   1/4   3/4       x2      1/3   3/5

Prior vulnerability: V[π] = 1/2

Individual posteriors: V(σ^{y1}) = 2/3, V(σ^{y2}) = 3/5

Posterior vulnerability: V[π,C] = 3/8 · 2/3 + 5/8 · 3/5 = 5/8

Leakage: L+(π,C) = 1/8, L×(π,C) = 5/4
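A short sketch (not from the slides) recomputing this example for Bayes vulnerability.

```python
def posterior_bayes_vulnerability(pi, C):
    """V_b[pi, C] = sum_y max_x pi[x] * C[x][y]."""
    return sum(max(pi[x] * C[x][y] for x in range(len(pi))) for y in range(len(C[0])))

pi = [1/2, 1/2]
C = [[2/4, 2/4],
     [1/4, 3/4]]

prior_v = max(pi)                              # V_b[pi] = 1/2
post_v = posterior_bayes_vulnerability(pi, C)  # 5/8
print(post_v - prior_v)                        # additive leakage 1/8
print(post_v / prior_v)                        # multiplicative leakage 1.25
```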

Zero Leakage

Non-interfering channel: all rows are the same, i.e.

C_xy = C_x′y for all x, x′, y

Fix some arbitrary full-support prior π

NI iff input and output are independent

If NI then L+(π,C ) = 0

Is the converse true?

Imperfect Cancer Test

  π =            C =    Y     N       joint =   Y        N
    Y  0.008       Y   0.90  0.10       Y      0.00720  0.00080
    N  0.992       N   0.07  0.93       N      0.06944  0.92256

Assuming a "generally healthy" population, the probability of cancer is low

The probability of getting Y as a false positive is higher than that of a true positive!

Hence the best guess is always to guess no cancer! (the test by itself is useless)

Geometric view: the posteriors fall on the same hyperplane!


Capacity

Often we want to abstract from a specific prior

Capacity: maximize leakage over π

ML+(C) = max_π L+(π,C)

ML×(C) = max_π L×(π,C)

Multiplicative Bayes-Capacity

ML×_b(C) = max_π V_b[π,C] / V_b[π]

Attained at the uniform prior

Theorem: for any channel C:

ML×_b(C) = L×_b(πu, C) = ∑_y max_x C_xy
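A one-line computation (a sketch, not from the slides): summing the column maxima of the cancer-test channel from the previous example gives its multiplicative Bayes capacity.

```python
def mult_bayes_capacity(C):
    """ML_b(C) = sum over columns of the column maximum (attained at the uniform prior)."""
    return sum(max(C[x][y] for x in range(len(C))) for y in range(len(C[0])))

cancer_test = [[0.90, 0.10],
               [0.07, 0.93]]
print(mult_bayes_capacity(cancer_test))  # ~1.83; a non-interfering channel would give exactly 1
```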

Multiplicative Bayes-Capacity

Mult. Bayes-Capacity is 1 iff C is non-interfering

  π =          C =    Y     N       joint =   Y      N
    Y  0.5       Y   0.90  0.10       Y      0.45   0.05
    N  0.5       N   0.07  0.93       N      0.035  0.465

Geometric view: the posterior must be the same as the prior

Multiplicative Bayes-Capacity

A universal upper bound for leakage!

Theorem
For any channel C, prior π and gain function g:

L×_g(π,C) ≤ ML×_b(C) = ∑_y max_x C_xy


Password-checker 1

Let us construct the channel matrix

Input: K1K2K3 ∈ {000, 001, . . . , 111}

Output: out ∈ {OK, Fail}

Assume the user string is x1x2x3 = 110: then input 110 maps to OK and every other input maps to Fail.

Different values of x1x2x3 give different channel matrices, but they all have this kind of shape (seven inputs map to Fail, one maps to OK).

Password-checker 2

Let us construct the channel matrix

Assume the adversary can measure the execution time

Input: K1K2K3 ∈ {000, 001, . . . , 111}

Output: out ∈ {OK, (Fail, 1), (Fail, 2), (Fail, 3)}, where (Fail, i) means the check failed at the i-th character

Assume the user string is x1x2x3 = 110: then 000, 001, 010, 011 map to (Fail, 1); 100 and 101 map to (Fail, 2); 111 maps to (Fail, 3); and 110 maps to OK.

Exam 2015-16, Question 4 (20%)

Let C be a channel from X to Y.

5.1 Show that for any prior π and gain function g:

L×_g(π,C) ≤ |Y| and L×_g(π,C) ≤ |X|

5.2 Let πu be the uniform prior. Show that

(∀g : L×_g(πu, C) = 1)

if and only if C is non-interfering.

MPRI 2.3.2 - Foundations of privacy

Lecture 5

Kostas Chatzikokolakis

Oct 17, 2016


Plan of the course

Quantitative Information Flow

◮ Motivation, application examples

◮ Secrets and vulnerability

◮ Channels and leakage

◮ Multiplicative Bayes-Capacity

◮ Comparing systems, the lattice of information

◮ Applications and exercises

Comparing channels

When can we say that C1 is better than C2?

We could check Lg(π,C1) ≤ Lg(π,C2)

What about different π, g’s? We want robustness!

Partition refinement

Any deterministic channel C induces a partition on X: x1 and x2 are in the same block iff they map to the same output.

C_country: person → country of birth
(blocks: USA, France, Spain, Brazil, UK, China)

C_state: person → state of birth
(blocks: France, Spain, Brazil, UK, China, and the US states OR, CA, FL, OH, NY, DC)

Partition refinement ⊑: subdivide zero or more of the blocks.

C_country ⊑ C_state

Leakage ordering

C1 ≤m C2, for m ∈ {Shannon entropy, min-entropy, guessing entropy}:

the leakage of C1 is no greater than that of C2 for all priors

Theorem (Yasuoka, Terauchi, Malacaria)

On deterministic channels, the relations below coincide:
◮ ≤ Shannon entropy
◮ ≤ min-entropy
◮ ≤ guessing entropy
◮ ⊑

Can we apply this to any of the examples seen so far?


Password-checker 1

Let us construct the channel matrix

Input: K1K2K3 ∈ {000, 001, . . . , 111}

Output: out ∈ {OK, Fail}

Assume the user string is x1x2x3 = 110: then input 110 maps to OK and every other input maps to Fail.

Different values of x1x2x3 give different channel matrices, but they all have this kind of shape (seven inputs map to Fail, one maps to OK).

Password-checker 2

Let us construct the channel matrix

Assume the adversary can measure the execution time

Input: K1K2K3 ∈ {000, 001, . . . , 111}

Output: out ∈ {OK, (Fail, 1), (Fail, 2), (Fail, 3)}, where (Fail, i) means the check failed at the i-th character

Assume the user string is x1x2x3 = 110: then 000, 001, 010, 011 map to (Fail, 1); 100 and 101 map to (Fail, 2); 111 maps to (Fail, 3); and 110 maps to OK.

Composition refinement

How can we generalize this to probabilistic channels?

First issue: ⊑ is not defined for probabilistic channels

C_merge: state → country

C_country = C_state C_merge

Definition (composition-refinement)

C1 ⊑◦ C2 iff C1 = C2 C3 for some C3
(i.e. C1 is the cascade X →[C2] Y →[C3] Z)

Theorem

For deterministic channels, ⊑ and ⊑◦ coincide.

⊑◦ is a promising generalization of ⊑ to probabilistic channels.
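A toy sketch (with made-up deterministic channels, not from the slides) illustrating composition refinement: cascading C_state with a merging channel yields C_country, so C_country ⊑◦ C_state.

```python
def compose(C1, C2):
    """Cascade two channels: (C1 C2)[x][z] = sum_y C1[x][y] * C2[y][z]."""
    return [[sum(C1[x][y] * C2[y][z] for y in range(len(C2)))
             for z in range(len(C2[0]))] for x in range(len(C1))]

# Hypothetical data: 4 people, regions {OR, CA, Ile-de-France}, countries {USA, France}.
C_state = [[1, 0, 0],   # person 0 -> OR
           [0, 1, 0],   # person 1 -> CA
           [0, 0, 1],   # person 2 -> Ile-de-France
           [0, 1, 0]]   # person 3 -> CA
C_merge = [[1, 0],      # OR -> USA
           [1, 0],      # CA -> USA
           [0, 1]]      # Ile-de-France -> France
C_country = compose(C_state, C_merge)
print(C_country)  # the coarser channel: C_country = C_state C_merge, hence C_country ⊑◦ C_state
```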

Refinement and leakage ordering

composition refinement ⇔? leakage order

for probabilistic channels?


Vg and optimal strategies

Strategy S: a mapping from outputs to guesses

Represented as a channel S : Y → W (possibly deterministic)

Composition CS: a channel from X to W (the cascade X →[C] Y →[S] W)

Expected gain: E_{π,CS}(g) = ∑_{x,w} π_x (CS)_xw g(w, x)

V_g(π,C) = max_S E_{π,CS}(g)

Composition refinement

Definition (composition-refinement)

C1 ⊑◦ C2 iff C1 = C2 C3 for some C3
(i.e. C1 is the cascade X →[C2] Y →[C3] Z)

Theorem

C1 ⊑◦ C2 implies L_g(π, C1) ≤ L_g(π, C2) for all π, g

Proof: if S1 is a strategy for C1 then S2 = C3 S1 is a strategy for C2 such that C1 S1 = C2 S2.

Refinement and leakage ordering

composition refinement ⇔? leakage order

for probabilistic channels?

Definition
C1 ≤G C2 iff L_g(π, C1) ≤ L_g(π, C2) for all π, g

Theorem
C1 ⊑◦ C2 ⇒ C1 ≤G C2

an analogue of the data-processing inequality for g-leakage
("post-processing can only destroy information")

What about the converse?

It turns out that ≤ min-entropy does not imply ⊑◦

On the other hand ≤G is strong enough:

Theorem ("Coriaceous")
C1 ≤G C2 ⇒ C1 ⊑◦ C2

The proof uses the separating hyperplane theorem!


Applications

◮ Dining Cryptographers

◮ Crowds

+ exam 2015-16, Question 6

◮ Timing Attacks in Cryptosystems

Example: DC nets (ring of 3 nodes, b=1)

Secret information: which node (n0, n1 or n2) is the source. Observables: the declared bits, one of 001, 010, 100, 111.

Channel matrix with fair coins (Pr(0) = Pr(1) = ½): every row is uniform, so we have strong anonymity.

        001  010  100  111
  n0    ¼    ¼    ¼    ¼
  n1    ¼    ¼    ¼    ¼
  n2    ¼    ¼    ¼    ¼

Channel matrix with biased coins (Pr(0) = ⅔, Pr(1) = ⅓): the source is more likely to declare 1 than 0, so the rows differ.

        001  010  100  111
  n0    ⅓    ²⁄₉  ²⁄₉  ²⁄₉
  n1    ²⁄₉  ⅓    ²⁄₉  ²⁄₉
  n2    ²⁄₉  ²⁄₉  ⅓    ²⁄₉

The Crowds protocol

(Figure: a request forwarded through the crowd of users before reaching the server.)

The Crowds protocol

Adversary: group of corrupted users

n honest, c corrupted, m = n + c total

Secrets: X = {x1, . . . , xn}

Observations: the adversary only sees messages forwarded to him

Y = {y1, . . . , yn,⊥}

yi means that user i forwarded a message to the adversary

The Crowds protocol: channel

C_{xi,⊥} = α = (n − n·pf) / (m − n·pf)

C_{xi,yi} = β = c·(m − pf·(n − 1)) / (m·(m − n·pf))

C_{xi,yj} = γ = c·pf / (m·(m − n·pf))   for i ≠ j

C =
       y1   · · ·  yn   ⊥
  x1   β    · · ·  γ    α
  ...        . . .
  xn   γ    · · ·  β    α

The Crowds protocol

Posteriors (for the uniform prior):

σ^{yj} = (k, . . . , k, (m − pf·(n − 1))/m, k, . . . , k)   (the large entry at position j)

σ^⊥ = (1/n, . . . , 1/n)

Bayes-Capacity:

ML×_b(C) = n·β + α = n·(c + 1) / (c + n)

independent from pf!
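A sketch (not from the slides) that builds the Crowds channel from the formulas above and checks numerically that the multiplicative Bayes capacity n(c+1)/(c+n) does not depend on pf.

```python
from fractions import Fraction

def crowds_channel(n, c, pf):
    """Crowds channel: secrets = honest initiators x1..xn, observations = detected user y1..yn or ⊥."""
    m = n + c
    alpha = (n - n * pf) / (m - n * pf)                    # no detection (⊥)
    beta = c * (m - pf * (n - 1)) / (m * (m - n * pf))     # the initiator itself is detected
    gamma = c * pf / (m * (m - n * pf))                    # another honest user is detected
    return [[beta if i == j else gamma for j in range(n)] + [alpha] for i in range(n)]

def mult_bayes_capacity(C):
    return sum(max(row[y] for row in C) for y in range(len(C[0])))

n, c = 2, 1
for pf in [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]:
    print(mult_bayes_capacity(crowds_channel(n, c, pf)))  # always n(c+1)/(c+n) = 4/3, whatever pf
```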

The Crowds protocol

Why is ML×_b(C) independent from pf?

Is it always true for other π or g?


The Crowds protocol (n = 2, c = 1; p: probability of user 1)

(Plot: Bayes-leakage as a function of pf, for p = 0.3, 0.4, 0.5.)

The Crowds protocol (n = 2, c = 1; p: probability of user 1)

(Plot: Bayes-leakage as a function of p, for pf = 0.0, 0.5, 1.0.)

Modified Crowds

Modification: the adversary can somehow know whether the

forwarding is happening in the first round or not

n honest, c corrupted, m = n + c total

Secrets: X = {x1, . . . , xn}

Observations: Y = {(y1, 1), (y1, 2+), . . . , (yn, 1), (yn, 2+),⊥}

1: first round, 2+: second round or higher

Modified Crowds: channel

C^mod_{xi,⊥} = α = (n − n·pf) / (m − n·pf)

C^mod_{xi,(yi,1)} = β = c/m

C^mod_{xi,(yj,1)} = 0   for i ≠ j

C^mod_{xi,(yj,2)} = γ = (1 − α − β)/n

C^mod =
       (y1,1)  · · ·  (yn,1)  (y1,2)  · · ·  (yn,2)  ⊥
  x1   β       · · ·  0       γ       · · ·  γ       α
  ...           . . .                  . . .
  xn   0       · · ·  β       γ       · · ·  γ       α


Modified Crowds: leakage

C_pf: original Crowds, C^mod_pf: modified, for a given pf

Take arbitrary π, g:

Does this channel leak more or less than the real C?

C_pf ⊑◦ C^mod_pf

Does V_g(π,C) depend on pf?

C^mod_pf =◦ C^mod_p′f for all pf, p′f

where =◦ means ⊑◦ ∩ ⊒◦

Modified Crowds: strategies

What is an optimal strategy for Bayes-vulnerability?

What is an optimal strategy for the original C?

Why is the Bayes-leakage the same for the uniform prior?
◮ (yi, 2) offers no evidence: the posterior is the same as the prior. Best guess: the x with max π_x (i.e. any x)
◮ both (yi, 1) and (yi, 2) can be mapped to xi by an optimal strategy S^mod for C^mod_pf
◮ The optimal strategy S for C_pf maps yi to xi
◮ For the optimal strategies: C_pf S = C^mod_pf S^mod, so V_b(π_uni, C_pf) = V_b(π_uni, C^mod_pf)
◮ And hence independent from pf

What about other priors?

What about other g's?

Exam 2015-16, Question 6 (30%)

In the Crowds protocol, due to the probabilistic routing, each

request could pass through corrupted users multiple times before

arriving to the server, as shown in the figure below. However, in

the security analysis, we only considered as “detected” the first

user who forwards the request to a corrupted one.

To perform a more precise analysis, let us consider the first two

detected users, instead of only the first one. Let n,m be the

number of honest and total users respectively. The set of secrets

is still X = {1, . . . , n} (we are only interested in the privacy of

honest users).

Exam 2015-16, Question 6 (30%)

On the other hand, the information available to the adversary is

now more detailed. Observations are of the form y = (d1, d2)

where d1 ∈ {1, . . . , n,⊥} (the first detected user, similarly to the

original analysis) and d2 ∈ {1, . . . ,m,⊥} (the second detected

user, who might be corrupted himself).

Show that this extra information is in fact useless to the

adversary. More precisely, show that for any prior π and gain

function g:

Vg(π, C1) = Vg(π, C2)

where C2 is the channel obtained by the detailed analysis,

considering two detected users, and C1 is the channel of the

original analysis, considering a single detected user.


Timing Attacks in Cryptosystems

Remote timing attack [BonehBrumley03]

1024-bit RSA key recovered in 2 hours from standard

OpenSSL implementation across LAN

Timing Attacks in Cryptosystems

Time of Dec(sk , ct) depends on both sk , ct

Channel Cct for each ciphertext ct

Input: secret key sk

Output: time t

Timing Attacks in Cryptosystems

First counter measure: blinding

Randomize ct before decryption: ct ⊗ r

De-randomize after decryption: obtain Dec(sk , ct) from

Dec(sk , ct ⊗ r)

Possible because of properties of RSA encryption

Makes time independent from ct

n decryptions: repeated independent runs of C

Repeated independent runs

Run C multiple times with the same secret x

Output: (a1, . . . , an) ∈ Yn

Channel C^n:  C^n_{x,(a1,...,an)} = ∏_i C_{x,ai}


Repeated independent runs

The probability of (y1, y1, y3) does not depend on the exact sequence but only on the number of occurrences of each y

Type: the vector of occurrence counts of each y

(y1, y1, y3) has type (2, 0, 1)

Repeated independent runs

Channel T : X → Tn from secrets to types

C_{x,t} = ∏_i C_{x,yi}^{ti}

How does T_n relate to C^n?

Repeated independent runs

C^n factors as X →[T] T_n →[A] Y^n for some channel A,
and T factors as X →[C^n] Y^n →[B] T_n for some channel B.

Repeated independent runs

Bound the leakage of T by |T_n|:

ML×_b(T) ≤ |T_n| = (n + |Y| − 1 choose n)


Timing Attacks in Cryptosystems

Second counter measure: bucketing

Limit time to at most b buckets
◮ Run the decryption, then wait until the next bucket
◮ b should be small, eg 5 or 6
◮ much more efficient than always waiting until the max time

Combining both measures:

ML×_b(C^n) ≤ (n + b − 1 choose n)

eg n = 2^{40}, b = 5: leakage at most 2^{155}.

A 2048-bit key becomes as guessable as a 1893-bit key
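A small arithmetic check (a sketch, not from the slides) of the bucketing bound.

```python
from math import comb, log2

def capacity_bound(n_runs, buckets):
    """Upper bound on the multiplicative Bayes capacity of n repeated runs with |Y| = buckets outputs."""
    return comb(n_runs + buckets - 1, n_runs)

# n = 2**40 observed decryptions, b = 5 time buckets
bits_leaked = log2(capacity_bound(2**40, 5))
print(bits_leaked)         # ~155 bits
print(2048 - bits_leaked)  # a 2048-bit key is about as guessable as a ~1893-bit key
```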

