
ETH Library

Concrete constructions of unbalanced bipartite expander graphs and generalized conductors

Master Thesis

Author(s): Werner, Rose-Line

Publication date: 2008

Permanent link: https://doi.org/10.3929/ethz-a-005664665

Rights / license: In Copyright - Non-Commercial Use Permitted

This page was generated automatically upon download from the ETH Zurich Research Collection. For more information, please consult the Terms of use.


Eidgenössische Technische Hochschule Zürich
Swiss Federal Institute of Technology Zurich

Concrete Constructions of Unbalanced Bipartite Expander Graphs and Generalized Conductors

Rose-Line Werner

Master's Thesis in Computer Science
March - September 2008

Advisors: Prof. Dr. Ueli Maurer

Stefano Tessaro

This thesis is submitted in partial fulfillment of the requirements for the degree of Master of Science ETH in Computer Science at ETH Zurich (Swiss Federal Institute of Technology Zurich).

Acknowledgments

First of all, I want to thank Stefano Tessaro for his great support during my master's thesis. We had a lot of valuable and instructive discussions, and he gave me insights on how to write a scientific text. Furthermore, I highly appreciate that he spent many hours of his valuable time proofreading my thesis.

Also, I would like to thank Professor Dr. Ueli Maurer for being the responsible professor at ETH for my master's thesis and for his brilliant lectures on Cryptography and Information Theory, which aroused my interest in this topic. They were the main reasons why I decided to write my thesis in the theoretical computer science group.

I dedicate this thesis to my family and my friends, who have always given me strong support during my studies at ETH Zurich.

Waeldi, September 3rd, 2008

Rose-Line Werner


Abstract

Bipartite expander graphs are bipartite graphs for which the left vertex set has a guaranteed expansion parameter to the right vertex set. More formally, a bipartite graph G = (V1, V2, E) has expansion parameter γ if for every set X ⊂ V1 (with bounded size) with neighbors Γ(X) ⊆ V2, we have |Γ(X)| ≥ γ · |X|. We are interested in the unbalanced case, where the set V1 is much larger than V2. If the neighbors of each vertex in V1 can be efficiently computed, the expander graph is additionally called explicit.

In this thesis, we introduce a generalized notion of conductors, which are functions of the form {0,1}^n × {0,1}^d → {0,1}^m that take as first input a string with certain min-entropy and as second input a few truly random bits, and provide entropy guarantees on the output. These generalized conductors are usually interpreted as explicit unbalanced bipartite expander graphs with stronger properties. In particular, we provide strong composition theorems for such conductors which allow us to obtain explicit unbalanced bipartite expander graphs with good expansion parameters and sufficiently small left-degree, and to study the concrete values of these parameters.

Explicit unbalanced bipartite expander graphs with good expansion parameters can be used in cryptographic schemes (like the domain extender for public random functions due to Maurer and Tessaro). In particular, a small left-degree of the expander graph is crucial for the efficiency of the protocols using such expander graphs. Therefore, we focus on finding a construction of an explicit expander graph with small left-degree: We show non-constructively that such expander graphs (as well as other types of conductors) with small left-degree and good expansion must exist, and we try to find an explicit construction of such a good expander graph. In particular, we analyze an expander graph construction which, in complexity-theoretic terms, leads to a small left-degree, and we investigate the concrete value of this left-degree. We show that even though the left-degree is polynomial, the actual degree of the polynomial makes it infeasible to use the construction in practice.

This gives us the motivation to analyze an unbalanced bipartite expander graph construction based on selecting (according to some rule) substrings of length n as the neighbors of a string whose length is a multiple of n: Although this construction has a small left-degree and promises good expansion of the left vertices, we show that it is impossible to construct an expander graph with good expansion on the basis of substring selection.


Contents

1 Introduction
  1.1 Unbalanced Bipartite Expander Graphs
  1.2 Domain Extension for Public Random Functions
  1.3 Contributions

2 Preliminaries and Notation
  2.1 Notation
  2.2 Probability and Information Theory
    2.2.1 Entropy and Min-Entropy
    2.2.2 Distributions
  2.3 Estimations for the Binomial Coefficient
  2.4 Graph Theory

3 Expanders and Conductors
  3.1 Generalized Unbalanced Bipartite Expander Graphs
  3.2 Generalized Conductors
  3.3 Composition Theorems for Conductors
    3.3.1 Conductor Cascading
    3.3.2 Conductor Concatenation
    3.3.3 Constructing Somewhere-Conductors by Conductor Cascading
  3.4 Constructing Expander Graphs From Conductors
    3.4.1 Transforming Conductors into an Expander Graph
    3.4.2 Transforming Somewhere Conductors into an Expander Graph
  3.5 Probabilistic Existence Proofs
    3.5.1 Existence of Injective (Lossless) Conductor
    3.5.2 Existence of Injective Extracting Conductors
    3.5.3 Existence of Injective Expanders
  3.6 Application of Expander Graphs
  3.7 Notations Used in the Literature

4 Basic Constructions
  4.1 Trevisan's Extracting Conductor
    4.1.1 Nisan-Wigderson Pseudo-Random Generator
    4.1.2 Making the NW Generator an Injective Extracting Conductor
    4.1.3 Instantiation of Trevisan's Conductor
    4.1.4 Expander Graph Construction
  4.2 Constructing Conductors from Hash Functions
  4.3 Strong Condensing Conductor
    4.3.1 Reconstructive Extracting Conductors
    4.3.2 Strong Condensing Conductor Construction

5 Compositions of Basic Constructions
  5.1 Iterated Concatenation of Trevisan's Conductor
  5.2 Extracting Conductor with Almost Optimal Entropy Loss
  5.3 Improved Conductor by First Condensing
  5.4 Final Construction of an Expander Graph

6 Graph Construction with Substring Selection
  6.1 Candidate for Expander Graph Construction
  6.2 Impossibility Proof

7 Conclusions and Outlook

Bibliography

List of Figures

Index

A Task Description
  A.1 Introduction
  A.2 Description
  A.3 Tasks
  A.4 Grading of the Thesis


1 Introduction

1.1 Unbalanced Bipartite Expander Graphs

There are several functions which have interesting combinatorial properties for applications in cryptographic protocols. Extractors, condensers, and conductors are examples of such functions. In particular, the mentioned functions are all special cases of so-called bipartite expander graphs: A (K, γ)-expander graph is a bipartite graph G = (V1, V2, E) such that for every set X ⊂ V1 with |X| ≤ K, the size of its neighbor set Γ(X) is at least γ · |X|.

In this thesis, we focus on unbalanced bipartite expander graphs, where |V1| ≫ |V2|, and we analyze concrete constructions of such expander graphs. To simplify the analysis, we develop a generalized framework for bipartite expander graphs. It turns out that extractors, condensers, and conductors are special instantiations of our generalized definition. We give a survey of the different expander graph properties and of the achievable parameters, and give composition theorems.

Several constructions of unbalanced bipartite expander graphs are discussed in the literature, but unfortunately, the interesting ones come only with an asymptotic analysis of the expander graph parameters. Our goal is to give the concrete functions describing the parameter values of the analyzed expander graph constructions; we are mainly interested in the concrete value of the so-called left-degree of the expander graph, which is the maximal degree of the vertices in V1.

Unbalanced bipartite expander graphs with small left-degree are interesting for cryptographic applications. One of these applications is the hash function proposed in [MT07]. We give a short overview of this application in Section 1.2.

1.2 Domain Extension for Public Random Functions

In cryptography, functions which take as input a bit string of arbitrary length and return an (almost) random string of fixed size are important for many applications. In general, such a hash function {0,1}^* → {0,1}^ℓ is constructed by using a component function F : {0,1}^n → {0,1}^ℓ with n > ℓ and embedding this component function into an iterated construction¹ H(·), resulting in a hash function H(F) : {0,1}^* → {0,1}^ℓ. In [MT07], the authors investigate how to construct a public hash function by using a public random function² R as component function.

The main goal of [MT07] is to construct such a public random function R : {0,1}^n → {0,1}^ℓ given public random functions with smaller domain {0,1}^m. They discuss how to extend the domain of the given public random functions from {0,1}^m to {0,1}^n (with n > m)

¹ e.g. the CBC or Merkle-Damgård construction
² A public random function R is similar to a secret random function but has a private interface R_priv, to which only honest users have access, and a public interface R_pub, to which the adversary has access. Both interfaces R_priv and R_pub show the same behavior.


with the help of an efficient construction C. But first, we give an overview of the construction C, we introduce the notion of indifferentiability, which is a generalization of indistinguishability to systems with a public interface, and we also define the notion of a reduction for public systems.

For a function F, or more generally a system F with a public and a private interface, we write F = [F_pub, F_priv]. Further, we denote by ∆^D(F, G) the distinguishing advantage³ of the distinguisher D in distinguishing the public system F from the (ideal) public system G after making k queries to the systems F and G.

Then, for a function α : N → R≥0 and a function σ : N → N, we say that system F is (α, σ)-indifferentiable from G if there exists a simulator S such that

∆^D([F_pub, F_priv], [S(G_pub), G_priv]) ≤ α(k)

for all distinguishers D making at most k queries, with S making at most σ(k) queries to G_pub when interacting with D. Further, we say that a construction C(·) is an (α, σ)-reduction if

∆^D([F_pub, C(F_priv)], [S(G_pub), G_priv]) ≤ α(k).

In [MT07], the goal is to construct a reduction C to a public random function R which maps an n-bit string to an ℓ-bit string by using r + t public random functions F_1, ..., F_r : {0,1}^m → {0,1}^{tρm} and G_1, ..., G_t : {0,1}^m → {0,1}^ℓ, where security up to k = 2^{(1−ε)m} − r queries is given for a constant ε ∈ (0, 1). Further, the construction C uses r efficiently computable functions E_1, ..., E_r : {0,1}^n → {0,1}^m, which we describe later and which are of special interest for us.

Figure 1.1: Construction of a public random function {0,1}^n → {0,1}^ℓ

The construction C is illustrated in Figure 1.1 and performs the following computations for an input x ∈ {0,1}^n:

1. For all p = 1, ..., r, compute F_p(E_p(x)) = F^(1)_p(E_p(x)) || ··· || F^(t)_p(E_p(x)), where F_p(E_p(x)) ∈ {0,1}^{tρm} and F^(q)_p(E_p(x)) ∈ {0,1}^{ρm} for all q = 1, ..., t.

³ The formal definition will be given in Section 2.2.2.


2. For all q = 1, ..., t, compute the product P_q := ⨀_{p=1}^{r} F^(q)_p(E_p(x)), where ⨀ denotes multiplication in GF(2^{ρm}), with ρm-bit strings interpreted as elements of the finite field GF(2^{ρm}).

3. For all q = 1, ..., t, define w_q(x) to be the first m bits of P_q.

4. Finally, calculate for each q = 1, ..., t the value G_q(w_q(x)) and output the sum ⊕_{q=1}^{t} G_q(w_q(x)).
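The following minimal Python sketch illustrates the data flow of these four steps on toy parameters. Everything instantiated here is a hypothetical stand-in: the public random functions F_p and G_q are lazily sampled random tables, the E_p are simple placeholder maps (not input-restricting), and the GF(2^{ρm}) product is replaced by XOR for brevity, so this shows only the wiring of the construction, not the real scheme.

```python
import secrets
from functools import reduce

# Toy parameters; real instantiations would be far larger.
n, m, ell, r, t, rho = 16, 8, 8, 3, 2, 3

def random_function(out_bits):
    """Lazily sampled random table, standing in for a public random function."""
    table = {}
    def f(x):
        if x not in table:
            table[x] = secrets.randbits(out_bits)
        return table[x]
    return f

F = [random_function(t * rho * m) for _ in range(r)]  # F_p: {0,1}^m -> {0,1}^{t*rho*m}
G = [random_function(ell) for _ in range(t)]          # G_q: {0,1}^m -> {0,1}^ell
# Placeholder E_p (NOT input-restricting; only for illustrating the data flow).
E = [lambda x, p=p: (x >> p) & ((1 << m) - 1) for p in range(r)]

def C(x):
    # Step 1: split each F_p(E_p(x)) into t blocks of rho*m bits.
    blocks = [[(F[p](E[p](x)) >> (q * rho * m)) & ((1 << (rho * m)) - 1)
               for q in range(t)] for p in range(r)]
    # Step 2: combine the q-th blocks over all p (XOR stands in for the
    # GF(2^{rho*m}) product of the real construction).
    P = [reduce(lambda a, b: a ^ b, (blocks[p][q] for p in range(r)))
         for q in range(t)]
    # Step 3: w_q(x) is the first m bits of P_q.
    w = [Pq >> (rho * m - m) for Pq in P]
    # Step 4: output the XOR of the G_q(w_q(x)).
    return reduce(lambda a, b: a ^ b, (G[q](w[q]) for q in range(t)))

print(f"C(0x1234) = {C(0x1234):02x}")
```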

We now explain on an abstract level why the different stages are needed and what requirements the functions E_i must fulfill.

We see that if we omitted the public random functions F_1, ..., F_r, an unbounded adversary could deterministically calculate all outputs ⊕_{q=1}^{t} G_q(w_q(x)) and possibly find a subset of outputs which are distinguishable from truly random ℓ-bit strings. Hence, the inputs for the public random functions G_1, ..., G_t must contain some randomness.

We now discuss what requirements the function family E_1, ..., E_r must fulfill. First, we require that the adversary cannot find queries s ≠ s′ ∈ {0,1}^n such that the construction C has the same output, i.e. we want to avoid that the adversary can find a collision. To avoid collisions, we require that the function family E_1, ..., E_r is injective, i.e. for all s ≠ s′ ∈ {0,1}^n, there must exist a p ∈ {1, ..., r} with E_p(s) ≠ E_p(s′). Second, the construction must permit simulations of the public random functions F_1, ..., F_r and G_1, ..., G_t given access only to the public interface of a public random function R : {0,1}^n → {0,1}^ℓ. Furthermore, the probability of simulation failure must be small enough to allow security beyond the birthday barrier⁴. But if E_1, ..., E_r allow a relatively small number of queries to F_1, ..., F_r to reveal too many of the values w_1(x), ..., w_t(x), the simulator will possibly fail. To avoid this problem, the function family E_1, ..., E_r must satisfy the following requirement: The family should reveal only a small number of strings in {0,1}^m for a given number of input strings in {0,1}^n. Function families being injective and having this restricting property are called input-restricting functions. In [MT07] it is shown why these properties are sufficient to guarantee the security of the construction beyond the birthday barrier. We will now give a precise definition of the input-restricting functions E_1, ..., E_r, and in Section 3.6 we will show how to construct such a function family with the help of an unbalanced bipartite expander graph.

Definition 1.1 (input-restricting function, [MT07]). Let ε = ε(n) ∈ (0, 1) and let r = r(n), δ = δ(n), m = m(n) be functions of n with n > m. A family I_n of functions E_1, ..., E_r : {0,1}^n → {0,1}^m is called (n, δ, ε)-input-restricting if it satisfies the following two properties:

Injective: ∀x ≠ x′ ∈ {0,1}^n, ∃i ∈ {1, ..., r} such that E_i(x) ≠ E_i(x′).

Input-Restricting: For all subsets M_1, ..., M_r ⊆ {0,1}^m such that |M_1| + ... + |M_r| ≤ 2^{(1−ε)·m}, we have

|{x ∈ {0,1}^n | E_i(x) ∈ M_i for all i = 1, ..., r}| ≤ δ · (|M_1| + ... + |M_r|).

I_n is called explicit if r(n) is polynomial in n and if each E_i(·) can be computed in poly(n) time.
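As a sanity check of Definition 1.1, the following Python sketch brute-forces the two properties on toy parameters. The family E below is a hypothetical example chosen only so that the check runs; note that δ must bound the ratio over all admissible choices of the M_i, of which we probe just one.

```python
# Hypothetical toy family: E_1 takes the low 3 bits, E_2 the high 3 bits.
n, m, r = 6, 3, 2
E = [lambda x: x & 0b111, lambda x: x >> 3]

def is_injective(E, n):
    seen = set()
    for x in range(2 ** n):
        key = tuple(Ei(x) for Ei in E)
        if key in seen:
            return False
        seen.add(key)
    return True

def restriction_ratio(E, n, sets):
    """|{x : E_i(x) in M_i for all i}| / (|M_1| + ... + |M_r|)."""
    hit = sum(1 for x in range(2 ** n)
              if all(Ei(x) in Mi for Ei, Mi in zip(E, sets)))
    return hit / sum(len(Mi) for Mi in sets)

print("injective:", is_injective(E, n))                     # True
print("ratio:", restriction_ratio(E, n, [{0, 1}, {2, 3}]))  # 1.0 for this choice
```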

⁴ By birthday barrier we mean that the number of queries is up to O(2^{n/2}).


Choosing I_n = {E_1, ..., E_r} to be an (n, δ, ε)-input-restricting function family, the following result was stated in [MT07].

Theorem 1.2. Let ρ = ⌈n/m + 2 − ε⌉ and t = ⌈2/ε − 1⌉. Then the construction C is an (α, σ)-reduction of the public random function R : {0,1}^n → {0,1}^ℓ to the public random functions F_1, ..., F_r : {0,1}^m → {0,1}^{t·ρm} and G_1, ..., G_t : {0,1}^m → {0,1}^ℓ, where for all k ≤ 2^{(1−ε)m} − r,

α(k) ≤ 2rt(δ + 1)^{t+1} · k^{t+2} · 2^{−mt} + (1/2) · t(δ + 1) · k · (k + 2r + 1) · 2^{n−ρm}

and σ(k) ≤ δ · k. In particular, if ε is constant and δ and the cardinality r are polynomial in n, the above advantage α(k) is negligible.

Because we want the construction C to be efficient, we require that I_n is an explicit input-restricting function family. Especially the requirement of polynomial cardinality r will give us a strong requirement for the unbalanced expander graphs we are going to construct. Namely, we will need unbalanced bipartite expander graphs which have a small left-degree.

1.3 Contributions

In this thesis, we discuss the following points.

• In Chapter 3, we give a survey of our generalized framework. In particular,

  – in Section 3.1, we introduce a generalized notion of expander graphs and give the formal definitions of the expander graph properties we use;

  – in Section 3.2, a generalized notion of special combinatorial functions known as extractors, condensers or conductors is given, and we call this generalized notion generalized conductors. Furthermore, in Section 3.3, we present strong composition theorems which allow us to build generalized conductors with stronger properties than the underlying conductors;

  – in Section 3.4, we show that our generalized conductors are expander graphs with strong properties;

  – in Section 3.5, we show non-constructively that generalized conductors and expander graphs with strong properties exist, in particular with properties interesting for the application of domain extension of public random functions.

• In Chapter 4, we discuss some concrete basic constructions of generalized conductors which we will use in Chapter 5 to construct stronger generalized conductors with the help of the composition theorems introduced in Section 3.3. In particular, in Section 5.4, we present a concrete construction of an expander graph fulfilling, in complexity-theoretic terms, the requirements needed to get good input-restricting functions, and show that this construction is not applicable in practice.

• Finally, in Chapter 6, we give a strong impossibility proof which states that it is impossible to construct an expander graph with useful parameters by using a construction based on substring selection.

The notation used and some mathematical preliminaries are introduced in Chapter 2.


2 Mathematical Preliminaries and Notation

2.1 Notation

We use the following notation: With upper-case letters we denote distributions or random variables (RVs), and for their concrete values we use the corresponding lower-case letters. P_X stands for the probability distribution function of the random variable X, and P_X[x] is a shorthand for P[X = x]. With calligraphic letters (A, B, ...) we denote events or sets. With [k] we denote the set {1, 2, ..., k}.

Let y be a d-bit string and S ⊆ [d]; then we denote by y|_S the projection or restriction of y to the bits specified by S. Further, we denote by ln(·) the natural logarithm and by log(·) the logarithm to base 2.

2.2 Probability and Information Theory

We now give a short overview of the concepts from probability and information theory that we use later.

2.2.1 Entropy and Min-Entropy

One of the concepts we need is Shannon's entropy function, which measures the uncertainty of a random variable.

Definition 2.1 (entropy). For a discrete random variable X the entropy H(X) is defined as

H(X) = −∑_x P_X[x] · log₂ P_X[x],

where 0 · log₂ 0 is taken to be 0.

For a binary random variable X with bias p we define h(p) := H(X) = −(p log p + (1 − p) log(1 − p)), where h(·) is called the binary entropy function. It is easy to see that the following lemma must hold for the binary entropy.

Lemma 2.2. For any α > β such that h(β) < h(α), we have h(α) < h(β) · α/β.

To measure how random a random variable is, we use the min-entropy, which is a better measure of randomness than the entropy function.

Definition 2.3 (min-entropy). The min-entropy H∞(X) of a distribution X is defined as

H∞(X) = min_x (− log₂ P_X[x]) = − log₂ (max_x P_X[x]).

For the uniform distribution U, we have H∞(U) = H(U), but in general we have H∞(X) ≤ H(X) for a distribution X.
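The two measures are easy to compare numerically; the following short Python sketch computes both quantities for a small example distribution and confirms H∞(X) ≤ H(X).

```python
import math

def entropy(p):
    """Shannon entropy H(X) of a probability vector (Definition 2.1)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def min_entropy(p):
    """Min-entropy H_inf(X) = -log2(max_x P_X[x]) (Definition 2.3)."""
    return -math.log2(max(p))

p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p), min_entropy(p))  # 1.75 and 1.0: H_inf(X) <= H(X)
```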


Figure 2.1: Binary entropy function

2.2.2 Distributions

For distributions, we have the notion of support.

Definition 2.4 (support). The support supp(X) of a distribution X is the smallest closed set whose complement has probability zero.

Definition 2.5 (flat distribution). A distribution X is flat if it is uniform over its support S, i.e. for every x ∈ S we have P_X[x] = 1/|S|.

Lemma 2.6. A distribution X has H∞(X) ≥ k if and only if X is a convex combination of flat distributions on sets of size exactly 2^k.

Lemma 2.7 (Chernoff bound). Let X_1, X_2, ..., X_n be independent 0/1 random variables and let µ be the expectation of the sum of these n RVs; then for 0 < δ ≤ 1 we have

P[|∑_{i=1}^{n} X_i − µ| > δµ] < 2e^{−µδ²/3}.
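As a quick illustration, the following Python sketch estimates the left-hand side of Lemma 2.7 by Monte-Carlo sampling for Bernoulli(1/2) variables and compares it to the stated bound; the parameters are arbitrary toy choices.

```python
import math
import random

n, p, delta, trials = 200, 0.5, 0.2, 20000
mu = n * p
bound = 2 * math.exp(-mu * delta ** 2 / 3)

# Empirical probability that the sum deviates from mu by more than delta*mu.
hits = sum(abs(sum(random.random() < p for _ in range(n)) - mu) > delta * mu
           for _ in range(trials))
print(f"empirical {hits / trials:.4f} <= bound {bound:.4f}")
```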

Definition 2.8 (statistical difference). Two distributions X and Y with range S have statistical difference (or distinguishing advantage) ε if

|X − Y| := max_D |P_X[D(X) = 1] − P_Y[D(Y) = 1]| = max_A (P_X[X ∈ A] − P_Y[Y ∈ A]) = (1/2) ∑_{s∈S} |P_X(s) − P_Y(s)| = ε,

where we maximize over all functions D : S → {0,1} or over all subsets A ⊆ S. The function D is often called a distinguisher.

The statistical difference fulfills the triangle inequality.

Lemma 2.9. For all distributions X, Y and Z we have

|X − Z| ≤ |X − Y| + |Y − Z|.


Definition 2.10 (ε-close). Distributions X and Y with range S are ε-close if the statistical difference between X and Y is at most ε, i.e.

|X − Y| ≤ ε.

Definition 2.11 (k-source). A distribution X is a k-source if H∞(X) ≥ k. X is a (k, ε)-source if it is ε-close to some k-source.

Lemma 2.12. Let X be a distribution on a finite set S and let col(X) be the collision probability of X, i.e. the probability that two elements x, y chosen independently according to X satisfy x = y. If col(X) ≤ (1 + 4ε²)/|S|, then X is ε-close to the uniform distribution on S.

Proof. To show the lemma, we will use the Cauchy-Schwarz inequality:

∑_{i=1}^{n} x_i · y_i ≤ √(∑_{i=1}^{n} x_i²) · √(∑_{i=1}^{n} y_i²).

The statistical difference between the distribution X and the uniform distribution U over S is

|X − U| = (1/2) · ∑_{s∈S} |P_X[s] − 1/|S||
        = (1/2) · ∑_{s∈S} (|P_X[s] − 1/|S|| · 1)
        ≤ (1/2) · √(∑_{s∈S} (P_X[s] − 1/|S|)²) · √(∑_{s∈S} 1²)        (1)
        = (√|S|/2) · √(∑_{s∈S} (P_X[s]² − 2P_X[s]/|S| + 1/|S|²))
        = (√|S|/2) · √(∑_{s∈S} P_X[s]² − 2/|S| + |S|/|S|²)            (2)
        = (√|S|/2) · √(∑_{s∈S} P_X[s]² − 1/|S|),

where at Step (1) we used the Cauchy-Schwarz inequality and at Step (2) we used the fact that ∑_{s∈S} P_X[s] = 1.

Note that ∑_{s∈S} P_X[s]² is the collision probability of X; thus, if we insert ∑_{s∈S} P_X[s]² ≤ (1 + 4ε²)/|S| into the equation, we get

|X − U| ≤ (√|S|/2) · √((1 + 4ε²)/|S| − 1/|S|) = ε.

Hence, X must be ε-close to uniform if col(X) ≤ (1 + 4ε²)/|S|.
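A small numeric illustration of Lemma 2.12, with an arbitrary toy distribution: we compute col(X), solve col(X) = (1 + 4ε²)/|S| for ε, and check that the exact distance to uniform is indeed at most that ε.

```python
import math

X = {0: 0.30, 1: 0.27, 2: 0.23, 3: 0.20}        # arbitrary toy distribution
S = list(X)
col = sum(p * p for p in X.values())            # collision probability
eps = 0.5 * math.sqrt(len(S) * col - 1)         # solve col = (1 + 4*eps^2)/|S|
dist = 0.5 * sum(abs(p - 1 / len(S)) for p in X.values())
print(f"distance {dist:.4f} <= eps {eps:.4f}")  # 0.0700 <= 0.0762
```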


We state the following fact without giving a proof.

Lemma 2.13. If a random variable Z is not ε-close to any distribution with min-entropy log(Λ/ε), then there exists a set S with |S| = Λ such that P[Z ∈ S] > ε.

If there is an algorithm D which can not only distinguish two distributions but can also non-trivially predict the next output bit given the preceding output bits, we call the algorithm a next-bit predictor. We now give the formal definition of a next-bit predictor.

Definition 2.14 (next-bit predictor). Let X be a distribution over {0,1}^n. We call a function T : {0,1}^{<n} → {0,1} a next-bit predictor for the distribution X with success p ≥ 1/2 if

P_{i∈[n], x∼X}[T(x_1, x_2, ..., x_{i−1}) = x_i] ≥ p,

where x is drawn from X and x_1, x_2, ..., x_i denote its first i bits.

Every next-bit predictor with success 1/2 + ε is also a distinguisher with success ε. The converse is also true: Every distinguisher can be transformed into a next-bit predictor, but with a loss in the advantage. This fact is stated in the following well-known lemma due to Yao.

Lemma 2.15 (Yao). If a distribution X over {0,1}^m is not ε-close to uniform, then there exists a next-bit predictor T for the distribution X with success 1/2 + ε/m.
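The optimal predictor used in the proof of the next lemma is easy to implement exhaustively for small m; the following Python sketch (with an arbitrary toy distribution) predicts the majority value of the next bit given the prefix and measures the average success.

```python
m = 3
# Arbitrary toy distribution over {0,1}^3, uniform on six of the eight strings.
P = {y: 1 / 6 for y in [(0,0,0), (0,0,1), (0,1,0), (1,0,0), (1,1,1), (1,0,1)]}

def cond_p1(prefix, i):
    """P[Y_i = 1 | Y_1..Y_{i-1} = prefix]."""
    num = sum(p for y, p in P.items() if y[:i] == prefix + (1,))
    den = sum(p for y, p in P.items() if y[:i - 1] == prefix)
    return num / den if den else 0.5

success = 0.0
for y, p in P.items():
    for i in range(1, m + 1):
        guess = 1 if cond_p1(y[:i - 1], i) > 0.5 else 0
        success += p * (guess == y[i - 1]) / m
print(f"success of the optimal predictor: {success:.3f}")  # always >= 1/2
```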

Lemma 2.16. Let Y be a distribution over {0,1}^m with min-entropy H∞(Y) ≤ εm. Then there exists a next-bit predictor T : {0,1}^{<m} → {0,1} for Y with success 1 − ε.

Proof. Let Y = (Y_1, Y_2, ..., Y_m) be a distribution over {0,1}^m with entropy H(Y) ≤ εm. Further, let

p_{i|y_1,...,y_{i−1}} := P[Y_i = 1 | Y_1 = y_1, ..., Y_{i−1} = y_{i−1}].

If i and y_1, ..., y_{i−1} are given, an optimal next-bit predictor T outputs a 1 if p_{i|y_1,...,y_{i−1}} > 1/2 and a 0 otherwise. The next-bit predictor has an error of

E_{i∈[m], y∼Y}[min{p_{i|y_1,...,y_{i−1}}, 1 − p_{i|y_1,...,y_{i−1}}}].

We notice that for 0 ≤ p ≤ 1, we have

min{p, 1−p} ≤ min{p, 1−p} · log(1/min{p, 1−p}) ≤ p log(1/p) + (1−p) log(1/(1−p)) = H(p).

Therefore, we have

E_{i∈[m], y∼Y}[min{p_{i|y_1,...,y_{i−1}}, 1 − p_{i|y_1,...,y_{i−1}}}] ≤ E_{i∈[m], y∼Y}[H(p_{i|y_1,...,y_{i−1}})]
= (1/m) ∑_{i=1}^{m} H(Y_i | Y_1, Y_2, ..., Y_{i−1})
= (1/m) H(Y) ≤ ε.

In particular, we have H∞(X) ≤ H(X) for all distributions X, and therefore (1/m) H∞(Y) ≤ ε must hold, too.


2.3 Estimations for the Binomial Coefficient

Definition 2.17 (binomial coefficient). The binomial coefficient is defined as

(n choose k) := n! / ((n − k)! · k!).

According to Stirling, we can approximate the factorial of a number as follows:

Lemma 2.18. For n ∈ N, we have

n! > √(2πn) · (n/e)^n · e^{1/(12n+1)},

where e is Euler's number.

An application of Stirling's approximation gives us the following upper bound for the binomial coefficient.

Lemma 2.19. For n, k ∈ N with k ≤ n, we have

(n choose k) ≤ (e · n / k)^k,

where e is Euler's number.

For our calculations we need a special upper bound for the sum of binomial coefficients, which we state in the following lemma.

Lemma 2.20. For n ∈ N and 0 ≤ α ≤ 1/2 with αn ∈ N, we have

∑_{i=0}^{αn} (n choose i) ≤ 2^{n·h(α)},

with h(α) being the binary entropy function.

Proof. We have

1 = (α + (1−α))^n ≥ ∑_{i=0}^{αn} (n choose i) α^i (1−α)^{n−i}
  ≥ (1−α)^n ∑_{i=0}^{αn} (n choose i) (α/(1−α))^{αn}
  = α^{αn} (1−α)^{n(1−α)} ∑_{i=0}^{αn} (n choose i),

hence we have

∑_{i=0}^{αn} (n choose i) ≤ α^{−αn} (1−α)^{−n(1−α)} = 2^{−n(α log(α) + (1−α) log(1−α))} = 2^{n·h(α)}.
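Both estimates are easy to confirm numerically; a small Python check of Lemmas 2.19 and 2.20:

```python
import math

def h(a):
    return 0.0 if a in (0, 1) else -(a * math.log2(a) + (1 - a) * math.log2(1 - a))

n = 40
for k in (2, 5, 10, 20):                      # all satisfy k/n <= 1/2
    assert math.comb(n, k) <= (math.e * n / k) ** k                           # Lemma 2.19
    assert sum(math.comb(n, i) for i in range(k + 1)) <= 2 ** (n * h(k / n))  # Lemma 2.20
print("Lemmas 2.19 and 2.20 hold for n = 40")
```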


We will also need a lower bound for the binomial coefficient, described in the next lemma.

Lemma 2.21. For n ∈ N and 0 ≤ α ≤ 1/2 with αn ∈ N, we have

(n choose αn) ≥ 2^{n·h(α)} / (e · √(2πα(1−α)n)),

with h(α) being the binary entropy function.

Proof. We have

(n choose αn) = n! / ((αn)! · ((1−α)n)!),

and applying Stirling's approximation n! > √(2πn) · (n/e)^n · e^{1/(12n+1)} leads to

(n choose αn) > [√(2πn) · (n/e)^n] / [√(2παn) · √(2π(1−α)n) · (αn/e)^{αn} · ((1−α)n/e)^{(1−α)n}] · e^{1/(12n+1)} / (e^{1/(12αn+1)} · e^{1/(12(1−α)n+1)}).

If we look at the last factor

e^{1/(12n+1)} / (e^{1/(12αn+1)} · e^{1/(12(1−α)n+1)}),

we see that it attains its minimum for α = 0, and this minimum is 1/e. Thus,

(n choose αn) > [√(2πn) · (n/e)^n] / [√(2παn) · √(2π(1−α)n) · (αn/e)^{αn} · ((1−α)n/e)^{(1−α)n}] · (1/e)
= 1/(e · √(2πα(1−α)n)) · n^n / ((αn)^{αn} · ((1−α)n)^{(1−α)n})
= 1/(e · √(2πα(1−α)n)) · n^n / (n^{αn} · n^{(1−α)n} · (α^α · (1−α)^{1−α})^n)
= 1/(e · √(2πα(1−α)n)) · n^n / (n^n · 2^{n·(α log α + (1−α) log(1−α))})
(*)= 1/(e · √(2πα(1−α)n)) · 2^{n·h(α)},

where the powers of e from the (·/e) terms cancel and at Step (*) we used the definition of the binary entropy function.
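A corresponding numeric check of the lower bound of Lemma 2.21:

```python
import math

def h(a):
    return -(a * math.log2(a) + (1 - a) * math.log2(1 - a))

n = 64
for k in (4, 8, 16, 32):                      # all satisfy k/n <= 1/2
    alpha = k / n
    lower = 2 ** (n * h(alpha)) / (math.e * math.sqrt(2 * math.pi * alpha * (1 - alpha) * n))
    assert math.comb(n, k) > lower
print("Lemma 2.21 holds for n = 64")
```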

2.4 Graph Theory

For a graph G we use the notation G = (V, E), where V is the vertex set and E is the (multi)set of all edges of the graph G. We say G is a multigraph if there are two or more identical elements in E, i.e. there are pairs of vertices which are connected by more than one edge. In this work, we will only use undirected graphs, which means that (v1, v2) ∈ E implies that we can also go from vertex v2 to vertex v1, but we do not explicitly insert (v2, v1) into E as well. An undirected (multi)graph G = (V, E) is called bipartite if there exists a partition V = V1 ∪ V2 of the vertex set with V1 ∩ V2 = ∅ such that every edge in E is of the form (v1, v2) for v1 ∈ V1 and v2 ∈ V2. We call G balanced if |V1| = |V2|. In the remaining sections we will write G = (V1, V2, E) for a bipartite (multi)graph.

An interesting property of a vertex v in G is its degree d(v), which is the number of edges incident to v. For bipartite graphs, the concept of the degree can be generalized to a property of the whole graph. Namely, we say that a bipartite graph G = (V1, V2, E) has left-degree D if the degree of every v ∈ V1 is upper bounded by D. The right-degree of G is defined analogously.


3 Generalized Unbalanced Bipartite Expander Graphs and Generalized Conductors

In the first two sections of this chapter, we introduce the framework of our generalized unbalanced bipartite expander graphs (expander graphs for short) and of our generalized conductors. At the end of this chapter, we give the relation between our generalized notions and the notions used in the literature. In Section 3.3, we present strong composition theorems which allow us to build generalized conductors with stronger properties than the underlying conductors. Furthermore, in Section 3.4, we show that every generalized conductor is a generalized expander graph. In Section 3.5, we prove non-constructively the existence of generalized conductors and expander graphs with strong properties, and in Section 3.6 we investigate the application introduced in Section 1.2 and show how to interpret expander graphs to get an input-restricting function family.

3.1 Generalized Unbalanced Bipartite Expander Graphs

There are several definitions of what a graph has to fulfill to be an expander graph. For example, for all subsets of vertices of a certain size, the number of outgoing edges from the subset has to be bigger than the size of the subset. But we use another definition which is widely accepted. Namely, we will require that for every subset of vertices with size up to a given bound, the number of its neighbors is at least the size of the subset multiplied by an expansion factor. In this work, we additionally require that an expander graph be a bipartite (multi)graph G = (V1, V2, E), and we require only good vertex expansion of the left vertex set. More formally, for every X ⊂ V1 (with restricted size), we have |Γ(X)| ≥ γ · |X|, where Γ(X) is the set of all neighbors of X in V2 and γ is called the expansion factor. We now give the formal definition of bipartite expander graphs, which has additional restrictions on the size of X and which is a generalized notion of the commonly used definition because we introduce an additional parameter, namely the lower bound for the set size |X|.

Figure 3.1: Example of an expander graph: K_{5,3}


Definition 3.1 (expander graph). A bipartite (multi)graph G = (V1, V2, E) with |V1| = N, |V2| = M and left-degree D is an (N, K_min, K_max) × (D) →γ (M) expander graph if |Γ(X)| ≥ γ · |X| for all subsets X ⊂ V1 such that |X| = K and K ∈ [K_min, K_max], where Γ(X) ⊆ V2 is the set of neighbors of X.

It is easy to see that γ ≤ D. One example of such an expander graph is the graph K_{5,3}¹: every subset of the left vertex set has three neighbors, and we get a minimal vertex expansion of 3/5. However, we would like such a graph to be sparse and to have small left-degree. Furthermore, we will only consider unbalanced expander graphs with |V1| ≫ |V2|, which is a reason why we are only interested in the expansion property of the left vertex set. In particular, whenever we talk about expander graphs or expanders, we actually mean unbalanced bipartite expander graphs. We now introduce the notion of injectivity for an expander graph, which will later, in Section 3.6, be of special interest for the application of expander graphs.
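The expansion factor of such a small graph can be checked by brute force; the following Python sketch evaluates Definition 3.1 on K_{5,3} and recovers the minimal vertex expansion 3/5.

```python
from itertools import combinations

V1, V2 = range(5), range(3)
Gamma = {v: set(V2) for v in V1}   # K_{5,3}: every left vertex sees all of V2

def expansion(Gamma, V1, k_min, k_max):
    """Largest gamma with |Gamma(X)| >= gamma * |X| for all admissible X."""
    gamma = float("inf")
    for k in range(k_min, k_max + 1):
        for X in combinations(V1, k):
            nbrs = set().union(*(Gamma[v] for v in X))
            gamma = min(gamma, len(nbrs) / len(X))
    return gamma

print(expansion(Gamma, V1, 1, 5))  # 0.6 = 3/5
```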

Definition 3.2 (injective expander graph). Let Γ(v, i) be the i-th neighbor of vertex v. An (N, K_min, K_max) × (D) →γ (M) expander graph G = (V1, V2, E) is injective if

∀v ≠ v′ ∈ V1 : ∃i such that Γ(v, i) ≠ Γ(v′, i).

Every expander graph can be interpreted as a function. Let G = (V1, V2, E) with |V1| = 2^n, |V2| = 2^m and D = 2^d be an expander graph; then we can define a function F_G : {0,1}^n × {0,1}^d → {0,1}^m, where F_G(x, i) is the i-th neighbor of x. Formally:

Lemma 3.3. A (2^n, K_min, K_max) × (2^d) →γ (2^m) expander graph G = (V1, V2, E) is a function F_G : {0,1}^n × {0,1}^d → {0,1}^m for which, for every distribution X over {0,1}^n with K_min ≤ |supp(X)| ≤ K_max, we have |supp(F_G(X, U_d))| ≥ γ · |supp(X)|.

Proof. Because not every left vertex has 2^d = D neighbors, we double some edges until every left vertex has degree D. Though we get a multigraph, it is still a valid expander graph. Furthermore, we know that for every set X ⊆ V1 with K_min ≤ |X| ≤ K_max we have |Γ(X)| ≥ γ · |X|. Let X be a distribution with K_min ≤ |supp(X)| ≤ K_max. We set X := supp(X), and because K_min ≤ |supp(X)| ≤ K_max, we have |Γ(supp(X))| ≥ γ · |supp(X)|. Together with the definition of the function F_G, this leads to |supp(F_G(X, U_d))| ≥ γ · |supp(X)|.

We are interested in expander graphs with big vertex sets V1 and V2, and thus it is not practical to describe the expander graph by a |V1| × |V2| matrix. Hence, we want the function F_G : {0,1}^n × {0,1}^d → {0,1}^m to run in poly(n) time for all inputs, so that we can build the expander graph efficiently. We call expander graphs explicit if they are such functions F computable in poly(n) time.

Definition 3.4 (explicit expander graph). An (N, K_min, K_max) × (D) →γ (M) expander graph G = (V1, V2, E) is explicit if the function F_G : {0,1}^n × {0,1}^d → {0,1}^m can be evaluated in poly(n) time for every input.

¹ K_{5,3} denotes the complete bipartite graph G = (V1, V2, E) with |V1| = 5 and |V2| = 3.

Often, it is easier to view an expander graph as a function, as described in Lemma 3.3, and to do the analysis for this function rather than for the graph. In particular, we will consider functions which have stronger properties besides being an expander graph, and afterwards we show how to construct further expander graphs which rely on these additional properties of the underlying functions.

Therefore, we will introduce in the next section a family of special combinatorial functions, called generalized conductors, and show in Section 3.4 that every generalized conductor is an expander graph.

3.2 Generalized Conductors

Building an expander graph is a non-trivial problem; it is often achieved by first constructing special combinatorial functions which extract some random bits from a k-source and afterwards transforming these functions into an expander graph. These functions are known in the literature as condensers, conductors or extractors. Each of these combinatorial functions has its own properties and theorems, but we point out that they actually have a lot in common; thus, we develop a generalized framework and call the generalized combinatorial function a generalized conductor. Condensers, conductors and extractors are special instantiations of our conductors, and in Section 3.7 we will show how to instantiate our generalized conductors to get condensers, conductors and extractors. We call our generalized conductors just conductors and point out which version we mean whenever confusion with the original conductors is possible.

We now introduce our generalized conductor framework and start by giving a definition of the most general form of a conductor.

Definition 3.5 (generalized conductor). A function C : {0,1}^n × {0,1}^d → {0,1}^m is an (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor if for every distribution X over {0,1}^n with min-entropy k′ ∈ [k_min, k_max], the output distribution C(X, U_d) is ε-close to a distribution with min-entropy k_m(k′), where k_m is a monotone growing function in k′.
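For very small parameters, the conductor property can be verified exhaustively. The following Python sketch does this for a toy candidate; it relies on the (assumed, standard) fact that the statistical distance from a distribution to the nearest k-source equals the total probability mass exceeding the cap 2^{−k}, provided 2^k is at most the size of the range.

```python
from collections import Counter

def dist_to_k_source(P, k):
    """Distance from P to the nearest k-source (assumes 2^k <= range size)."""
    cap = 2 ** (-k)
    return sum(max(p - cap, 0) for p in P.values())

def output_dist(C, X, d):
    """Distribution of C(X, U_d) for X given as a dict of probabilities."""
    out = Counter()
    for x, px in X.items():
        for y in range(2 ** d):
            out[C(x, y)] += px / 2 ** d
    return out

C = lambda x, y: (x + y) % 8          # toy candidate with n = m = d = 3
X = {0: 0.5, 1: 0.5}                  # a 1-source over {0,1}^3
print(dist_to_k_source(output_dist(C, X, 3), 3))  # 0.0: output is a 3-source
```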

The second input in {0,1}^d is often called the seed. Note that the distribution X over {0,1}^n can have more than k_max min-entropy, but the function k_m is only defined for k′ ∈ [k_min, k_max]. Hence, we only get the guarantee that up to k_m(k_max) min-entropy will be extracted, even if the input min-entropy is larger than k_max.

Sometimes a stronger property is needed, namely that even if one reveals the d random input bits, the output bits still contain the condensed randomness:

Definition 3.6 (strong conductor). An (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor C is called strong if (C(X, U_d), U_d) is ε-close to a distribution with k_m(k′) + d min-entropy.

As for expander graphs, we introduce a notion of injectivity for our conductors.

Definition 3.7 (injective conductor). We call an (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor injective if

∀x ≠ x′ ∈ {0,1}^n, ∃y ∈ {0,1}^d such that C(x, y) ≠ C(x′, y).

As for expander graphs, we define explicit conductors.

Definition 3.8 (explicit conductor). Let k_min, k_max, d, m and k_m(k′) be functions of n. Then C is an explicit (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor if C(·, ·) can be computed in poly(n) time².

² More formally, C is a family of conductors C_n which are efficient with respect to the security parameter n, where all parameters depend on n.


Ideally, one could hope that a conductor extracts all k′ random bits from the first input and that also all d random bits are transferred to the output. But not every conductor is able to achieve this. The gap between the min-entropy actually obtained in the output and the ideally possible min-entropy is called the entropy loss.

Definition 3.9 (entropy loss). The entropy loss ∆ of an (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor C is the function ∆(k′) := k′ + d − k_m(k′), defined for k′ ∈ [k_min, k_max]. For C being strong, the entropy loss is ∆(k′) := k′ − k_m(k′).

If the conductor preserves at least the min-entropy of the input distribution, we say that it is a condensing conductor³.

Definition 3.10 (condensing conductor). An (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor C is condensing if k_m(k′) ≥ k′ for k′ ∈ [k_min, k_max].

A stronger requirement is to have a conductor which retains all the input min-entropy, and not only the min-entropy of the first input. We call such conductors lossless.

Definition 3.11 (lossless conductor). An (n, k_min, k_max) × (d) →ε (m, k′ ↦ k′ + d) conductor C is lossless.

It is easy to see that the following is true.

Lemma 3.12. Every strong condensing conductor is a lossless conductor.

Figure 3.2: Function C extracting p random bits from a k-source

Another special family of conductors is one where the m-bit output is actually an (m, ε)-source, i.e. almost truly random, if the input has min-entropy k_max. These conductors are known as extracting conductors.

Definition 3.13 (extracting conductor). An (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor Ext is called an extracting conductor if k_m(k_max) = m.

An important fact is that a non-trivial extracting conductor cannot be lossless, where by non-trivial we mean that k_max ≥ 1. There is always an entropy loss, and its lower bound is shown in [RT00]:

³ We use the term condensing as a synonym for preserving. In the literature, there are functions called condensers, and our conductors are a generalization of those condensers. More details about condensers can be found in [TUZ01].


Lemma 3.14 (lower bound on the entropy loss). Every non-trivial extracting (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor has an entropy loss ∆(k′) = d + k′ − k_m(k′) ≥ 2 log(1/ε) − O(1) for k′ ∈ [k_min, k_max].

Additionally, [RT00] states the following lower bound on the seed length:

Lemma 3.15 (lower bound on the seed length). Every non-trivial extracting (n, k_min, k_max) × (d) →ε (m, k_m(k′)) conductor has seed length d ≥ log(n − k_max) + 2 log(1/ε) − O(1) for k′ ∈ [k_min, k_max].

We introduce now another special form of a conductor C. Let C be a function C : 0, 1n×0, 1d → 0, 1m·t which outputs t blocks of length m and for all but a fraction of σ inputsthere is at least one m-bit block in the output which has the needed conductor properties. Wecall this functions σ-somewhere conductors because the output contains somewhere a blockwith the wished properties. We give now the formal denition.

Definition 3.16 (somewhere conductor). A function C : {0,1}^n × {0,1}^d → {0,1}^{m·t} is a σ-somewhere (n, k_min, k_max) × (d) →ε (m·t, k_m(k′)) conductor if for every k′-source X over {0,1}^n there exists a selection function I_{P_X} : {0,1}^n → {1, ..., t} ∪ {⊥} such that P_{I_{P_X}(X)}[⊥] ≤ σ and the distribution of C^(i)(X, U_d) conditioned on I_{P_X}(X) = i is a (k_m(k′), ε)-source for all i = 1, ..., t with P_{I_{P_X}(X)}(i) > 0, where C(X, U_d) = C^(1)(X, U_d) || ... || C^(t)(X, U_d) and C^(i)(X, U_d) ∈ {0,1}^m for all i = 1, ..., t.

That is, for all but a fraction σ of inputs x ∈ {0,1}^n of a k′-source X, the selection function I_{P_X} associates an index i with x such that C^(i)(x, U_d) is ε-close to a k_m(k′)-source. Further, it is easy to see that every conductor is also a 0-somewhere conductor. A special case of a somewhere conductor arises when we use extracting conductors as the basic building blocks C^(i). Let Ext^(k) be an extracting (n, k, k) × (d(k)) →ε (m(k), m(k)) conductor for min-entropy k, which needs d(k) random bits as seed and outputs an almost random m(k)-bit string, where m(·) is a monotone growing function. Depending on k, the length of the seed and of the output of Ext^(k) changes. We will use k different extracting conductors Ext^(1), Ext^(2), ..., Ext^(k), one for every min-entropy in {0, 1, ..., k}. Let d be the longest seed length needed, and ignore the last few seed bits if Ext^(i) does not need them. We denote by m the maximal output length m(k_max) and extend every output to length m by padding zeros, i.e. we get an (n, i, i) × (d) →ε (m, m(i)) conductor C^(i)(x, y) = Ext^(i)(x, y) || 00···0. For every distribution X we set the selection function I_{P_X}(·) ≡ i, where 2^i ≤ H∞(X) ≤ 2^{i+1} must hold. Note that P_{I_{P_X}(X)}[⊥] = 0 and hence σ = 0, because for every i-source X we can associate a conductor C^(i) which returns an m(i)-source for all inputs x ∈ supp(X).

Lemma 3.17. If we use extracting conductors as the C^(i) as described above, we get a 0-somewhere (n, 0, k) × (d) →ε (m·k, m(k′)) conductor.

We point out that we do not analyze expander graph constructions which use this special kind of somewhere conductor. Instead, we will present an alternative construction in Section 5.4 which leads to expander graphs with smaller left-degree.


3.3 Composition Theorems for Conductors

In this section, we present different constructions to combine conductors in order to get a new conductor with stronger properties. Some compositions were already introduced in the literature for special combinatorial functions like extractors and condensers [RRV99, TUZ01]. We present improved versions of these different compositions by also considering injectivity, and set them in a more generalized framework to fit our generalized conductor notion.

3.3.1 Conductor Cascading

We start by dening a cascading operator for conductors as follows:

Denition 3.18 (cascading). Let C1 : 0, 1n × 0, 1d1 → 0, 1m1 and C2 : 0, 1n ×0, 1d2 → 0, 1m2 be two functions, then we dene the cascading (C2 C1) : 0, 1n ×0, 1d1+d2 → 0, 1m2 by

(C2 C1)(x; y1, y2) = C2(C1(x, y1), y2).
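As a higher-order function, the cascading operator is a one-liner; the following Python sketch passes the seed as an explicit pair (y1, y2), with hypothetical toy stand-ins for C1 and C2.

```python
def cascade(C1, C2):
    """(C2 o C1)(x; y1, y2) = C2(C1(x, y1), y2), seed passed as a pair."""
    return lambda x, y: C2(C1(x, y[0]), y[1])

# Hypothetical toy stand-ins for conductors on integers:
C1 = lambda x, y1: (x ^ y1) & 0xFF
C2 = lambda z, y2: (z + y2) % 16
C = cascade(C1, C2)
print(C(0xAB, (0x0F, 3)))  # 7
```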

An illustration of cascading two functions C1 and C2 is given in Figure 3.3.

Figure 3.3: Conductor cascading

If we cascade two conductors with special properties, then their cascading (C2 ∘ C1) is also a conductor, which inherits the properties of the basic conductors C1 and C2. We state this fact in the next lemma.

Lemma 3.19 (conductor cascading). Given two conductors C1 and C2, where

• C1 is a (strong) (injective) (n, k_min, k_max) × (d1) →ε1 (m1, k1(k′)) conductor and

• C2 is a (strong) (injective) (m1, k1(k_min), k1(k_max)) × (d2) →ε2 (m2, k2(k′)) conductor,

their cascading (C2 ∘ C1) is a (strong) (injective) (n, k_min, k_max) × (d1 + d2) →ε1+ε2 (m2, k2(k1(k′))) conductor.

Proof. First, we show the non-strong case: Let X be a k′-source over {0,1}^n with k′ ∈ [k_min, k_max]. We know that C1(X, U_{d1}) is ε1-close to a distribution Y with k1(k′) min-entropy. Further, C2(Y, U_{d2}) is ε2-close to a distribution having k2(k1(k′)) min-entropy. Therefore, by applying the triangle inequality for the statistical difference (Lemma 2.9), we have that C2(C1(X, U_{d1}), U_{d2}) is (ε1 + ε2)-close to a distribution with k2(k1(k′)) min-entropy.


Second, we prove the case where C1 and C2 are strong conductors: Let again X be a k′-source over {0,1}^n with k′ ∈ [k_min, k_max]. Because C1 is strong, we have that (C1(X, U_{d1}), U_{d1}) is ε1-close to a distribution (Y1, U_{d1}) over {0,1}^{m1} × {0,1}^{d1} with k1(k′) + d1 min-entropy, and thus Y1 has at least k1(k′) min-entropy. We know that (C2(C1(X, U_{d1}), U_{d2}), U_{d2}) is (ε1 + ε2)-close to a distribution (Y2, U_{d2}) over {0,1}^{m2} × {0,1}^{d2} with k2(k1(k′)) + d2 min-entropy because C2 is a strong conductor. Hence, (C2(C1(X, U_{d1}), U_{d2}), U_{d1}, U_{d2}) is (ε1 + ε2)-close to a distribution having k2(k1(k′)) + d1 + d2 min-entropy, which implies that C2 ∘ C1 is a strong conductor.

Finally, we show that if C1 and C2 are injective conductors, then so is C2 ∘ C1: Let x, x′ ∈ {0,1}^n be two distinct values. Because C1 is injective, there must exist a u1 ∈ {0,1}^{d1} such that C1(x, u1) ≠ C1(x′, u1). We set y = C1(x, u1) and y′ = C1(x′, u1), where we know that y, y′ ∈ {0,1}^{m1} are two distinct values. From the fact that C2 is injective, there must be a u2 ∈ {0,1}^{d2} such that C2(y, u2) ≠ C2(y′, u2). Therefore, if we set u = u1||u2, we have found a u ∈ {0,1}^{d1+d2} such that C2(C1(x, u1), u2) ≠ C2(C1(x′, u1), u2), and hence C2 ∘ C1 must be injective, too.

We will use the cascading composition in Section 5.3.

3.3.2 Conductor Concatenation

In this section, we present a concatenation operator || which is based on the concatenationintroduced in [RRV99]. The proof is adapted for our generalized conductors.

Denition 3.20 (conductor concatenation). Let C1 : 0, 1n × 0, 1d1 → 0, 1m1 andC2 : 0, 1n × 0, 1d2 → 0, 1m2 be two functions. Then we dene their concatenation(C1||C2) : 0, 1n × 0, 1d1+d2 → 0, 1m1+m2 as

C(x, (y1, y2)) = C1(x, y1)||C2(x, y2).
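The concatenation operator looks almost identical in code, except that both functions read the same first input x; again the C1, C2 below are hypothetical toy stand-ins.

```python
def concat(C1, C2):
    """(C1 || C2)(x, (y1, y2)) = C1(x, y1) || C2(x, y2); output here as a pair."""
    return lambda x, y: (C1(x, y[0]), C2(x, y[1]))

C1 = lambda x, y1: (x ^ y1) & 0xFF       # hypothetical toy conductor
C2 = lambda x, y2: (x * 31 + y2) % 256   # hypothetical toy conductor
C = concat(C1, C2)
print(C(0xAB, (0x0F, 7)))  # (164, 188)
```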

In Figure 3.4, we illustrate the concatenation of the functions C1 and C2.

Figure 3.4: Conductor concatenation

If we require that C1 and C2 are strong conductors with suitable parameters, then we can construct a conductor with much less entropy loss than the original conductors have. We state this property in the next lemma, and we will make use of it in Sections 5.1 and 5.2.

Lemma 3.21 (conductor concatenation). Let s > 0. Given two conductors C1 and C2, where

• C1 is a strong extracting (n, k_min, k_max) × (d1) →ε1 (m1, k1(k′)) conductor with entropy loss ∆1(k′) = k′ − k1(k′) for k′ ∈ [k_min, k_max], and k1(·) a monotone growing function with k1(k_max) = m1,

• and C2 is a strong extracting (n, ∆1(k_min) − s, ∆1(k_max) − s) × (d2) →ε2 (m2, k2(k″)) conductor with entropy loss ∆2(k″) = k″ − k2(k″) for k″ ∈ [∆1(k_min) − s, ∆1(k_max) − s], and k2(·) a monotone growing function with k2(∆1(k_max) − s) = m2,

their concatenation (C1||C2) is a strong (n, k_min, k_max) × (d1 + d2) →ε (m1 + m2, k_m(k′)) conductor with error ε = (1/(1 − 2^{−s})) · ε1 + ε2, entropy loss ∆(k′) = ∆2(∆1(k′) − s) + s and k_m(k′) = k1(k′) + k2(∆1(k′) − s). Furthermore, the final conductor (C1||C2) is injective if at least one of the two conductors C1 and C2 is injective.

Proof. First, we show why ∆(k′) = ∆2(∆1(k′) − s) + s: When we apply the first extracting conductor C1, we get an entropy loss of at most ∆1(k′). We show that for all s > 0, there is still ∆1(k′) − s min-entropy in X which has not been extracted. The second conductor C2 extracts ∆1(k′) − s − ∆2(∆1(k′) − s) min-entropy from X. Therefore, the remaining entropy loss is just ∆2(∆1(k′) − s) + s.

It remains to show that a k′-source X still has ∆1(k′) − s unused min-entropy in it after applying C1. We define a set of bad inputs for which the output's probability under C1 is smaller than 2^{−k1(k′)} by a factor of 2^s: Let X be a k′-source and let BAD be the set of pairs (u, z) ∈ {0,1}^{d1} × {0,1}^{m1} such that

P[C1(X, u) = z] < 2^{−(k1(k′)+s)}.    (3.3.1)

We will now show that for all good pairs, i.e. every (u, z) ∉ BAD, the distribution P_{X|C1(X,u)=z} still has at least ∆1(k′) − s min-entropy, and thus C2 can be applied without any problems.

Let (u, z) ∉ BAD. Then for every x with C1(x, u) = z we have

P_{X|C1(X,u)=z}[x] = P_X[x] / P[C1(X, u) = z] ≤ 2^{−k′} / 2^{−(k1(k′)+s)} = 2^{−(∆1(k′)−s)},

where at the last step we used ∆1(k′) = k′ − k1(k′) because C1 is a strong conductor. Hence, the distribution P_{X|C1(X,u)=z} has min-entropy of at least ∆1(k′) − s.

Furthermore, for every good pair, we have that the conditional distribution of (U_{d2}, C2(X, U_{d2})) given that (U_{d1}, C1(X, U_{d1})) = (u, z) is ε2-close to (U_{d2}, Y2), where Y2 over {0,1}^{m2} is some distribution with min-entropy k2(∆1(k′) − s).

We now show that the probability of (u, z) being a pair in BAD is relatively small. For all (u, z) ∈ BAD we have that P[(U_{d1}, C1(X, U_{d1})) = (u, z)] < 2^{−d1−(k1(k′)+s)}. Let Y1 over {0,1}^{m1} be a distribution with min-entropy k1(k′). Then we have for every (u, z) ∈ BAD

P[(U_{d1}, Y1) = (u, z)] = 2^{−d1−k1(k′)} > 2^s · P[(U_{d1}, C1(X, U_{d1})) = (u, z)]
⇒ P[(U_{d1}, Y1) ∈ BAD] ≥ 2^s · P[(U_{d1}, C1(X, U_{d1})) ∈ BAD].

By using the definition of the statistical difference, we get for the distance between (U_{d1}, Y1) and (U_{d1}, C1(X, U_{d1}))

|(U_{d1}, Y1) − (U_{d1}, C1(X, U_{d1}))| ≥ P[(U_{d1}, Y1) ∈ BAD] − P[(U_{d1}, C1(X, U_{d1})) ∈ BAD]
≥ (2^s − 1) · P[(U_{d1}, C1(X, U_{d1})) ∈ BAD].    (3.3.2)

Additionally, because C1 is a strong conductor with error at most ε1, we have

ε1 ≥ |(U_{d1}, Y1) − (U_{d1}, C1(X, U_{d1}))|    (3.3.3)

and hence, by combining (3.3.2) and (3.3.3),

P[(U_{d1}, C1(X, U_{d1})) ∈ BAD] ≤ ε1 / (2^s − 1).

Finally, it remains to show that (C1(X, U_{d1}), U_{d1}, C2(X, U_{d2}), U_{d2}) is (ε1/(1 − 2^{−s}) + ε2)-close to (Y1, U_{d1}, Y2, U_{d2}).

We know that (C1(X, U_{d1}), U_{d1}) is ε1-close to (Y1, U_{d1}) because C1 is strong. Furthermore, we showed above that the probability of bad inputs for C2 is just ε1/(2^s − 1), and for the remaining 1 − ε1/(2^s − 1) fraction of inputs we know that (C2(X, U_{d2}), U_{d2}) is ε2-close to (Y2, U_{d2}) because C2 is a strong conductor. Overall, (C1(X, U_{d1}), U_{d1}, C2(X, U_{d2}), U_{d2}) is (ε1 + ε1/(2^s − 1) + ε2)-close to (Y1, U_{d1}, Y2, U_{d2}) because of the triangle inequality for the statistical difference and because (1 − ε1/(2^s − 1)) · ε2 ≤ ε2. Rewriting gives ε1 + ε1/(2^s − 1) + ε2 = ε1/(1 − 2^{−s}) + ε2.

Furthermore, if at least one of the two conductors is injective, the overall construction must be injective because already a subpart of the output is injective.

We can also define a different version of the conductor concatenation, according to [RRV99], which states a concatenation for non-strong extracting conductors. We will not use this kind of concatenation in our constructions discussed in the later sections. But for completeness, we also state this concatenation, which is interesting if one wants to concatenate non-strong conductors, and we use the operator ∦ to clarify the distinction from the operator of Definition 3.20.

Definition 3.22 (adapted conductor concatenation). Let C1 : {0,1}^n × {0,1}^{d1} → {0,1}^{m1} and C2 : {0,1}^{n+d1} × {0,1}^{d2} → {0,1}^{m2} be two functions. Then we denote their concatenation (C1 ∦ C2) : {0,1}^n × {0,1}^{d1+d2} → {0,1}^{m1+m2} as

(C1 ∦ C2)(x, (y1, y2)) = C1(x, y1) || C2((x, y1), y2).

Note that the difference is that C2 is applied to both inputs (x, y1) of C1 rather than just to x.

Also for this kind of operation, we can state properties of the concatenation if the basic conductors have suitable parameters.

Lemma 3.23 (adapted conductor concatenation). Let s > 0. Given two conductors C1 and C2, where

• C1 is an extracting (n, k_min, k_max) × (d1) →ε1 (m1, k1(k′)) conductor with entropy loss ∆1(k′) = k′ + d1 − k1(k′) for k′ ∈ [k_min, k_max], and k1(·) a monotone growing function with k1(k_max) = m1,

• and C2 is an extracting (n + d1, ∆1(k_min) − s, ∆1(k_max) − s) × (d2) →ε2 (m2, k2(k″)) conductor with entropy loss ∆2(k″) for k″ ∈ [∆1(k_min) − s, ∆1(k_max) − s], and k2(·) a monotone growing function with k2(∆1(k_max) − s) = m2,

their concatenation (C1 ∦ C2) is an (n, k_min, k_max) × (d1 + d2) →ε (m1 + m2, k_m(k′)) conductor with error ε = (1/(1 − 2^{−s})) · ε1 + ε2 and entropy loss ∆(k′) = ∆2(∆1(k′) − s) + s. Furthermore, the final conductor (C1 ∦ C2) is injective if at least one of the two conductors C1 and C2 is injective.

Proof sketch. The proof is analogous to the proof of Lemma 3.21. The differences are that the set BAD now contains elements z ∈ {0,1}^{m1} such that

P[C1(X, U_{d1}) = z] < 2^{−(k1(k′)+s)}

and that we get

P_{X,U_{d1}|C1(X,U_{d1})=z}[x] = P_X(x) · 2^{−d1} / P[C1(X, U_{d1}) = z] ≤ 2^{−(∆1(k′)−s)}.

The remaining parts of the proof are done as in the proof of Lemma 3.21, but without concatenating the used truly random strings, respectively concatenating the uniform distributions.

3.3.3 Constructing Somewhere-Conductors by Conductor Cascading

We now show how to get a somewhere conductor C by cascading a strong extracting conductor C1 : {0,1}^n × {0,1}^{d1} → {0,1}^{d2} with a strong conductor C2 : {0,1}^n × {0,1}^{d2} → {0,1}^m. This cascading differs from the cascading defined in Section 3.3.1 and will be of special interest in Section 5.4, where we give a concrete construction of an expander graph with small left-degree. The construction of somewhere-conductors introduced in this section and the proof in Section 3.3.3 are based on [NT99, BJST03, MT07].

First, we introduce a new notation: For a string x ∈ {0,1}^n, let x_{[a,b]} be the string consisting of the bits x_a, x_{a+1}, ..., x_{b−1}, x_b, with extra 0's appended to make |x_{[a,b]}| = n. If b < a, then we set x_{[a,b]} = 0^n. We define C : {0,1}^n × {0,1}^{d1} → {0,1}^{n·(d1+d2+m)} such that C(x, y) = C^(1)(x, y) || ... || C^(n)(x, y), where for all 1 ≤ i ≤ n we have

z^(i)_1 := y,  z^(i)_2 := C1(x_{[i,n]}, z^(i)_1),  z^(i)_3 := C2(x_{[1,i−1]}, z^(i)_2)

and set C^(i)(x, y) := z^(i)_1 || z^(i)_2 || z^(i)_3 ∈ {0,1}^{d1+d2+m}.

In Figure 3.5, an illustration of the construction of the i-th output C^(i)(x, y) is given, where the gray areas mark the padding with zeros.
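The following Python sketch mirrors this construction with bit lists; x_ab implements the zero-padded substring x_{[a,b]}, and the functions plugged in for C1 and C2 are hypothetical placeholders, since the sketch only illustrates the wiring of the blocks C^(i).

```python
def x_ab(x, a, b):
    """x_[a,b]: bits x_a..x_b, zero-padded to length n; all-zero if b < a."""
    n = len(x)
    bits = x[a - 1:b] if a <= b else []
    return bits + [0] * (n - len(bits))

def somewhere(C1, C2, x, y):
    """C(x, y) = C^(1)(x, y) || ... || C^(n)(x, y), as a list of triples."""
    blocks = []
    for i in range(1, len(x) + 1):
        z1 = y
        z2 = C1(x_ab(x, i, len(x)), z1)
        z3 = C2(x_ab(x, 1, i - 1), z2)
        blocks.append((z1, z2, z3))      # C^(i)(x, y) = z1 || z2 || z3
    return blocks

# Placeholder "conductors" that just XOR input bits into the seed.
xor_fold = lambda x, z: [b ^ x[i % len(x)] for i, b in enumerate(z)]
print(somewhere(xor_fold, xor_fold, x=[1, 0, 1, 1], y=[0, 1])[0])
```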

Lemma 3.24. Let ν > 0 be given, and let C be constructed as above. If C1 is a strong (n, 0, d2 − a1) × (d1) →ε1 (d2, k′ ↦ k′ + a1) conductor, and C2 is a strong (n, 0, k_max) × (d2) →ε2 (m, k′ ↦ k′ + a2) conductor, then C is a σ-somewhere (n, 0, d2 − a1 + k_max + ν) × (d1) →ε1+ε2 (n·(m + d1 + d2), k′ ↦ k′ + a) conductor with σ = 7n · 2^{−ν/3} and a = min{a1, a1 + a2} + d1 − ν.

Proof of Lemma 3.24. Let X be a k-source with k ≤ d2 − a1 + k_max + ν. We distinguish two cases.

Case 1. H∞(X) = k = k̃ + ν with k̃ ≤ d2 − a1.
The distribution X has less than d2 − a1 min-entropy, up to a small summand ν. Therefore, we


Figure 3.5: Construction of C^(i)(x, y)

just set P_{I_{P_X}(X)}[1] = 1 and get (Z^(1)_1, Z^(1)_2) = (U_{d1}, C1(X, U_{d1})), which is a (d1 + k̃ + a1, ε1)-source. Since k̃ = k − ν, (Z^(1)_1, Z^(1)_2) is a (d1 + k + a1 − ν, ε1)-source, too. When we append Z^(1)_3, we get the distribution (Z^(1)_1, Z^(1)_2, Z^(1)_3), which has at least the min-entropy of (Z^(1)_1, Z^(1)_2) and thus is also a (k + a1 + d1 − ν, ε1)-source.

Case 2. H∞(X) = k = d2 − a1 + k̃ + ν with k̃ ≤ k_max.
For this case, we will show that there exists a selector function I_{P_X} : {0,1}^n → {1, ..., n} ∪ {⊥} such that

1. P_{I_{P_X}(X)}[⊥] = σ ≤ 7n · 2^{−ν/3},

2. if P_{I_{P_X}(X)|X_{[1,i−1]}}[i, x_{[1,i−1]}] > 0, then H∞(X_{[i,n]} | I_{P_X}(X) = i ∧ X_{[1,i−1]} = x_{[1,i−1]}) ≥ d2 − a1,

3. H∞(X_{[1,i−1]} | I_{P_X}(X) = i) ≥ k̃.

Proof for Point 1. Let ζ1 = 2ζ2, ζ2 = 2ζ3 and ζ3 = 2−ν/3. Further, we will dene the selectionfunction I(X) similar to [NT99]. Let the function f : 0, 1n → 1, ..., n return the last i oninput x ∈ X such that the remaining block x[i,n] is still random enough, i.e.

P[X[i,n] = x[i,n]|X[1,i−1] = x[1,i−1]] ≤ (ζ2 − ζ3) · 2−(d2−a1). (3.3.4)

Some splitting points i are rare and might lead to a strange behavior. Let BAD be the setof all bad x. We dene x to be in BAD if f(x) = i and

• Px′∈X [f(x′) = i] ≤ ζ1 or

• Px′∈X [f(x′) = i|x[1,i−1] = x′[1,i−1]] ≤ ζ2 or

• Px′∈X[x′i = xi|x[1,i−1] = x′[1,i−1]

]≤ ζ3

We dene IPX (X) such that it lters this bad cases:

IPX (x) =⊥ x is badf(x) otherwise.

23

CHAPTER 3. EXPANDERS AND CONDUCTORS

Putting everything together, we get P(x ∈ BAD) = PIPX (X)[⊥] = n(ζ1 + ζ2 + ζ3) ≤ 7n ·2−ν/3.

Proof for Point 2. To show Point 2, we use the following lemma which was shown in AppendixC of [NT99].

Lemma 3.25. For any i and x[1,i−1], if

Px′∈X [IPX (x′) = i|x′[1,i−1] = x[1,i−1]] > 0

then Px′∈X [IPX (x′) = i|x′[1,i−1] = x[1,i−1]] > ζ2 − ζ3, where ζi are dened as above.

For any x such that IPX (x) = i, we have

P[X[i,n] = x[i,n]|IPX (X) = i ∧X[1,i−1] = x[1,i−1]]

≤P[X[i,n] = x[i,n]|X[1,i−1] = x[1,i−1]]P[IPX (x) = i|X[1,i−1] = x[1,i−1]]

≤ (ζ2 − ζ3) · 2−(d2−a1)

P[IPX (x) = i|X[1,i−1] = x[1,i−1]](3.3.5)

≤ (ζ2 − ζ3) · 2−(d2−a1)

(ζ2 − ζ3)= 2−(d2−a1) (3.3.6)

In the rst step we used P[X|Y ] ≤ P[X]/P[Y ], at Step (3.3.5) we applied the requirement(3.3.4) of function f and in (3.3.6) we used Lemma 3.25.

Hence, it immediately follows that H∞(X[i,n] = x[i,n]|IPX (X) = i ∧ X[1,i−1] = x[1,i−1]) ≥d2 − a1.

Proof for Point 3. We x an x with IPX (x) = i and get

P[X[1,i−1] = x[1,i−1]] =P[X[1,n] = x[1,n]]

P[X[1,n] = x[1,n]|X[1,i−1] = x[1,i−1]]

≤ 2−(d2−a1+k+ν)

P[Xi = xi|X[1,i−1] = x[1,i−1]] · P[X[i+1,n] = x[i+1,n]|X[1,i−1] = x[1,i−1]](3.3.7)

≤ 2−(d2−a1+k+ν)

ζ3 · (ζ2 − ζ3)2−(d2−a1)(3.3.8)

≤ 2−k−ν

ζ3 · (ζ2 − ζ3), (3.3.9)

where in Step (3.3.7) we used the fact that H∞(X) = d2 − a1 + k + ν and in Step (3.3.8) weapplied the Requirement (3.3.4) of f(x) = i and that x /∈ B. For the following steps, we needa lemma from [NT99]:

Lemma 3.26. For any i, if Px∈X [IPX (x) = i] > 0, then Px∈X [IPX (x) = i] ≥ ζ1− ζ2− ζ3, whereζi are dened as above.

24

3.4. CONSTRUCTING EXPANDER GRAPHS FROM CONDUCTORS

Finally, to prove H∞(X[1,i−1]|IPX (X) = i) ≥ k, we show that P[X[1,i−1] = x[1,i−1]|IPX (x) =i] is at most k:

P[X[1,i−1] = x[1,i−1]|IPX (x) = i] ≤P[X[1,i−1] = x[1,i−1]]

P[IPX (x) = i](3.3.10)

≤ 2−k−ν

ζ3 · (ζ2 − ζ3) · (ζ1 − ζ2 − ζ3)(3.3.11)

=2−k−ν

2−ν/3 · (2 · 2−ν/3 − 2−ν/3) · (4 · 2−ν/3 − 2 · 2−ν/3 − 2−ν/3)

= 2−k.

At (3.3.10) we used P[X|Y ] ≤ P[X]/P[Y ] and for Step (3.3.11) we applied Lemma 3.26 andEquation (3.3.9).

According to Point 2, we can conclude that for all x[1,n] with the property that

PIPX (X)|X[1,i−1][i, x[1,i−1]] > 0

we have that the distribution PZ

(i)1 Z

(i)2 |IPX (X)=i∧X[1,i−1]=x[1,i−1]

is ε1-close to a (d1 + d2)-source

because the strong conductor C1 has a (d2 − a1)-source as input. Further, we know thatPZ

(i)1 Z

(i)2 X[1,i−1]|IPX (X)=i

is a (d1 +d2 + k, ε1)-source because of Point 3. Applying the conductor

C2 to the k-source X[1,i−1] leads to being PZ(i)1 Z

(i)2 C2(X[1,i−1],Z

(i)2 )|IPX (X)=i

a (d1+d2+k+a2, ε1+

ε2)-source. Using the precondition k = d2 − a1 + k + ν we get that PZ

(i)1 Z

(i)2 Z

(i)3 |IPX (X)=i

is a

(k + a1 + a2 + d1 − ν, ε1 + ε2)-source.

Putting the two analyzed cases together leads to a σ-somewhere conductive (n, 0, k) ×(d1)→ε (m, k′ 7→ k′ + a) conductor with

• k = d2 − a1 + k2 + ν

• a = min a1 + d1 − ν, a1 + a2 + d1 − ν

• ε = max ε1, ε1 + ε2 = ε1 + ε2

• σ = max 0, 7n · 2−ν/3 = 7n · 2−ν/3

which concludes the proof.

3.4 Constructing Expander Graphs From Conductors

As already mentioned in the previous sections, conductors are expander graphs. In this section,we show how to achieve this transformation from a conductor to an expander graph. Weknow that every (2n,Kmin,Kmax) × (2d) → (2m, γ) expander graph G = (V1,V2, E) can beinterpreted as a function FG : 0, 1n × 0, 1d → 0, 1m which calculates the neighbors ofa vertex v ∈ V1. If we compare function FG with an (n, kmin, kmax) × (d) →ε (m, km(k′))conductor C, we see that the conductor has the same form 0, 1n × 0, 1d → 0, 1m andx ∈ 0, 1n interpreted as a vertex v ∈ V1, the conductor C would calculate the ith neighbor

25

CHAPTER 3. EXPANDERS AND CONDUCTORS

of v where i is randomly chosen from 0, 1d. As we will show later in this section, the lowerand upper bound Kmin and Kmax and the min-entropy bounds kmin and kmax are highlycorrelated, namely we will have Kmin = 2kmin and Kmax = 2kmax . Note that in general, theconverse is not true: Not every expander graph is a conductor.In the following, we will show how to transform a conductor into an expander graph and

also the special case where the conductor is a somewhere conductor.

3.4.1 Transforming Conductors into an Expander Graph

In this section, we show how to generally transform a (strong) conductor into an expandergraph. We start with the case where we have a strong conductor C: To get an expandergraph, we interpret every input x ∈ 0, 1n as a vertex v1 ∈ V1, and every input y ∈ 0, 1das the label for the yth neighbor of v1. The output C(x, y) Ud ∈ 0, 1m+d of the strongconductor C will then describe a vertex v2 ∈ V2 being the yth neighbor of vertex v1.The exact relation between the parameters of the strong conductor C and the achieved

expander graph G is given in the next Theorem 3.27.

Theorem 3.27. Let C : 0, 1n × 0, 1d → 0, 1m be a function. The bipartite graphG = (V1,V2, E) with |V1| = 2n, |V2| = 2m+d and

(v1, (y, v2)) ∈ E ⇔ C(v1, y) = v2

is an (explicit) (injective) (2n,Kmin,Kmax) × (D)γ→ (2m+d) expander graph with Kmin =

2kmin , Kmax = 2kmax , left-degree D = 2d and expansion factor γ = (1− ε)2d+α if C is a strong(explicit) (injective) (n, kmin, kmax)× (d)→ε (m, k′ 7→ k′ + α) conductor. Note that α can benegative.

Proof. Let X be a k′-source with k′ ∈ [kmin, kmax] and |X| ∈ [2kmin , 2kmax ]. According tothe strong conductor property, we know that the distribution A = C(X,Ud) Ud is ε-closeto a distribution A′ on 0, 1m × 0, 1d with H∞(A′) ≥ k′ + α + d. The set of neighbors isΓ(X) = supp(A). This leads to

ε ≥ |A−A′| ≥∑

w∈Γ(X)

|A(w)−A′(w)| = 1−∑

w∈Γ(X)

A′(w) ≥ 1− |Γ(X)| · 2−(k′+α+d)

By rearranging terms we get the wished expansion property: |Γ(X)| ≥ (1 − ε)2k′+α+d ≥

(1− ε)2d+α|X|. Furthermore, it is clear from the transformation that if the conductor C isexplicit or injective than so the expander graph.

Even if the conductor is not strong, we get an expander graph but with a worse expansionfactor.

Corollary 3.28. Let C : 0, 1n × 0, 1d → 0, 1m be a function. The bipartite graphG = (V1,V2, E) with |V1| = 2n, |V2| = 2m+d and

(v1, (y, v2)) ∈ E ⇔ C(v1, y) = v2

is an (explicit) (injective) (2n,Kmin = 2kmin ,Kmax = 2kmax) × 2d → (2m+d, γ = (1 − ε)2α)expander graph if C is an (explicit) (injective) (n, kmin, kmax) × (d) →ε (m, k′ 7→ k′ + α)conductor. Note that α can be negative.

Proof sketch. The proof is analogue to the proof of Theorem 3.27. Just use the distributionA = C(X,Ud) instead, i.e. do not append Ud to the output.

26

3.5. PROBABILISTIC EXISTENCE PROOFS

3.4.2 Transforming Somewhere Conductors into an Expander Graph

In this section, we show how to transform a σ-somewhere conductor into an expander graph.This special case of a conductor is of special interest because as we will see later in Section 5.4,if we rst apply the somewhere conductor composition of Section 3.3.3 to special conductorsand second, transform the received somewhere conductor to an expander graph, we get a muchsmaller left-degree than if we had directly transformed the special conductors to an expandergraph.

Lemma 3.29. If C : 0, 1n × 0, 1d → 0, 1tm is a σ-somewhere (n, kmin, kmax)× (d)→ε

(tm, k′ 7→ k′ + a) conductor with σ < 1, then graph G = (V1,V2, E) with V1 = 0, 1n,V2 =0, 1m and

(x, z) ∈ E if ∃i ∈ [t], y ∈ 0, 1d : C(i)(x, y) = z

is a (2n, 2kmin , 2kmax)×D → (2m, γ = 2a(1− ε))-expander graph with left-degree D = t · 2d.

Proof. The proof is similar to the proof of Theorem 3.27. Let X be a k′-source with kmin ≤k′ ≤ kmax and |X| ≤ 2kmax . Let IPX : 0, 1n → 1, .., t ∪ ⊥ be the selection functionaccording to the distribution X4. Let i be such that PIPX (X)(i) > 0 and let A be a (k′ + a)-source which is ε-close to PC(i)(X,Ud)|IPX (X)(X)=i. We dene the set of neighbors as Γ(X) =supp(A). This leads to

ε ≥ |PrC(i)(X,Ud)|IPX (X)=i −A(w)| ≥∑

w∈Γ(X)

|PrC(i)(X,Ud)|IPX (X)=i(w)−A(w)|

= 1−∑

w∈Γ(X)

A(w) ≥ 1− |Γ(X)|2−(k′+a)

By rearranging terms we get the wished expansion property: |Γ(X)| ≥ (1 − ε)2k′+a ≥

(1− ε)2a|X|.

3.5 Probabilistic Existence Proofs of Conductors and Expanders

In this section, we prove non-constructively the existence of injective (lossless) conductorsand injective extracting conductors with short seed length and almost ideal entropy loss. Theexistence of such conductors is often stated in the literature but hardly ever proven. Especiallythe result we will give in Section 3.5.1 seems to never been shown in a publication. In Section3.5.3, we show non-constructively the existence of an injective expander graph for an arbitraryexpansion factor. All the existence proofs in this section are done with the help of the so calledProbabilistic Method, where the existence is implied if the probability for the non-existenceis strictly smaller than 1. But all this proofs are non-constructively and it is still an openquestion if one can give an explicit construction of such good conductors and expanders.

General Structure of the Proofs. We show the existence of such almost ideal injectiveconductors and injective expanders by a non-constructive probabilistic existence proof. Thestructure of the dierent proofs is quite similar: We assume that a randomly chosen functionwith the same domain and range as a conductor has not the wished conductor properties.

4A possible construction of IPX (X) is given in the proof of Lemma 3.24

27

CHAPTER 3. EXPANDERS AND CONDUCTORS

Then, we show that the probability for not being a conductor with this properties is strictlysmaller than 1, which implies the existence of the wished conductor. In particular, we willshow the existence of the wished conductors for the special case, where the input distributionX is a at k-source. This restriction to at sources is sucient because every k-source is aconvex combination of at k-sources, and hence, the proof stated for at k-sources impliesthe validity for general k-sources. For the expander graph existence proof, we assume thata randomly chosen graph has not the wished expansion properties. Then, we show that theprobability of being a badly chosen graph is strictly smaller than 1. Hence, there is a non-zeroprobability that a graph with the wished expansion properties must exist.

More detailed, we will show in Section 3.5.1 that for a randomly chosen function C :0, 1n × 0, 1d → 0, 1m, we have

P[C not injective] + P[C not a conductor] < 1

and in Section 3.5.2 we show accordingly,

P[C not injective] + P[C not an extracting conductor] < 1

and nally, in Section 3.5.3 we show for a randomly chosen graph G that

P[G not injective] + P[G not an expander graph] < 1.

In the rst two cases, we will show that P[not injective] < 1/2 and the corresponding secondprobability is strict smaller than 1/2, and for the third case we will show that P[not injective] <2/3 and the second summand is strict smaller than 1/3. Hence, the overall probability mustbe strict smaller than 1.

Injectivity. In particular, the probability for not being injective is always calculated in thesame way. We give now the general approach and instantiate it in the according sections.

First, recall that injectivity means that there exist no two distinct values x, x′ ∈ 0, 1n forwhich

C(x, 0) |C(x, 1) | · · · |C(x, 2d − 1) = C(x′, 0) |C(x′, 1) | · · · |C(x′, 2d − 1) (3.5.1)

is true, where C : 0, 1n × 0, 1d → 0, 1m 5. Let us assume that C is not injective. Then,there must be a collision C(x, u) = C(x′, u) for a u ∈ 0, 1d. We interpret the left-handside and the right-hand side of Equation (3.5.1) as a string of length m · 2d. In general, theprobability for having at least one collision between two strings is q22−r, where q is the numberof possible strings and r the length of a string. Here, the length of a string is r = m2d andthere are at most q = 2n dierent strings because we have 2n dierent values in 0, 1n. Thisgives for the collision probability

P[C not injective] = P[a collision occurs] ≤ 22n · 2−m2d . (3.5.2)

We will show in the according Sections 3.5.1, 3.5.2 and 3.5.3 that this probability of notbeing injective is smaller than 1/2 respectively smaller than 2/3.

5For the expander graph, interpret graph G as a function as described in Lemma 3.3

28

3.5. PROBABILISTIC EXISTENCE PROOFS

Conductor Properties. For the proofs in Section 3.5.2 and in Section 3.5.2 we take thefollowing approach to bound the probability of not being a (lossless) conductor respectivelyan extracting conductor: We will assume the existence of a testing set T ⊆ 0, 1m whichallows us to distinguish the output of C(X,Ud) from a km(k′)-source with success > ε, withX being a at k′-source.Then, we x a at k′-source and the testing set T and dene a new random variable such

that we get an upper bound for the probability with the help of Cherno's bound (Lemma 2.7).Afterwards, we sum up over all possible distributions X having min-entropy in the intervall[kmin, kmax] and over all possible testing sets T to get the nal union bound for the probability.

We start by the non-constructively existence proof of an injective (lossless) conductor.

3.5.1 Existence of Injective (Lossless) Conductor

In this section, we prove non-constructively the existence of the following conductor.

Theorem 3.30. For every n, kmin ≤ kmax ≤ n and ε ∈ (0, 1), there exists an injective(n, kmin, kmax) × (d) →ε (m, km(k′)) conductor C with km(k′) = k′ + d − q for a constantq ≥ 0 and k′ ∈ [kmin, kmax], seed length d = log(n) + log(1/ε) + O(1) and output lengthm = km(kmax) + log(1/ε) +O(1).

Note that C can be lossless (setting q = 0), i.e. the output min-entropy is km(k′) = k′ + dwith output length m = kmax + d+ log(1/ε) +O(1).

Proof. To show the existence of such an injective conductor by showing that

P[C not a conductor] + P[C not injective] < 1,

we will x the min-entropy to a value k′ and show afterwards, that for every min-entropyk′ ∈ [kmin, kmax] we get a probability smaller than 1. We set km := km(k′) = k′ + d− q for aq ≥ 0 and assume that m′ := km(k′) + log 1/ε+ a and d = log(n) + log(1/ε) + b for constantsa and b. Note that here, m′ depends on k′ and describes the rst m′ bits of the output. Wewill show that already this rst m′ bits of the output of length m has km(k′) min-entropyand we can nally set the output length to m for all input min-entropies [kmin, kmax] becauseenlarging the output does not reduce the output min-entropy. Furthermore, we dene A = 2a,B = 2b, N = 2n , K ′ = 2k

′and M ′ = 2m

′. Hence, M ′ = 2km+log 1/ε+a = KmA

ε = K′DAQε for

Km = 2km , D = 2d and Q = 2q.We will assume that there exists no such conductor C fullling Theorem 3.30, or in other

words,∃ k′-source X : ∀ km-source Y ⊆ 0, 1m : |C(X,Ud)− Y | > ε,

which is equivalent to the following requirement by applying Lemma 2.13:

If C(X,Ud) = Z is not ε-close to a distribution with min-entropy km = log(|T |/ε) , then∃T ⊆ 0, 1m with |T | = εKm = εDK ′/Q, s.t. P[C(X,Ud) ∈ T ] > ε and X being a atk′-source.

Therefore, we have

P[C(X,Ud) ∈ T ] =|(x, u) ∈ X × Ud |C(x, u) ∈ T |

K ′D> ε.

29

CHAPTER 3. EXPANDERS AND CONDUCTORS

Let event B be such that

|(x, u) ∈ X × Ud |C(x, u) ∈ T | > εK ′D.

We dene now a new random variable Xx,u: It is 1 if C(x, u) ∈ T and 0 otherwise. We

know that P[Xx,u = 1] = |T |M ′ = εKm

AKm/ε= ε2

A . Hence, the expected value of the sum∑

x,uXx,u

is µ = K ′D · |T |M ′ = K′Dε2

A .

This gives for the probability of occurring event B

P[B] = P[∑

Xx,u > εK ′D]≤ P

[∣∣∣∑Xx,u − µ∣∣∣ > εK ′D − µ

]= P

[∣∣∣∑Xx,u − µ∣∣∣ > (εK ′D

µ− 1)· µ]

and we set

δ :=εK ′Dµ− 1 =

εK ′DAK ′Dε2

− 1 =A

ε− 1.

Using the Cherno bound of Lemma 2.7

P[∣∣∣∑Xx,u − µ

∣∣∣ > δµ]< 2e

−µδ23 = 2−γ(µδ2) for a constant γ

gives the upper bound

2−γ(µδ2) = 2−γ(DK′ ε2

A·(Aε−1)2) = 2−γDK

′(A−ε)2 ≤ 2−γDK′,

where at the last step, we xed A = 2 such that A− ε lies in the interval (1, 2) and the termmaximizes for A− ε = 1 and thus, we can replace (A− ε)2 with 1.

We regard now all possible at sources X having min-entropy in [kmin, kmax] and hence,|X| ∈ [Kmin,Kmax], where Kmin = 2kmin and Kmax = 2kmax . We calculate the union boundof the probability by summing up over all possible distributions X and all possible sets T .We get

30

3.5. PROBABILISTIC EXISTENCE PROOFS

P[C not a conductor] ≤Kmax∑i=Kmin

((N

i

)(M ′

|T |

)· 2−γDi

)

≤Kmax∑i=Kmin

((N

i

)(M ′

|T |

)· 2−γDi

)

≤Kmax∑i=Kmin

((N

i

)(M ′

εiD/Q

)· 2−γDi

)

≤Kmax∑i=Kmin

(N i ·

(eM ′QεiD

)εiD/Q· 2−γDi

)

≤Kmax∑i=Kmin

(N i ·

(eA

ε2

)εiD· 2−γDi

)(1)

≤Kmax∑i=Kmin

(N ·

(eA

ε2

)εD· 2−γD

)i

=Kmax∑i=Kmin

(2n · 2εD(log e+logA+2 log(1/ε)) · 2−γD

)i !< 1/2

where at Step (1) we used M ′ = K′DAεQ and

(eAε2

)εiD/Q ≤ ( eAε2

)εiD. Knowing that logA = 1

and inserting the value for d leads to D = 2logn+log(1/ε)+b = nBε and gives the requirement:

Kmax∑i=Kmin

(2n(1+B(log e+1+2 log(1/ε)− γB

ε)))i !< 1/2

To fulll this requirement, it is sucient to show that the next requirement holds:

2n(1+B(log e+1+2 log(1/ε)− γBε

) !<

14

(I)

becauseKmax∑i=Kmin

(14

)i<

∞∑i=1

(14

)i=

13<

12.

where we used∑∞

i=11pi

= 11−p − 1 for a 0 < p < 1.

Requirement I is equivalent to the requirement

n

(1 +B

(log e+ 1 + 2 log(1/ε)− γB

ε

))︸ ︷︷ ︸

=:ω

!< −2.

31

CHAPTER 3. EXPANDERS AND CONDUCTORS

We choose B > ε·(log e+1+2 log ε+1)γ > 3 which leads to ω < −2 and thus

ωn < −2n ≤ −2,

which is fullled if n > 0. Thus, Requirement I is satised.We show now that P[C not injective] is smaller than 1/2. Inserting 2d = nB

ε into Equation(3.5.2) gives us the requirement

P[C not injective] ≤ 22n−m′nBε

!<

12. (II)

Requirement II is equivalent to 2n− m′nBε + 1

!< 0. We know that ε ∈ (0, 1) and if we choose

B > 2 then Requirement II holds for all n,m′ > 0.Putting everything together, we have

P[C not an injective conductor] <∞∑i=1

(14

)i+

12

=13

+12< 1.

We showed that for an output length m′ depending on k′ where k′ is the input min-entropy,we get an injective conductor with km = km(k′) output min-entropy. We can set now theoutput length for all k′ ∈ [kmin, kmax] to m := m′(kmax) because m′(k′) ≤ m′(kmax) and whenthe conductor output contains km(k′) min-entropy in the rst m′(k′) output bits, it will alsocontain km(k′) min-entropy in the (possibly) longer output length m.Therefore, there must exist with non-zero probability an injective (n, kmin, kmax)× (d) →ε

(m, km(k′)) conductor C with k′ ∈ [kmin, kmax], seed length d = log n + log(1/ε) + O(1) andoutput length m = km(kmax) + log(1/ε) +O(1).

3.5.2 Existence of Injective Extracting Conductors

A special case which can not be shown by the proof in Section 3.5.1 is where the conductoris extracting. Therefore, we present a separate proof for this special case in this section. Wewill show non-constructively the existence of the following conductor:

Theorem 3.31. For every n, kmin ≤ kmax ≤ n and ε ∈ (0, 1), there exists an injectiveextracting (n, kmin, kmax) × (d) →ε (m, km(k′)) conductor C with k′ ∈ [kmin, k−max], seedlength d = log(n−kmin)+2 log(1/ε)+O(1) and output length m = kmax+d−2 log(1/ε)−O(1).

Note that this extracting conductor has optimal entropy loss up to a constant term accordingto Lemma 3.14.

Proof. As in Section 3.5.1, we x the input min-entropy to a value k′ and generalize it af-terwards for the case where the min-entropy lies in the interval [kmin, kmax]. For the outputlength, we assume that m′ := km(k′). Note that here, m′ depends on k′ and describes therst m′ output bits which are almost randomly distributed. We denote N = 2n, M ′ = 2m

′,

D = 2d and K ′ = 2k′. Let X be a at k′-source with support S, where |S| = K ′. We assume

that C is not an extracting conductor and therefore there must exist a testing set T such that

|P[C(X,Ud) ∈ T ]− P[Um′ ∈ T ]| > ε.

32

3.5. PROBABILISTIC EXISTENCE PROOFS

We know that P[Um′ ∈ T ] = |T |M ′ and

P[C(X,Ud) ∈ T ] =|(x, u) ∈ S × 0, 1d |C(x, u) ∈ T |

K ′D

hence, we have ∣∣∣∣ |(x, u) ∈ S × 0, 1d |C(x, u) ∈ T |K ′D

− P[Um′ ∈ T ]∣∣∣∣ > ε.

Let event B be such that∣∣∣∣(x, u) ∈ S × 0, 1d |C(x, u) ∈ T −K ′D · |T |M ′

∣∣∣∣ > ε ·K ′D

We dene now a new random variable Xx,u: It is 1 if C(x, u) ∈ T and 0 otherwise. We

know that P[Xx,u = 1] = |T |M ′ . Thus, the expected value of the sum

∑x,uXx,u is µ = K ′D · |T |M ′ .

Hence, we get

P[B] = P[|(x, u) |C(x, u) ∈ T − µ| > εK ′D

]= P

[∣∣∣∑Xx,u − µ∣∣∣ > εK ′D

]= P

[∣∣∣∑Xx,u − µ∣∣∣ > εM ′

|T | · µ]

Using the Cherno bound of Lemma 2.7

P[|∑

Xx,u − µ| > δµ] < 2e−µδ2

3 = 2−γ(µδ2)

for a constant γ, gives

2−γ(µδ2) = 2−γ(DK′|T |M′ ·

(εM′|T |

)2)

= 2−γDK′ε2 |T |

M′ ≤ 2−γDK′ε2 ,

where at the last step, we used the fact that |T |M ′ ≤ 1.For a xed k′ and a xed k′-source X we get the union bound for the probability that C is

not an extracting conductor:

P [C not an extracting conductor] ≤∑

T ⊆0,1m′P[B] ≤ 2M

′ · 2−γDK′ε2

= 2K′Dε2A2−γDK

′ε2 = 2−γ′DK′ε2

for the constant γ′ = γ −A.And for all possible at k′-sources X with k′ ∈ [kmin, kmax] and thus, |X| ∈ [Kmin =

2kmin ,Kmax = 2kmax ], we get

33

CHAPTER 3. EXPANDERS AND CONDUCTORS

P[C not an extracting conductor] ≤Kmax∑i=Kmin

((N

i

)· 2−γ′Diε2

)

≤Kmax∑i=Kmin

((Ne

i

)i· 2−γ′Diε2

)(1)

≤Kmax∑i=Kmin

(Ne

i· 2−γ′Dε2

)i

≤Kmax∑i=Kmin

(Ne

Kmin· 2−γ′Dε2

)i!<

12

where at Step 1 we used the upper bound(Ni

)≤(Nei

)iaccording to Lemma 2.19.

As in Section 3.5.1, we show the following Requirement I which implies the requirement ofP[C not an extracting conductor] < 1/2 :

Ne

K ′· 2−γε2D !

<14

(I)

For Requirement I, we have

Ne

Kmin· 2−γ′ε2D !

<14

log(Ne/Kmin)− γ′ε2D !< −2

n− kmin + log e+ 1!< γ′ε2D

1γ′

(n− kmin) +O(1)!< ε2D

log(n− kmin) +O(1)!< d+ 2 log ε

log(n− kmin) + 2 log(1/ε) +O(1)!< d

which is fullled by our choice of d = log(n− kmin) + 2 log(1/ε) +O(1). Hence, RequirementI is fullled and we get for P[C not an extracting conductor].

Kmax∑i=Kmin

(Ne

Kmin· 2−γ′Dε2

)i≤

Kmax∑i=Kmin

(14

)i<

∞∑i=1

(14

)i=

13<

12,

where we used∑∞

i=11qi

= 11−q − 1 for a 0 < q < 1.

For the probability of not being an injective conductor, we instantiate Equation 3.5.2 with2d = (n−k)B

ε2. Thus, we get the second requirement

P[C not injective] ≤ 22n−m′(n−k)Bε2

!<

12

(II)

34

3.5. PROBABILISTIC EXISTENCE PROOFS

Requirement II can be reformed to the requirement

2n− m′(n− k)Bε2

+ 1!< 0.

Knowing that ε2 ∈ (0, 1), it is sucient to choose the constant B such that B > 3nn−kmin

and Requirement II will be fullled for all n,m′ > 0.Putting everything together, we get

P[C not an injective extracting conductor] <∞∑i=1

(14

)i+

12

=13

+12< 1.

We showed that for an output length m′ depending on k′ where k′ is the input min-entropy,we get an injective extracting conductor with km(k′) = m′(k′) = m′ output min-entropy. Wecan set now the output length for all k′ ∈ [kmin, kmax] to m := m′(kmax) because m′(k′) ≤m′(kmax) and when the conductor output contains km(k′) min-entropy in the rst m′ outputbits, it will also contain km(k′) min-entropy in the (possibly) longer output length m. Inparticular, for input min-entropy kmax, we have that the output contains km(kmax) = mmin-entropy which fullls the requirement of being an extracting conductor.Hence, It follows that there must exist with non-zero probability an injective extracting

(n, kmin, kmax)× (d)→ε (m, km(k′)) conductor C with k′ ∈ [kmin, kmax], output min-entropykm(k′) = k′ + d− log(1/ε)− c, seed length d = log(n− kmin) + 2 log(1/ε) +O(1) and outputlength m = kmax + d− log(1/ε)− c for a constant c.

3.5.3 Existence of Injective Expanders

In the last two sections, we have non-constructively shown the existence of good conductors inthe sense of using a short seed. This conductors are expander graphs with left-degreeD ∈ Θ(n)and γ = (1− ε) ·D · 2−∆ where ε is the error and ∆ the entropy loss of the conductors. In thissection, we show non-constructively the existence of expander graphs which have a left-degreein O(n) for an arbitrary expansion factor. Namely, we prove the Theorem 3.32.

Theorem 3.32. There exists an injective (N,Kmin,Kmax) × (D)γ→ (M) expander graph

G = (V1,V2, E) with N = 2n, M = 2m and D = 2+γ log e+nm−log(Kmaxγ) + γ > 3 and γ ≤ D.

Proof. We assume that there exists no graph which is an expander graph as described inTheorem 3.32. We construct a random graph G by setting V1 = 0, 1n, V2 = 0, 1m andD = 2+γ log e+n

m−log(Kmaxγ) + γ > 3 and for every vertex v ∈ V1, we choose independently random Dneighbors in V2.Let S ⊆ V1 be a set with |S| = K for a K ∈ [Kmin,Kmax] . Assume G is not an expander

graph with expansion factor γ, i.e. there exist a S with Γ(S) < γ|S|. It follows that theremust exist a set T ⊆ V2 with |T | = γ|S| and Γ(S) ⊂ T .For the probability that a neighbor of S lies in T , we get |T |M = γ|S|

M = γKM , where M =

|V2| = 2m and the probability of all D ·K neighbors of S being in T is(γKM

)DK.

In total, the probability that there exists a set S withKmin ≤ |S| ≤ Kmax and |Γ(S)| < γ|S|is maximal

P[G is not an expander] ≤kmax∑

K=Kmin

((N

K

)·(M

γK

)(γK

M

)DK).

35

CHAPTER 3. EXPANDERS AND CONDUCTORS

With(NK

)≤ NK and

(MγK

)≤(eMγK

)γKfrom Lemma 2.19, we get

P[G is not an expander] ≤Kmax∑

K=Kmin

NK ·(eM

γK

)γK· (γKM

)DK

≤Kmax∑

K=Kmin

(NKeγK

(γK

M

)(D−γ)K)

≤Kmax∑

K=Kmin

(Neγ

(γK

M

)(D−γ))K

≤Kmax∑

K=Kmin

(Neγ

(γKmax

M

)(D−γ))K

≤Kmax∑

K=Kmin

(Neγ

(2log(γKmax)−m

)(D−γ))K

Inserting D = 2+γ log e+nm−log(Kmaxγ) + γ leads to

P[G is not an expander] ≤Kmax∑

K=Kmin

(Neγ

(2log(γKmax)−m

) −(2+γ log e+n)log(Kmaxγ)−m

)K

≤Kmax∑

K=Kmin

(Neγ

(2−2−γ log e−n

))K=

Kmax∑K=Kmin

(14

)K<∞∑K=1

(14

)K=

13

Thus, we get P[G is not an expander ] < 1/3. Because we are interested in injective expandergraphs, it remains to show that the probability of being not injective is smaller than 2/3.Replacing 2d with D in Equation (3.5.2), leads to the probability

22n · 2mD =N2

MD

!<

23,

where we set 2n = N and 2m = M .We assume that m ∈ O(n) and thus, there must exist a constant α ∈ ( 1

n , 1) such thatm = αn.We get

N2

M2= 2n(2−αD) < 2n(2−3·α) ≤ 2−n ≤ 1

2<

23,

where we used D > 3 from the preconditions and assumed an n > 0.Putting everything together leads to

P[G is not an injective expander] <12

+23

= 1

36

3.6. APPLICATION OF EXPANDER GRAPHS

Hence, with non-zero probability there exist an injective (N,Kmin,Kmax) × (D)γ→ (M)

expander graph G with left-degree D = 2+γ log e+nm−log(Kmaxγ) + γ for an arbitrary γ ≤ D.

3.6 Application of Expander Graphs

Expander Graphs have many applications such as interpreting them as input-restricting func-tions done in [MT07]. We already gave an overview of this application in Section 1.2. In thissection, We show how to interpret an expander graph as an input-restricting function family.Further, to be interesting for [MT07], the expander graph should be constructably in polyno-mial time and highly unbalanced, i.e. explicit expander graphs with |V1| = 2n |V2| = 2m

and most importantly for the application in [MT07], the graph should have a left-degreepolynomial in n.First, we recall the denition of In being an input-restricting function family.

Denition 3.33 (input-restricting function). Let ε = ε(n) ∈ (0, 1), r = r(n), δ = δ(n),m = m(n) be functions of n and let n > m, then a family In of functions E1, .., Er :0, 1n → 0, 1m is called (n, δ, ε)-input restricting if it satises the following two properties:

Injective: ∀x 6= x′ ∈ 0, 1n, ∃i ∈ 1, ..., r such that Ei(x) 6= Ei(x′).

Input-Restricting: For all subsets M1, ...,Mr ⊂ 0, 1m such that |M1| + ... + |Mr| ≤2m·(1−ε), we have∣∣x ∈ 0, 1n |Ei(x) ∈Mi for all i = 1, ..., r

∣∣ ≤ δ · (|M1|+ ...+ |Mr|).In is called explicit if r(n) is polynomial in n and if Ei(·) can be computed in poly(n) time.

It is clear that δ ≥ 1/r must hold. Furthermore, we are interested in In being explicit. Weshow now how to interpret an expander graph to get In.Let us assume that we have an explicit (injective) (2n, 0,K)× (D)

γ→ (2m) expander graphG = (V1,V2, E) with left-degree D in poly(n). As already mentioned in Lemma 3.3, suchan expander graph can be interpreted as a function FG : 0, 1n × 0, 1d → 0, 1m whichcalculates the ith neighbor for a vertex in V1. We dene Ei(x) := FG(x, i) being the i-th neighbor of x in the expander graph G, i.e. Ei : 0, 1n → 0, 1m for i = 1, ..., D. Letx(i) = xim+1, ..., x(i+1)m be the i-thm-bit substring of x where extra zeros are appended to x tomake it a multiple of m. Then, if G is not injective, we dene additionally ED+1, ..., ED+dn/meas ED+i(x) = x(i) for i = 1, ..., dn/me. We get In = E1, ..., Er, where r := D in the injectivecase and r := D + dn/me if G is not injective.We show now that this transformation of an expander graph to a family of functions In =

E1, ..., Er gives actually an input-restricting function family.

In is explicit: Because G is an explicit expander graph with polynomially-bounded left-degree D, we have that In is also explicit.

In is injective: If the expander graph G is injective, it trivially follows that In must beinjective, too. For the case, where G is not injective, we assume without loss of generality thatn is a multiple of m. Then, we have for all x ∈ 0, 1n that x = ED+1(x)||...||ED+dn/me(x).Let now x′ ∈ 0, 1n with x′ 6= x and hence, there must exist an i ∈ 1, ..., dn/me such thatED+i(x) 6= ED+i(x′). Thus, In must be injective.

37

CHAPTER 3. EXPANDERS AND CONDUCTORS

In is input-restricting: We show this by contradiction. Let M1, ...,Mr ⊆ 0, 1m be rsubsets with |M1|+ ...+ |Mr| ≤ 2m(1−ε). Furthermore, let

X := x ∈ 0, 1n |Ei(x) ∈Mi ∀ i = 1, ..., rsuch that it would not fulll the requirement of input-restricting, i.e.:

|X | > δ · (|M1|+ ...+ |Mr|) (3.6.1)

and let M :=⋃ri=1Mi. According to the denition of the Ei(·) for i = 1, .., D, we have

Γ(X ) ⊆M and therefore, |Γ(X )| ≤ |M|. We distinguish now two cases:

Case 1. |X | ≤ K.With γ = 1/δ, we get the following contradiction:

|M| ≥ |Γ(X )| ≥ 1δ· |X |

>1δ· δ · (|M1|+ ...+ |Mr|) (3.6.2)

≥ |M|

where we used Equation (3.6.1) at Step (3.6.2).

Case 2. |X | > K.We use the denition of ε and get

2m(1−ε) < 2m(1−(1− log(γK)m

)) = γK.

Let X ′ ⊂ X such that |X ′| = K. This gives

γK > 2m(1−ε) ≥ |M1|+ ...+ |Mr| ≥ |M| ≥ |Γ(X )| ≥ |Γ(X ′)| ≥ γ · |X ′| = γK

which is again a contradiction.

Therefore, |X | ≤ δ · (|M1|+, ...,+|Mr|) must hold and not Equation (3.6.1) which leads toIn being input-restricting.

Hence, In satises all requirements needed to be an input-restricting function family andwe state the result in the next lemma.

Lemma 3.34. Let n be such that n > m. Assume that there exists an explicit (2n,K)×(D)→(2m, γ) expander graph G = (V1,V2, E) with poly(n) left-degree D where V1 = 0, 1n and

V2 = 0, 1m. Then, for all ε > 0 such that ε > 1 − log(Kγ)m for m large enough, there

exists an explicit (n, δ, ε)-input-restricting family of functions with δ = 1/γ and cardinalityr := D + dn/me. If G is injective than there exists the same input-restricting family offunctions but with smaller cardinality r := D

For the application described in Section 1.2 we want ε being as small as possible such thatthe number of allowed queries 2(1−ε)·m exceed the so called birthday barrier O

(2n/2

). Hence,

together with m ∈ O(n), we require a big upper bound K of the order 2Θ(n).In our derivation of input-restricting functions, we assumed the existence of an explicit

(N, 0,K)×(D)γ→ (M) expander graph with polynomially-bounded left-degree andK ∈ 2Θ(n).

The purpose of the next Chapters 4 and 5 will be to nd an explicit construction of such anexpander graph and in Section 5.4 we will actually present a construction which leads to anexpander graph satisfying the requirements stated in Lemma 3.34 in computationally-theoreticterms.

38

3.7. NOTATIONS USED IN THE LITERATURE

3.7 Notations Used in the Literature

In Section 3.1 and 3.2 we introduced a generalized notion of expander graphs and conductors.We give here a short reference to the notations and notions of dierent combinatorial func-tions and expander graphs used in the literature which are special cases of our generalizedconductors, and show how they map our denition of conductors and expander graphs. Wealso give citations to publications as an example for the usage of the described notions.In the literature, the notion of a (K, γ)-expander graph G = (V1,V2, E) with left-degree D

is often used [TUZ01]. In our framework, G is a special case of expander graphs, namely, an

(|V1|, 0,K)× (D)γ→ (|V2|) expander graph. Note that outside my thesis, the lower bound for

the size of the set X is always assumed to be zero.In the setting of conductors, a (k, α, ε)-conductor C : 0, 1n×0, 1d → 0, 1m as described

in [CRVW02, MT07] would be in our generalized framework an (n, 0, k) × (d) →ε (m, k′ 7→k′+α) generalized conductor and the notion of an (n,m, d, k, ε)-extractor as dened in [Tre98,TUZ01], is an extracting (n, k, k)× (d)→ε (m,m) generalized conductor. For the case, wherewe have a xed input min-entropy as for extractors, but have km(k′) = km for a value kmnot necessarily equals to the output length, the notion of an (n, k) →ε (m, km) condenser isknown [TUZ01].

39

CHAPTER 3. EXPANDERS AND CONDUCTORS

40

4 Basic Constructions

The purpose of this chapter is to introduce some basic constructions of conductors which wewill use later in Chapter 5 to construct new conductors by using the composition theoremsof Section 3.3, and in particular, to construct an expander graph needed for the applicationdescribed in Section 1.2. In Section 4.1, we introduce a construction due to Trevisan1 [Tre98].Then, in Section 4.2, we describe an extracting conductor which works for small min-entropiesand which uses hash functions as basic elements. Finally, we explain a construction of a losslessconductor in Section 4.3.

4.1 Trevisan's Extracting Conductor

In this section, we present a construction of an explicit strong extracting conductor due toTrevisan [Tre98]. This conductor is of special interest, because it is often used as a sub functionto construct other conductors.

The main idea of Trevisan's construction is to use a special kind of a pseudo-random gen-erator (PRG) due to Nisan-Wigderson [NW94] which relies on a hard to compute function.This PRG can be used to obtain a strong injective extracting conductor.

4.1.1 Nisan-Wigderson Pseudo-Random Generator

A pseudo-random generator (PRG) is a function for which there is no small circuit distinguish-ing the PRG output eciently from a uniformly distributed string. In [NW94], it was shownthat if we have a function f : 0, 1` → 0, 1 which is on average-case hard to compute, thenwe can construct a PRG with the help of black-box calls to the function f . With hard tocompute, we mean that for all algorithms A with circuit complexity at most 2γ` for a constantγ > 0, we have PX [A(x) = f(x)] < 1/2 + negligible.

Ww will present an example for such a hard function f in Section 4.1.2, where we use anerror-correcting (EC) encoding of an n-bit string with enough min-entropy and interpret theencoding as a truth table of a boolean function. But for this section, we just assume theexistence of such a hard function f and show how to construct the NW generator.

The NW generator NWS ,f : 0, 1d → 0, 1m is dened as

NWS ,f (y) := f(y|S1) · · · f(y|Sm),

where S = S1, ...,Sm is a collection of subsets of [d] each of size ` and y|Si is the string in0, 1` obtained by projecting y onto the coordinates specied by Si.To ensure that the y|Si look like independently random `-bit strings, the set S has to be a

weak (m, d, `, ρ)-design as we will show later.

1which is one of the most famous extractor constructions, or in our setting an extracting conductor construc-tion

41

CHAPTER 4. BASIC CONSTRUCTIONS

Note. The original NW generator used (m, d, `, a)-designs but Vadhan [Vad98] showed thata weaker notion of designs also meets the requirements, namely so called weak designs. Weuse a stronger notion of weak designs than [Vad98], which is required later to get a conductorwith kmin 6= kmax with Trevisan's construction. We go more into detail in Section 4.1.2.

Denition 4.1 (weak design). A family of sets S = S1, ...,Sm ⊂ [d] is a weak (m, d, `, ρ)-design if

1. For all i, |Si| = `

2. For all i,∑

j<i 2|Si∩Sj | ≤ ρ · (i− 1).

Let now f be a bad function for NW, i.e. there exists a distinguisherD which can distinguishthe output of the NW pseudo-random generator from a random string with success probabilityof at least 1/2 + ε and hence, NW is not a pseudo-random generator as we prove below. Inthis case, D can be used to calculate a function g : 0, 1` → 0, 1 which has the sameoutput as the hard function f for a fraction of 1/2 + ε/m inputs (we say g approximates fwithin 1/2+ε/m). If we use a weak (m, d, `, ρ)-design S , the number of needed functions g toapproximate all bad functions f , given a distinguisherD, is upper bounded by 21+logm+ρ·(m−1).Additionally, we show that even revealing the random d-bit string used for NW does notcompromise the security because the hard function f is not eciently computable and hence,one cannot use the seed for distinguishing the NW output from a truly random bit string. Inparticular, we prove the following lemma.

Lemma 4.2. Let S be a weak (m, d, `, ρ)-design, and D : 0, 1m → 0, 1 a distinguisher.Then there exists a family GD of at most 21+logm+ρ·(m−1) functions such that for every functionf : 0, 1` → 0, 1 satisfying

|P[D(NWS ,f (Ud), Ud) = 1

]− P [D(Um, Ud) = 1] | ≥ ε (4.1.1)

there exists a function g : 0, 1` → 0, 1, g ∈ GD, such that g(·) approximates f(·) within1/2 + ε/m.

In the proof of Lemma 4.2 we show additionally that every such function g can be describedby using at most 1 + logm+ ρ · (m− 1) bits, given D as a circuit. We call this description bitstring for function g the advice string . Hence, given the advice string and the distinguisherD, we can construct a reconstruction function g for the function f . We will talk more aboutthis when introducing a strong condensing conductor in Section 4.3.

Proof of Lemma 4.2. Our proof method is based on [Tre98, Vad98] but is extended to thecase where we append the used random bit string y to the output of the NW pseudo-randomgenerator. For the proof, we use the so called hybrid argument. If the distinguisher D can dis-tinguish (NWS ,f (y), y) from (Um, Ud) with success of at least ε, then there must be at least oneposition in the output string where this distinction is noticeable otherwise D can not nd anydierence between the two output strings and therefore, does not distinguish (NWS ,f (y), y)from (Um, Ud). Let H0, ...,Hm be m + 1 distributions with Hi := (v1...viri+1...rmy1...yd)where v = NWS ,f (y) for a random y and r ∈ 0, 1m be a truly random bit string. It iseasy to see that Hm is (NWS ,f (y), y) and H0 is the uniform distribution over 0, 1m×0, 1d.

42

4.1. TREVISAN'S EXTRACTING CONDUCTOR

Given the Equation (4.1.1), there must be a bit b0 ∈ 0, 1 such that

P[D(NWS ,f (Ud), Ud)⊕ b0 = 1

]− P [D(Um, Ud)⊕ b0 = 1] ≥ ε.

To simplify the notation, we just dene a new distinguisher D′(·) := D(·)⊕ b0. Note that thevalue of b0 can be easily found by choosing b0 ∈ 0, 1 such that

P[D(NWS ,f (Ud), Ud)⊕ b0 = 1

]− P [D(Um, Ud)⊕ b0 = 1]

is maximized. Using the distributions Hi and the new distinguisher D′ this can be rewrittento

ε ≤ P[D′(NWS ,f (Ud), Ud) = 1

]− P

[D′(Um, Ud) = 1

]= P

[D′(Hm) = 1

]− P

[D′(H0) = 1

]=

m∑i=1

P[D′(Hi) = 1

]− P

[D′(Hi−1) = 1

]and hence, there must be an index i with

P[D′(Hi) = 1

]− P

[D′(Hi−1) = 1

]≥ ε/m

and

Hi = f(y|S1) · · · f(y|Si)ri+1 · · · rmy1 · · · ydHi−1 = f(y|S1) · · · f(y|Si−1

)ri · · · rmy1 · · · yd.

We can assume without loss of generality that Si = 1, .., `. Let y := (x, z) where x = y|Si ∈0, 1` and z = y|[d]−Si ∈ 0, 1d−` the not taken bits. Further, let hj(x, z) := y|Sj for everyj < i and y = (x, z). We see that hj(x, z) depends on |Si ∩ Sj | bits of x and on `− |Si ∩ Sj |bits of z. Putting everything together and using the fact that PX [1] = E[X], we have

Eri,..,rm,u,x,z

[D′(f(h1(x, z)), · · · , f(hi−1(x, z)), f(x), ri+1, · · · , rm, y1, · · · , yd)

]− E

ri,..,rm,y,x,z

[D′(f(h1(x, z)), · · · , f(hi−1(x, z)), ri, ri+1, · · · , rmy1, · · · , yd)

]= E

ri,..,rm,y,x,z

[D′(f(h1(x, z)), · · · , f(hi−1(x, z)), f(x), ri+1, · · · , rmy1, · · · , yd)

−D′(f(h1(x, z)), · · · , f(hi−1(x, z)), ri, ri+1, · · · , rmy1, · · · , yd)]

≥ ε/m.

Without loss of generality2, we x the random ri+1, ..., rm to some values ci+1, ..., cm and z tosome value w because z is independent of x.

Eri,x

[D′(f(h1(x,w)), · · · , f(hi−1(x,w)), f(x), ci+1, · · · , cm, y1, · · · , yd)

−D′(f(h1(x,w)), · · · , f(hi−1(x,w)), ri, ci+1, · · · , cm, y1, · · · , yd)]≥ ε/m.

2using an averaging argument

43

CHAPTER 4. BASIC CONSTRUCTIONS

We rename ri to b and dene now a new function F : 0, 1`+1 → 0, 1m with F (x, b) =f(h1(x,w)), · · · , f(hi−1(x,w)), b, ci+1, · · · , cm, y1, · · · , yd for a xed w. Note that the bitsy1, ..., yd are fully described by x and w and we can therefore omit y1, ..., yd as inputs for thefunction F . Inserting in above equation gives

Px,b[D′(F (x, f(x))) = 1

]− Px,b

[D′(F (x, b)) = 1

]> ε/m.

Therefore, function F and D′ can be used to distinguish the pair (x, f(x)) from U`+1. Wewill show how one can use F and D′ to construct a function g(·) which agrees with f(·) ona fraction of 1/2 + ε/m of the domain. Choose b ∈R 0, 1 and compute D′(F (x, b)). IfD′(F (x, b)) = 1 then output g(x) = b, else output g(x) = 1 − b. The probability that g(x)agrees with f(x) is

Pb,x [g(x) = f(x)] = Pb,x [g(x) = f(x)|b = f(x)]Pb,x [b = f(x)]+ Pb,x [g(x) = f(x)|b 6= f(x)]Pb,x [b 6= f(x)]

=12Pb,x

[D′(F (x, b)) = 1|b = f(x)

]+

12Pb,x

[D′(F (x, b)) = 0|b 6= f(x)

]=

12

+12(Pb,x

[D′(F (x, b)) = 1|b = f(x)

]− Pb,x

[D′(F (x, b)) = 1|b 6= f(x)

])=

12

+ Pb,x[D′(x, f(x)) = 1

]− Pb,x

[D′(x, b) = 1

]≥ 1

2+

ε

m.

Putting everything together, we see that for describing function F we need logm bits tospecify i, we need one bit for b and for every j < i and x we have to describe f(hj(x,w)) andfor j > i we have to describe cj . Note that for xed w, the outcome hj(x,w) for j < i dependsonly on |Si∩Sj | bits of x. Hence, although f cannot be eciently computed, we can just giveall possible outputs for inputs hj(x,w) to describe function g, which are 2|Si∩Sj | values of ffor every j < i. Overall, g(·) can be described by using 1 + logm +

∑j<i 2|Si∩Sj | + (m − i)

≤ 1 + logm+ ρ · (m− 1) bits. Thus, there are at most 21+logm+ρ·(m−1) possible functions g(·)and therefore |GD| ≤ 21+logm+ρ·(m−1).

4.1.2 Making the NW Generator an Injective Extracting Conductor

In this section, we show how to get an injective strong extracting conductor from the NWgenerator.The NW generator extends d truly random bits to m almost truly random bits. To achieve

this extension, one needs a hard predicate function f . When we declare an (n, kmin, kmax)×(d)→ε (m, km(k′)) extracting conductor Ext, we do not just have the d random bits as input,but also n bits from a k′-source where k′ ∈ [kmin, kmax]. The idea is to use this n additionalbits for dening a hard predicate function f .In particular, one uses an error-correcting code EC : 0, 1n → 0, 1n code and computes

the encoding x = EC(x) of the n input bits x. Further, we interpret the encoding x as atruth table of a predicate function x : 0, 1` → 0, 1 with ` = log n and we will show thatfunction x is on average a hard function when the n-bit string is drawn from a distributionwith sucient min-entropy. To illustrate the idea, let EC(x) = x = b1b2 · · · bn where bi is thei-th bit of the EC encoding x. Then we can interpret b1b2 · · · bn as a function table as Table4.1.

44

4.1. TREVISAN'S EXTRACTING CONDUCTOR

all values ∈ 0, 1` output

00...00 b100...01 b2· · · · · ·

11...10 bn−1

11...11 bn

Table 4.1: Function table of a function 0, 1` → 0, 1

We dene Ext : 0, 1n × 0, 1d → 0, 1m as:

ExtS ,EC(x, y) := NWS ,x(y) = x(y|S1) · · · x(y|Sm).

where x ∈ 0, 1n is the error-correcting encoding EC(x) of x.The EC encoding should be such that in any Hamming ball3 of suciently small radius

there are only few codewords.

Lemma 4.3. [Tre98] For every n and 0 ≤ δ < 1/2, there is a polynomial-time computableencoding ECδ : 0, 1n → 0, 1n where n = poly(n, 1/δ) such that every ball of Hammingradius (1/2− δ)n in 0, 1n contains at most 1/δ2 codewords. Furthermore, n can be assumedto be a power of 2.

For our conductor, we use such an error-correcting encoding with δ = ε/m, we will showlater in the proof of Theorem 4.5 why. We state now that if δ = ε/m, there are only fewx ∈ 0, 1n where its encoding x is not a truth table of a hard predicate function and wouldtherefore compromise the randomness of the NW generator output.

Lemma 4.4. Let EC : 0, 1n → 0, 1n be an error-correcting code fullling Lemma 4.3 withδ = ε/m. Then for ExtS ,EC(x, y) = NWS ,x(y) = x(y|S1) · · · x(y|Sm) with x = EC(x) andfor every distinguisher D : 0, 1m → 0, 1, there are at most 21+logm+ρ(m−1) · (ε/m)2 stringsx ∈ 0, 1n such that

|P[D(ExtS ,EC(x, Ud), Ud) = 1

]− P [D(Um, Ud) = 1] | ≥ ε. (4.1.2)

Proof. Figure 4.1 is a helpful illustration for this proof. If x is a bad string, i.e. it is suchthat Equation (4.1.2) holds, then from Lemma 4.2 we know that there exists a functiongi : 0, 1` → 0, 1 in GD such that g(·) approximates x(·) within 1/2 + ε/m = 1/2 + δ.Additionally, we know that there are at most |GD| ≤ 21+logm+ρ(m−1) such functions gi's. In

particular, every function g ∈ GD can be an approximation of at most(mε

)2dierent functions

x(·), because of the EC encoding according to Lemma 4.3 and hence, every Hamming ball

with relative distance 1+ε/m to g contains at most(mε

)2encodings x of dierent x's. Overall,

|GD| ·(mε

)2 = 21+logm+ρ(m−1) · (m/ε)2 is an upper bound on the number of bad strings x forwhich Expression (4.1.2) can occur.

3a Hamming ball with radius r for a string x contains all strings which diers in at most r positions from x

45

CHAPTER 4. BASIC CONSTRUCTIONS

gi

12 + ε

m

gi+1

12 + ε

m

g1

12 + ε

m

g2

12 + ε

m

g|GD|

12 + ε

mf

Figure 4.1: Function family GD

Note. If we did not use error-correcting codes, then there would be too many functions withrelative distance t = 1/2 + ε/m to x. Let w = 1 + logm+ρ(m−1) then the number of stringswith Hamming distance at most t · w is:

tw∑i=0

(w

tw

)≈ 2w·h(t) ≈ 2w = 21+logm+ρ(m−1)

In the rst step we used the upper bound from Lemma 2.20 and in the second step the factthat t is about 1/2 and for that the binomial entropy function is 1. Overall, we would have21+logm+ρ(m−1) · 21+logm+ρ(m−1) strings x satisfying Expression (4.1.2) which are too many.

Theorem 4.5. Let n > m, and ε > 0 be a constant. If S is a weak (m, d, `, ρ)-design forρ = (kmax − 3 log(m/ε)− 1)/m, and EC from Lemma 4.3 with δ = ε/m and ` = log(n), thenExtS ,EC : 0, 1n × 0, 1d → 0, 1m is an injective strong extracting (n, 0, kmax)× (d)→2ε

(m, km(k′)) conductor with km(k′) = (k′ − 3 log(m/ε) − 1)/ρ and k′ ∈ [0, kmax]. If S can beconstructed in poly(m, d) time than ExtS ,EC is ecient.

Proof. As in the probabilistic proof of Section 3.5.2, we x the input min-entropy to a valuek′ ∈ [0, kmax] and denote the function m′ := m′(k′) = km(k′) which does describe the rstm′(k′) output bits. Note that m = m′(kmax) and with the weak (m, d, `, ρ)-design we have forevery Si ∑

j<i

2|Si∩Sj | ≤ ρ · (i− 1).

Hence, we can just ignore the last m−m′ subsets of S and get a new weak (m′, d, `, ρ)-designS ′ = S1, ...,Sm′ .We show now that for input min-entropy k′ the output is 2ε-close to Y Ud, where Y is a

m′-source.Redoing the proof of Lemma 4.2 and 4.4 for the input-min-entropy k′ and output length

m′, we see that there are at most 21+ρ(m′−1)+log(m′)+2 log(m/ε) possible bad strings x for whichcondition

|P[D(Ext′S ,EC(x, Ud), Ud) = 1

]− P [D(Um′ , Ud) = 1] | > ε

46

4.1. TREVISAN'S EXTRACTING CONDUCTOR

holds, where we denote with Ext′S the rst m′ output bits of ExtS . Because x is selectedfrom a k′-source, the probability of each string is at most 2−k

′to be selected. In total, the

probability to select a bad string x is

21+ρ(m′−1)+log(m′)+2 log(m/ε) · 2−k′ ≤ 2 ·m2/ε2 ·m′ · 2ρm′ · 2−k′

= 2 ·m2/ε2 ·m′ · 2k′−log(m/ε)−1 · 2−k′ (1)

≤ 2 ·m3/ε2 · 1/2 · ε3/m3 (2)

= ε

where at Step (1) we used m′ = (k′ − 3 log(m/ε)− 1)/ρ and at Step (2) we used m′ ≤ m.

Then we have

|P[D(Ext′S ,EC(X,Ud), Ud) = 1

]− P [D(Um′ , Ud) = 1] |

≤ Ex∈X

[|P[D(Ext′S ,EC(x, Ud), Ud) = 1

]− P [D(Um′ , Ud) = 1] |

](2)

=∑x∈B

P [X = x] · |P[D(Ext′S ,EC(x, Ud), Ud) = 1

]− P [D(Um′ , Ud) = 1] |

+∑x/∈B

P [X = x] · |P[D(Ext′S ,EC(x, Ud), Ud) = 1

]− P [D(Um′ , Ud) = 1] |

≤ ε+ ε (3)

= 2ε

where (2) is an application of the triangle inequality and at (3) we used the upper bound 1for the expressions

|P[D(Ext′S ,EC(x, Ud), Ud) = 1

]− P [D(Um′ , Ud) = 1] | if x ∈ B

and

P [X = x|x /∈ B].

ExtS ,EC(x, y) is therefore 2ε-close to Y Ud for a km(k′)-source Y .It remains to show that ExtS (x, y) is an injective conductor. Recall that for being injective,

the following must be true:

∀x′ 6= x ∈ 0, 1n : ∃y ∈ 0, 1d : ExtS ,EC(x, y) 6= ExtS ,EC(x′, y).

We show the injectivity by contradiction: We assume that

∃x′ 6= x ∈ 0, 1n : ∀y ∈ 0, 1d : ExtS ,EC(x, y) = ExtS ,EC(x′, y).

This would mean that:

x(y|S1) · · · x(y|Sm) = x′(y|S1) · · · x′(y|Sm)

We x an i ∈ [1, ...,m]. Then we have that

|y|Si | for an i and ∀y| = 2|Si| = 2`.

47

CHAPTER 4. BASIC CONSTRUCTIONS

Because x(·) (and x′(·)) is a function from 0, 1` to 0, 1, this means that x and x′ are equalfor all possible inputs and in particular x = EC(x) = x′ = EC(x′). This contradicts theproperty of the EC encoding because x 6= x′. Hence, such a x′ does not exist.Note that if the weak design S can be constructed in poly(m, d) time than ExtS ,EC(x, y)

can be computed in poly(n, d) time because the EC encoding is eciently computable (inpoly(n)). Hence, ExtS ,EC(x, y) is explicit in this case.

4.1.3 Example Instantiation of Trevisan's Extracting Conductor

In this section, we present a concrete instantiation of Trevisan's strong extracting conductorand calculate its parameters. We will set m := kmax/2, hence the conductor we will get,will extract half of the maximal input min-entropy and preserves the randomness of the seed.With this setting, we get

ρ = 2 · (kmax − 3 log(kmax/(2ε))− 1)/kmax = 2− 6 log(kmax/ε)− 4kmax

which has to be at least 1. Thus, we choose ε such that 6 log(kmax/ε)−4kmax

< 1 which is achievedby the choice

ε > 2log(kmax)−kmax/2+2/3.

Example for an EC Encoding. We use the notation [N,K,D]Q for linear error-correctingcodes where N is the output length, K is the input message length, D the minimal Ham-ming distance of the code and Q the alphabet size. A possible linear error-correcting codefullling the requirements of Lemma 4.3 is the concatenation4 of a Reed-Solomon (RS) code[p, k, p− (k − 1)]q

5 and a Hadamard (Ha) code [p, log p, p2 ]. The encoding and decoding ofthis concatenated code can be done in polynomial time. Let p be a power of 2 and σ > 1,then we can set q := p, k := p/σ and by the fact that a [p, k, d]p code is also a [p, k, d− 1]pcode we get the RS code

[p, pσ ,

σ−1σ p]pcode. Concatenating the RS code with the Ha code

leads to the following linear code[p,p

σ,σ − 1σ

p

]p

·[p, log p,

p

2

]2⇒[p2,

p

σlog p,

(1/2− 1

)· p2

]2

with relative Hamming distance 12 − 1

2σ .

The strings x ∈ 0, 1n are used as input for the[p2, pσ log p, (1/2− 1

2σ )p2]2code and hence,

we have n = (p/σ) · log p and n = p2. We see that p ≤ σn and thus, n is smaller than (σn)2.We show that this encoding actually has the property required by Lemma 4.3. To achievethis, we use the following bound from [Tre98].

Lemma 4.6. Suppose EC is an error-correcting code with relative minimum distance ≥ 1/2−β/2. Then every Hamming ball of relative radius 1/2−√β contains at most 1/(3β) codewords.

4by rst interpreting the input message as a messages in GF (qk) of length K and applying an [N,K,D]qk

code and second, interpreting the encoding as N messages of length k and applying N times an [n, k, d]qcode on it, we get an [nN, kK, dD]q code

5k ≤ p ≤ q

48

4.1. TREVISAN'S EXTRACTING CONDUCTOR

We set β := 1/σ and δ := 1/√σ. Applying Lemma 4.6 we get at most σ/3 ≤ σ = 1/δ2

codewords in a Hamming ball of radius 1/2− 1/√σ = 1/2− δ which fullls the requirements

of Lemma 4.3.For our instantiation of Trevisan's conductor, we set σ = m2/ε2 to get δ ≤ m/ε. We get

n = n2m4/ε2 and for the parameter `:

` = log(n)

= log(n2m4/ε4)= 2 log n+ 4 logm+ 4 log(1/ε)< 6 log n+ 4 log(1/ε),

where at the last step, we assumed that n = c ·m for a constant c > 1.

Example for a Weak Design. The existence of an eciently constructible weak design withalmost ideal d is stated in [Vad98]:

Lemma 4.7. For every `,m ∈ N and ρ > 1, there exists a weak (m, d, `, ρ)-design S =S1, ...,Sm ⊂ [d] with

d =⌈`

ln ρ

⌉· `.

Moreover, such a weak design can be found in poly(m, d) time.

We denote a new function ψ(kmax, ε) :=[ln(2− 6 log(kmax/ε)−4

kmax)]−1

which computes the

value 1/ ln(ρ) depending on kmax and ε. We will omit the input parameters and write forshort-hand just ψ. Hence, we know now that we can set the seed length to

d = dψ · `e · ` ≤ ψ · `2 + `.

Putting Everything Together. With the chosen EC and weak design, the value of d is

d ≤ ψ · `2 + ` ≤ ψ · (6 log(n) + 4 log(1/ε))2 + 6 log(n) + 4 log(1/ε)

= 36 · ψ · log2(n) + ψ · 48 · log(n) · log(1/ε) + 6 log(n) + ψ · 16 log2(1/ε) + 4 log(1/ε)

∈ O(log2(n))

Overall, we get a strong injective extracting (n, 0, kmax)× (d) → (m, km(k′)) conductor withm = kmax/2 and km(k′) = (k′ − 3 log(m/ε) − 1)/ρ for k′ ∈ [0, kmin] which needs O(log2(n))truly random bits and has maximal entropy loss ∆(kmax) = kmax/2.

4.1.4 Expander Graph Construction

Section 3.6 introduced an application of expander graphs and showed how to build input-restricting functions out of an expander. Furthermore, we showed in Theorem 3.27 that everyconductor is actually an expander. Hence the Trevisan extracting conductor can be used toconstruct an expander graph. If we apply Theorem 3.27, we get the following expander graph:

49

CHAPTER 4. BASIC CONSTRUCTIONS

Lemma 4.8. Let Ext be the strong extracting (n, 0, kmax) × (d) →2ε (m, km(k′)) conductorfrom Section 4.1.3. Then Ext is also an (2n, 0,Kmax = 2kmax) × (D) → (2m+d, γ = (1 −2ε)2d+m−kmax) expander graph with left-degree D = 2d where d = ψ(kmax, ε) · l2 + l.

The problem of this expander graph is that it has superpoly(n) degree D because of `2 =O(log2(n)) and therefore, not interesting for the application in [MT07] presented in Section1.2. We are going to introduce in Section 5.4 a construction of an expander graph which hasonly poly(n) degree and hence, is applicable for [MT07].

4.2 Constructing Conductors from Hash Functions

In this section, we investigate in extracting conductors which use a family of hash functionsH = h : 0, 1n → 0, 1m as seed instead of truly random bits. We can dene a strongextracting conductor ExtSZ : 0, 1n × [H] → 0, 1m where ExtSZ(x, h) = h(x) and hrandomly chosen from a class of hash functions H.

In the following, we show how to get such a strong extracting conductor based on [SZ98]and additionally to [SZ98], we calculate the exact values of the seed length.

Let now H being a universal family of hash functions. A universal family H of hashfunctions is dened as a set of hash functions h : 0, 1n → 0, 1m such that for two dierentx, y ∈ 0, 1n, we have

|h ∈ H : h(x) = h(y)||H| ≤ 2−m.

We have the following well known lemma for universal families of hash function.

Lemma 4.9 (Leftover Hash Lemma [IZ89]). Let X be a k-source over 0, 1n. Let H be auniversal family of hash functions mapping 0, 1n → 0, 1m=k−∆. Then the distribution of(h, h(x)) is 2−∆/2 = ε/2-close to Uu+m if (h, x) is chosen uniformly at random from H ×X,where u are the number of bits needed to describe the family H.

We could apply the Leftover Hash Lemma to show that ExtSZ is indeed a strong extractingconductor, but the problem is that more than n bits are needed to describe such a universalfamily H of hash functions. Therefore, to avoid a big value of u, we let H be a so called t-wiseρ-biased sample space and show that the statement of the Leftover Hash Lemma in [IZ89] isalso valid for this t-wise ρ-biased sample space H.

Definition 4.10 (t-wise ρ-biased sample space). A set S ⊆ {0,1}^ℓ is called a t-wise ρ-biased sample space of ℓ-bit vectors if, for s ∈ {0,1}^ℓ chosen uniformly at random from S, we have for all I ⊆ [ℓ] with |I| ≤ t and for all |I|-bit strings b that

|P_{s∈S}[s|_I = b] − 2^{−|I|}| ≤ ρ.   (4.2.1)
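Condition (4.2.1) can be checked by brute force for toy sizes; the sketch below (our own illustration, exponential in ℓ and t) computes the maximal bias of a given sample space.

    from itertools import combinations, product

    def max_bias(S, ell, t):
        # S: list of ell-bit strings; returns the maximum over |I| <= t and b of
        # | P_{s in S}[ s|_I = b ] - 2^(-|I|) |
        worst = 0.0
        for size in range(1, t + 1):
            for I in combinations(range(ell), size):
                for b in product("01", repeat=size):
                    p = sum(all(s[i] == bit for i, bit in zip(I, b))
                            for s in S) / len(S)
                    worst = max(worst, abs(p - 2 ** -size))
        return worst

    # the full space {0,1}^3 has bias 0 for every t <= 3
    full = ["".join(bits) for bits in product("01", repeat=3)]
    print(max_bias(full, ell=3, t=2))  # 0.0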

We view the functions h ∈ H as ℓ-bit vectors of the sample space H with ℓ = m·2^n, because every function f : {0,1}^n → {0,1}^m can be described by m·2^n bits. Further, we set ε = 2^{1−∆/2}, where ∆ is a constant. As in the proof of Theorem 4.5, we fix the min-entropy to kmax and assume that the output length m(k′) is a function depending on the input min-entropy, describing the first m(k′) output bits which are almost randomly distributed. Let m(k′) := km(k′) := k′ − ∆ for k′ ∈ [0, kmax]. Because we have the min-entropy fixed to k′ = kmax, we have the output length m = m(kmax) = kmax − ∆. We will argue later why we can generalize the results to min-entropies in [0, kmax].

Let the parameters of the sample space H be t = 2m and ρ = 2^{−2m−∆} · (ε²·2^∆ − 1). Note that with these definitions, we have ρ ≥ 0.

According to Lemma 2.12, it is sufficient to show that the collision probability of the distribution of (h, h(x)) is smaller than (1 + 4ε²)/(|H|·|Um|) in order to prove that the output of ExtSZ is ε-close to Uu × Um.

Let A be the distribution of (h, h(x)). The collision probability of A is

col(A) = P_{x1,x2∈X, h1,h2∈H}[h1 = h2 ∧ h1(x1) = h2(x2)].

Therefore,

col(A) = (1/|H|) · P_{x1,x2∈X, h∈H}[h(x1) = h(x2)]
       ≤ (1/|H|) · (P_{x1,x2∈X}[x1 = x2] + P_{x1,x2∈X, h∈H}[h(x1) = h(x2) | x1 ≠ x2])
       = 1/(|H|·2^{kmax}) + (1/|H|) · P_{x1,x2∈X, h∈H}[h(x1) = h(x2) | x1 ≠ x2]
       ≤ 1/(|H|·2^{kmax}) + (1/|H|) · max_{x1≠x2} P_{h∈H}[h(x1) = h(x2)].   (1)

For any x1 ≠ x2 ∈ X we have

(1/|H|) · P_{h∈H}[h(x1) = h(x2)]
  = (1/|H|) · Σ_{b∈{0,1}^m} P_{h∈H}[h(x1) = h(x2) = b]
  ≤ (1/|H|) · Σ_{b∈{0,1}^m} (2^{−t} + ρ)
  = (1/|H|) · Σ_{b∈{0,1}^m} (2^{−2m} + 2^{−2m−∆}·(ε²·2^∆ − 1))
  = (1/|H|) · 2^m · (2^{−2m} + 2^{−2m}·(ε² − 2^{−∆}))
  = (1/(|H|·2^m)) · (1 + ε² − 2^{m−kmax})
  = (1 + ε²)/(|H|·2^m) − 1/(|H|·2^{kmax}),

where we used 2^{−∆} = 2^{m−kmax}, since m = kmax − ∆.

Inserting into Equation (1) yields col(A) ≤ (1 + ε²)/(|H|·2^m) ≤ (1 + 4ε²)/(|H|·2^m). Thus, the distribution A of (h, h(x)) is ε-close to Uu × Um.

At this point, we have shown that ExtSZ is a strong extracting (n, kmax, kmax) × (u) → (m, k′ ↦ k′ − ∆) conductor for a constant ∆. To see that ExtSZ is also a strong extracting (n, 0, kmax) × (u) → (m, km(k′) = k′ − ∆) conductor for k′ ∈ [0, kmax], we show that the min-entropy is condensed in the first km(k′) output bits.

Recall that we assumed m to be a function depending on k′ and have set m(k′) = km(k′). Let now k′ = kmax − r for some r. Thus, we view m(k′) = k′ − ∆ as a function depending on r and have m(r) = kmax − r − ∆ = m − r.


Then, we ignore the last r bits of the output h(x). We will now show that the new sample space H′, containing description strings of functions h′ : {0,1}^n → {0,1}^{m−r}, still fulfills the requirement of being a t(r)-wise ρ(r)-biased sample space with t(r) = 2(m − r) and ρ(r) = 2^{−2(m−r)−∆}·(ε²·2^∆ − 1). Note that t = t(0) and ρ = ρ(0).

We have to show that the difference between the collision probability of H′ and the collision probability of U_{m−r} is less than ρ(r). To achieve this, we show a stronger result which implies a small collision probability.

Write h′ for the function which outputs the first m − r bits of h. For fixed x1, x2 ∈ X and y1, y2 ∈ {0,1}^{m−r}, we get

|P_{h∈H}[h′(x1) = y1 ∧ h′(x2) = y2] − 2^{−2(m−r)}|
  = |Σ_{y′1,y′2∈{0,1}^r} (P_{h∈H}[h(x1) = y1‖y′1 ∧ h(x2) = y2‖y′2] − 2^{−2m})|
  ≤ Σ_{y′1,y′2∈{0,1}^r} ρ(0)   (2)
  = 2^{2r} · ρ(0) = 2^{−2(m−r)−∆}·(ε²·2^∆ − 1) = ρ(r),

where at Step (2) we used the fact that H is a t(0)-wise ρ(0)-biased sample space. Hence, H′ fulfills the requirement of being a t(r)-wise ρ(r)-biased sample space, and thus, for h′ chosen at random from H′ and x from the source X, h′(x) is ε-close to U_{m−r}. Therefore, the output of ExtSZ still contains m − r min-entropy, which is the value of km(k′) for k′ = kmax − r, and ExtSZ is an (n, 0, kmax) × (u) → (m, k′ ↦ k′ − ∆) conductor.

Finally, we show that the number of bits needed to describe the sample space H is smaller than n. To get this number of bits u, we apply a result of [AGHP90] which gives an upper bound on the number of bits needed to describe the t-wise ρ-biased sample space H.

Lemma 4.11. Let t be an odd integer. A t-wise ρ-biased sample space H of ℓ-bit vectors can be fully described using

2 · ⌈log(1/ρ) + log(1 + ((t − 1)/2) · log(ℓ + 1))⌉

bits, such that there exists an efficient algorithm calculating h(x) for a given h ∈ H and input x.

Inserting our parameters gives

u = 2 · ⌈log(1/ρ) + log(1 + ((t − 1)/2) · log(ℓ + 1))⌉
  = 2 · ⌈log(2^{2m} · (ε² − 2^{−∆})^{−1}) + log(1 + ((2m − 1)/2) · log(m·2^n + 1))⌉
  = 2 · ⌈2m − log(ε² − 2^{−∆}) + log(1 + ((2m − 1)/2) · log(m·2^n + 1))⌉
  = 2 · ⌈2m + ∆ − log 3 + log(1 + ((2m − 1)/2) · log(m·2^n + 1))⌉
  = 2 · ⌈∆ + log(1 + ((2m − 1)/2) · log(m·2^n + 1))⌉ + 4m − log 9
  ≈ ⌈2∆⌉ + 4 log m + 2 log n + 4m − log 9
  ≤ 6 log n + 4m + 4∆ = 6 log n + 4kmax,

where we used ε² − 2^{−∆} = 3 · 2^{−∆} for ε = 2^{1−∆/2}.


Overall, we get the following strong extracting conductor ExtSZ :

Theorem 4.12. For every n and kmax ≤ n, let ∆ = 2 · log(1/ε) + 2 be the constant entropy loss and therefore ε = 2^{1−∆/2}. Then there exists an explicit strong extracting (n, 0, kmax) × (d) →ε (m, k′ ↦ k′ − ∆) conductor ExtSZ : {0,1}^n × [H] → {0,1}^m for k′ ∈ [0, kmax], with m = kmax − ∆ and seed length d = 6 log n + 4kmax.

Note. Because we need O(kmax + log n) truly random bits, using this conductor makes sense only if the source min-entropy is relatively small, e.g., kmax ∈ O(log n).

4.3 Building a Strong Condensing Conductor from Trevisan's Conductor

Before we analyze the lossless conductor introduced in [TUZ01], we discuss a special property of the Trevisan extracting conductor ExtT and show afterwards how to obtain the desired strong condensing conductor⁶.

⁶ Recall that every strong condensing conductor is a lossless conductor.

4.3.1 Reconstructive Extracting Conductors

In Section 4.1, we showed that Trevisan's construction leads to an injective strong extracting conductor ExtT with the following method: if a distinguisher Dε exists which can distinguish the output of NW_{S,x̄}(Ud)⁷ from Um with advantage at least ε, then we can compute a short advice string, and with the help of Dε and this advice string, we can compute a function g which agrees with the function x̄ on a fraction of at least 1/2 + ε/m of its domain; i.e., g has relative Hamming distance less than 1/2 − ε/m to the EC encoding x̄. We then argued that the number of possible g's, and the number of x such that x̄ is close to some such g, are relatively small, and that if we choose the parameters right, we get an extracting conductor. But this implies that if we set the parameters badly, then the probability of choosing a bad x will be high. Choosing a bad x implies that there exists a short advice string describing an approximation of x̄ given a distinguisher Dε, and because the EC encoding can be efficiently decoded, we could actually reconstruct x with the help of the distinguisher Dε and the short advice string. The reconstruction would be achieved by converting the distinguisher into a next-bit predictor and then guessing x̄ bit by bit. Hence, NW_{S,x̄}(Ud) cannot be a good pseudo-random generator, and thus ExtT cannot be an extracting conductor.

⁷ Recall that x̄ is an EC encoding of input x, interpreted as the truth table of a boolean function x̄.

We now present an abstraction of conductors which prove their extracting property through the above argument.

Definition 4.13 (reconstructive extracting conductor). Let

• E : {0,1}^n × {0,1}^d → {0,1}^m,

• A : {0,1}^n × {0,1}^{dA} → {0,1}^a,

• R^T : {0,1}^a × {0,1}^{dA} × {0,1}^r → {0,1}^n


be functions, called the extractor, advice, and reconstruction functions, respectively. Then we call the triple (E, A, R) a (p, q)-reconstructive extracting conductor if for every distribution X over {0,1}^n and every next-bit predictor T : {0,1}^{<m} → {0,1} for E(X, Ud) with success p, we have

P_{x∈X, y∈U_{dA}, z∈Ur}[R^T(A(x, y), y, z) = x] ≥ q,

where R^T means that the function R makes black-box calls to the next-bit predictor T for reconstructing the input x.

The function A calculates the advice string of length a for every x ∈ {0,1}^n, and the function R^T reconstructs x with the help of the advice string if there exists a next-bit predictor T with success at least p. In general, the function R^T takes additional randomness of length r to reconstruct the value of x. Note that we use a next-bit predictor rather than a distinguisher for this definition. According to Lemma 2.15, every distinguisher can be transformed into a next-bit predictor, but with a loss in the advantage. Therefore, we have defined reconstructive extracting conductors directly with a next-bit predictor to avoid this loss.

The main idea of the conductor construction of this section is the observation that the advice function of a reconstructive extracting conductor is actually a strong condensing conductor if the extractor function E is not an extracting conductor. This can be seen by the following consideration: if E is not an extracting conductor, then R^T can reconstruct x (which was chosen from a k-source) with the help of the short a-bit advice string calculated by A. Therefore, the advice string must still contain k min-entropy, and with a < n we have non-trivially condensed the randomness of x. We will show how to choose the parameters of the extractor function E such that it is not an extracting conductor and such that R^T can fully reconstruct the original input x.

This is formalized by the following lemma.

Lemma 4.14. Let (E, A, R) be a (p, q = 1 − ε) reconstructive extracting conductor and X ⊂ {0,1}^n a subset with |X| = 2^k such that there exists a next-bit predictor T : {0,1}^{<m} → {0,1} for E(X, Ud) with success p. Then the distribution U_{dA} × A(X, U_{dA}) is 2ε-close to a distribution U_{dA} × D with dA + k min-entropy; that is, A is a strong condensing (n, k, k) × (dA) →2ε (a, k′ ↦ k′) conductor.

Proof. Let G be the set of good pairs such that (x, y) ∈ G if

P_z[R^T(A(x, y), y, z) = x] > 1/2.   (4.3.1)

Equation (4.3.1) implies A(x1, y) ≠ A(x2, y) if both pairs (x1, y) and (x2, y) are in G. In particular, we get a bijective mapping A′ on the set G if we define A′(x, y) := (A(x, y), y). Furthermore, we have

P_{x∈X, y, z}[R^T(A(x, y), y, z) = x] ≥ 1 − ε.   (4.3.2)

We now show by contradiction that P_{x∈X, y}[(x, y) ∈ G] ≥ 1 − 2ε must hold. We know

P_{x,y}[R^T(A(x, y), y) = x] = P_{x,y}[(x, y) ∈ G] · P_{x,y}[R^T(A(x, y), y) = x | (x, y) ∈ G]
                              + P_{x,y}[(x, y) ∉ G] · P_{x,y}[R^T(A(x, y), y) = x | (x, y) ∉ G]
                            ≤ P_{x,y}[(x, y) ∈ G]
                              + P_{x,y}[(x, y) ∉ G] · P_{x,y}[R^T(A(x, y), y) = x | (x, y) ∉ G].


We assume that P_{x,y}[(x, y) ∈ G] is strictly smaller than 1 − 2ε, so that P_{x,y}[(x, y) ∉ G] ≥ 2ε must hold. Furthermore, we can conclude from the definition of a good pair that P_{x,y}[R^T(A(x, y), y) = x | (x, y) ∉ G] ≤ 1/2. This gives

P_{x,y}[R^T(A(x, y), y) = x] < (1 − 2ε) + 2ε · (1/2) = 1 − ε,

which contradicts Equation (4.3.2); therefore P_{x,y}[(x, y) ∈ G] ≥ 1 − 2ε must hold. Hence, all but a fraction of 2ε of the pairs in X × U_{dA} are in G, and the distribution of (x, y) restricted to G is 2ε-close to X × U_{dA}. Because A′ is a bijective mapping on G, we can conclude that A′(X, U_{dA}) is 2ε-close to a distribution with min-entropy k + dA. □

In the next section, we present a concrete strong condensing conductor constructed out of Trevisan's conductor.

4.3.2 Strong Condensing Conductor Construction

We now introduce a strong condensing conductor which will be used as a basic building block for the extracting conductor of Section 5.3. First, we state that the Trevisan conductor is indeed a reconstructive extracting conductor, and second, we give an advice function A such that it is a strong condensing conductor.

Lemma 4.15. Let Ext_{S,EC} : {0,1}^n × {0,1}^d → {0,1}^m be the function of Section 4.1 with m = (kmax + d)/ε. Then there exist a function A and a function R such that (Ext_{S,EC}, A, R) is a (1 − ε, 1 − 10ε) reconstructive extracting conductor.

The proof will be given at the end of this section.

We now develop the final strong condensing conductor step by step: first, we construct a strong condensing conductor A : {0,1}^n × {0,1}^{dA} → {0,1}^a, and second, we take this conductor A and iteratively cascade it with itself to reduce the final output length.

The basic conductor A is constructed with the help of Trevisan's conductor; more precisely, conductor A will be the advice function of the reconstructive extracting conductor of Lemma 4.15. We fix an EC code EC : {0,1}^n → {0,1}^{2^ℓ} with relative distance > 0.1 and a weak (m, d, ℓ, ρ)-design S = {S1, ..., Sm} for m = (kmax + d)/ε. We will give the value of ρ later, when we compute the value of d.

The advice function A performs the calculation steps described in Algorithm 4.3.1.

We see that the output of A has maximal length Σ_{j<i} 2^{|Si∩Sj|} ≤ ρm because of the weak design used, and dA = log m + d − ℓ. Because log m < ℓ, we have dA ≤ d and can therefore use the same randomness space {0,1}^d for the advice function A as for the extracting function E.

The reconstruction function R^T then takes the output out of function A to reconstruct the input x. To achieve this, R^T reconstructs the indices j and γ used in Algorithm 4.3.1 to find the corresponding evaluations of the function x̄(·) in the output out of function A. With the help of the next-bit predictor T, the function R can reconstruct x̄ = EC(x) and hence x. The detailed description of how the reconstruction function R^T works can be found in the proof of Lemma 4.15 at the end of this section.

The next lemma states that the advice function A described in Algorithm 4.3.1 is indeed an injective strong condensing conductor.


Algorithm 4.3.1 Advice function A(x, u)
  out := empty string // output of A
  calculate x̄ = EC(x)
  interpret u as follows: the first log m bits describe i ∈ [m], the next (d − ℓ) bits describe β ∈ {0,1}^{d−ℓ}
  for all j < i and γ ∈ {0,1}^{|Si∩Sj|} do
    // construct a new string y ∈ {0,1}^d as follows:
    set β at positions [d]\Si of y
    set γ at positions Si ∩ Sj of y
    fill the remaining positions of y with zeros
    set ω(j, γ) := y|_{Sj}
    out := out ‖ x̄(ω(j, γ))
  end for
  return out
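A direct transcription of Algorithm 4.3.1 into Python may help to see the data flow. This is our own illustrative sketch (bit strings as Python strings, the design S as a list of position lists), not code from [TUZ01].

    from itertools import product

    def advice(x_bar, u, S, d, ell):
        # x_bar: EC codeword of length 2^ell, the truth table of x_bar(.)
        # u:     seed bit string of length log m + (d - ell)
        # S:     weak design; S[i] is a list of ell positions in [d]
        m = len(S)
        log_m = max((m - 1).bit_length(), 1)
        i = int(u[:log_m], 2)                  # first log m bits select i
        beta = u[log_m:log_m + d - ell]        # next d - ell bits give beta
        out = []
        for j in range(i):
            overlap = [p for p in S[i] if p in S[j]]
            for gamma in product("01", repeat=len(overlap)):
                y = ["0"] * d
                outside = (p for p in range(d) if p not in S[i])
                for p, bit in zip(outside, beta):
                    y[p] = bit                 # beta on positions [d] \ S_i
                for p, bit in zip(overlap, gamma):
                    y[p] = bit                 # gamma on positions S_i ∩ S_j
                omega = "".join(y[p] for p in S[j])   # y restricted to S_j
                out.append(x_bar[int(omega, 2)])      # one evaluation of x_bar
        return "".join(out)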

Lemma 4.16. Let S be a weak (m, d, ℓ, ρ)-design with m ≥ (kmax + d)/ε and ℓ = log(n̄), where n̄ = 2^ℓ is the length of the EC encoding. Then the function

A : {0,1}^n × {0,1}^d → {0,1}^{ma}

as described above is an explicit and injective strong condensing (n, 0, kmax) × (d) →20ε (ma, k′ ↦ k′) conductor with ma = ρm = ρ · (kmax + d)/ε.

Proof. We choose the reconstructive extracting conductor from Lemma 4.15 and apply Lemma 4.14. Hence, A is a strong condensing (n, kmax, kmax) × (d) →20ε (ma, k′ ↦ k′) conductor. Because Trevisan's conductor is explicit and Algorithm 4.3.1 can be computed efficiently, we can conclude that A is an explicit conductor.

That A is actually a strong condensing (n, 0, kmax) × (d) → (ma, k′ ↦ k′) conductor can be seen by analyzing the calculation steps which A performs: as in Trevisan's construction of a conductor, conductor A uses weak designs to calculate the projections y|_{Sj}, and as we already argued in the proof of Theorem 4.5, the conductor construction works for all input min-entropies k′ ∈ [0, kmax]. Thus, A is a strong condensing (n, 0, kmax) × (d) → (ma, k′ ↦ k′) conductor.

It remains to show that A is injective. We show this similarly as for the Trevisan conductor. Assume that A is not injective. This would lead to

∃ x′ ≠ x ∈ {0,1}^n : ∀ u ∈ {0,1}^d : A(x, u) = A(x′, u).

For each input u, function A constructs strings y and evaluates x̄(y|_{Sj}) for all j < i. Fix a j; then for A being non-injective, we have

x̄(ω(j, γ)) = x̄′(ω(j, γ)) for all γ ∈ {0,1}^{|Si∩Sj|},

and in particular

x̄(y|_{Sj}) = x̄′(y|_{Sj}).

We see that by iterating through all u ∈ {0,1}^d and γ ∈ {0,1}^{|Si∩Sj|}, we call the functions x̄(·) and x̄′(·) on all possible inputs in {0,1}^ℓ, and hence non-injectivity of A would imply equality of the truth tables of x̄(·) = EC(x) and x̄′(·) = EC(x′), which contradicts the distance property of the EC encoding because x ≠ x′. Hence, such an x′ cannot exist, and conductor A must be an injective function. □

The value of d. We choose ρ = e^{αℓ} > 1 for a constant α > 0. From Lemma 4.7 we know that there exists an efficiently computable weak design with d = ⌈ℓ/ln ρ⌉ · ℓ = ⌈1/α⌉ · ℓ. Recall that the length of the EC encoding is 2^ℓ. We will assume that ℓ = s · log n for a constant s and n sufficiently big. We get

ma = ρm = e^{αℓ} · (kmax + ℓ/α)/ε = n^{s·α·log e} · ((kmax + s · log(n)/α)/ε).

To reduce the output length ma to a length O((kmax)^{1+2δ}) for a constant δ, we will cascade the strong condensing conductor A with itself several times, as described at the end of this section. This leads to the following final strong condensing conductor CTUZ:

Theorem 4.17 (final strong condensing conductor). For every n, kmax ≤ n, a constant ε ∈ (0, 1) and a constant δ > 0, there exists an explicit strong injective condensing (n, 0, kmax) × (d) →ε (n′, k′ ↦ k′) conductor CTUZ with d = (2 + δ) · ℓ/α and n′ = ((kmax + ℓ/α)/ε)^{1+2δ}, where ℓ = s · log n for a constant s and α = δ/(2 + δ) · 1/(s · log e).

The proof is given at the end of this section.

Proof of Lemma 4.15. We present an adapted version of the proof given in [TUZ01]. We first prove that a next-bit predictor T exists if m ≥ (kmax + d)/ε.

To show that such a next-bit predictor T exists for the Trevisan extracting conductor ExtT(X, Ud), we use Lemma 2.16 in the following way: we know that H∞(ExtT(X, Ud)) ≤ H∞(X) + H∞(Ud) ≤ kmax + d must hold. Furthermore, to get the existence of a next-bit predictor T with success 1 − ε, we have to fulfill

(1/m) · H∞(ExtT(X, Ud)) ≤ (kmax + d)/m ≤ ε,

which is done by setting m ≥ (kmax + d)/ε. Therefore, a next-bit predictor T with success 1 − ε exists for ExtT as the extracting function.

It remains to show that

P_{x∈X, y∈Ud}[R^T(A(x, y), y) = x] ≥ 1 − 10ε

holds. We choose our advice function A to be the function described in Algorithm 4.3.1. Recall that the EC codeword length is 2^ℓ. The reconstruction function we describe will not need additional randomness, and thus we set r = 0. Let the reconstruction function R^T : {0,1}^a × {0,1}^d → {0,1}^n be the function performing the steps defined in Algorithm 4.3.2, where dist(·, ·) is the normalized Hamming distance. Note that the output out of A is indexed by 1 ≤ j < i and γ ∈ {0,1}^{|Si∩Sj|}.


Algorithm 4.3.2 Reconstruction function R^T(out, u)
  interpret u = u1‖u2‖u3 with |u1| = log m and |u2| = d − ℓ (the remaining bits u3 are ignored)
  set i ∈ [m] // defined by u1
  set β := u2
  for all a ∈ {0,1}^ℓ do
    // construct a new string y ∈ {0,1}^d as follows:
    set β at positions [d]\Si of y
    set a at positions Si of y
    set γj := y|_{Si∩Sj} for all 1 ≤ j < i
    set w_a := T_{X,y}(out_{1,γ1}, ..., out_{i−1,γ_{i−1}}) = T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1}))
  end for
  w := w_1 w_2 ... w_{2^ℓ} // view w as a bit string of length 2^ℓ
  if ∃ unique x ∈ {0,1}^n s.t. dist(EC(x), w) ≤ 0.1 then
    return x
  else
    return ⊥
  end if


From Lemma 2.16 we know that

P_{x∈X, y∈Ud, i}[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si})] ≥ 1 − ε.   (4.3.3)

Note that the values of x̄(y|_{S1}), ..., x̄(y|_{Si−1}) are known because they are described in the advice string.

Let G be the set of good pairs in (X, Ud). We say that (x, y) ∈ G if

P_i[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si})] ≥ 0.9.   (4.3.4)

We show by contradiction that P_{x∈X, y∈Ud}[(x, y) ∈ G] ≥ 1 − 10ε must hold. We have

P_{x,y,i}[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si})]
  = P_{x,y}[(x, y) ∈ G] · P_{x,y,i}[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si}) | (x, y) ∈ G]
  + P_{x,y}[(x, y) ∉ G] · P_{x,y,i}[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si}) | (x, y) ∉ G]
  ≤ P_{x,y}[(x, y) ∈ G] + P_{x,y}[(x, y) ∉ G] · P_{x,y,i}[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si}) | (x, y) ∉ G].

To derive a contradiction, we assume that P_{x,y}[(x, y) ∈ G] < 1 − 10ε, so that P_{x,y}[(x, y) ∉ G] ≥ 10ε must hold. Additionally, we can conclude from the definition of a good pair that P_{x,y,i}[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si}) | (x, y) ∉ G] < 0.9. This leads to

P_{x,y,i}[T_{X,y}(x̄(y|_{S1}), ..., x̄(y|_{Si−1})) = x̄(y|_{Si})] ≤ 0.9 + 0.1 · P_{x,y}[(x, y) ∈ G] < 0.9 + 0.1 · (1 − 10ε) = 1 − ε,

which contradicts Equation (4.3.3); therefore P_{x∈X, y∈Ud}[(x, y) ∈ G] ≥ 1 − 10ε must hold.

For every a ∈ {0,1}^ℓ we have w_a ≅ x̄(a) = x̄(y|_{Si}), where ≅ denotes that w_a is a guess for x̄(a), and hence, after guessing for all possible a, we have w ≅ x̄. Since we used for the Trevisan conductor an EC encoding with relative distance > 0.1, we can conclude that R^T outputs x with probability at least 1 − 10ε: for (x, y) ∈ G we know that dist(x̄, w) ≤ 0.1 and that x̄ is the encoding of a unique x within this Hamming ball. Hence, we have

P_{x,y}[R^T(A(x, y), y) = x] ≥ 1 − 10ε. □

Proof of Theorem 4.17. We show how to get the final strong condensing conductor of Theorem 4.17 by composing the strong condensing conductor A from Lemma 4.16 with itself several times. The problem with conductor A is its too long output length ma. Because the output still contains the k min-entropy, we can simply apply conductor A again to the ma-bit output and get a new output length < ma. This reduction is not for free: we add an additional error of ε with each iteration step. In the following Lemma 4.18 we present such a construction.

Lemma 4.18 (iterated condensing conductor cascading). If C = {Cn : {0,1}^n × {0,1}^{d(n)} → {0,1}^{m(n)}} is a family of (strong) condensing (n, kmin, kmax) × (d(n)) →ε (m(n), k′ ↦ k′) conductors, they can be composed repeatedly. Let n1, kmin, kmax and ε > 0 be given; then we define C(1) := C_{n1} and for i > 1

C(i) := C_{n′} ∘ C(i−1),

where n′ is the output length of C(i−1). If m(n) ≤ n^a · ω for a fixed a < 1 and ω > 0, d(n) = b · log n ≤ d(n1) for a constant b, and n′ ≤ n1 for all n′, then for all i ≥ 1, C(i) is a (strong) condensing (n1, kmin, kmax) × (d) →iε (m, k′ ↦ k′) conductor with

• m ≤ ω^{1/(1−a)} · n1^{a^i}

• d ≤ (i·b/(1−a)) · log(ω) + (b/(1−a)) · log n1.

Proof. A special version of the above iterated cascading was presented in [TUZ01] for the case kmin = kmax, and we use the same method to prove our generalized iterated cascading lemma.

We compose the conductors according to the cascading defined in Lemma 3.19; hence, the final error is i · ε. Let di be the seed length and ni the input length of C(i). Further, for i > 1 we have ni ≤ ω · n_{i−1}^a with a < 1. Thus

m ≤ ni ≤ ω · ω^a · ω^{a²} ··· ω^{a^i} · n1^{a^i} ≤ ω^{1/(1−a)} · n1^{a^i},

where we used Σ_{j=0}^{i} q^j ≤ Σ_{j=0}^{∞} q^j = 1/(1−q) for q < 1.

For the seed length d we get

d = Σ_{j=1}^{i} d(nj) ≤ b · Σ_{j=1}^{i} log(nj) ≤ b · Σ_{j=1}^{i} log(ω^{1/(1−a)} · n1^{a^j})
  ≤ (i·b/(1−a)) · log(ω) + b · Σ_{j=1}^{i} a^j · log n1
  ≤ (i·b/(1−a)) · log(ω) + (b/(1−a)) · log n1. □
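The recursion of Lemma 4.18 is easy to trace numerically; the following sketch (our own helper, with illustrative parameter values) iterates n_i ≤ ω·n_{i−1}^a and accumulates the seed length d = Σ_j b·log(n_j).

    import math

    def cascade(n1, a, omega, b, iters):
        # follows Lemma 4.18: n_i <= omega * n_{i-1}^a, d = sum_j b*log2(n_j)
        n, d = n1, 0.0
        for _ in range(iters):
            d += b * math.log2(n)   # seed used by the conductor at this level
            n = omega * n ** a      # output length becomes the next input length
        return n, d                 # final output length m and total seed d

    m, d = cascade(n1=2 ** 30, a=0.5, omega=2 ** 10, b=3, iters=5)
    print(m, d)  # m approaches omega^(1/(1-a)) = 2^20; d stays O(log n1)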

To get the desired conductor of Theorem 4.17, we cascade conductor A of Lemma 4.16 i times with itself and set the error of conductor A to εA = ε/i. Further, we have ℓ = s · log n for a constant s, and we set α = δ/(2 + δ) · 1/(s · log e) for a constant δ. Note that 3/2 ≤ ρ = e^{αℓ} is still fulfilled for n big enough. Furthermore, we set a = s · α · log e = δ/(2 + δ). In particular, we have ma = m(n) = n^{s·α·log e} · ((kmax + s · log n/α)/εA) = n^a · ω for ω := (kmax + s · log n/α)/εA. Let n be the initial input length n1 and ℓ := ℓ(n).

We choose i := log_{1/a}(2 log n/(δ log ω)) and therefore get

a^i = δ log ω/(2 log n).   (1)

Furthermore, we have

1/(1 − a) = 1/(1 − δ/(2 + δ)) = 1 + δ/2.   (2)


We get for the final output length n′:

n′ ≤ ω^{1/(1−a)} · n^{a^i} =(1) ω^{1/(1−a)} · n^{δ log ω/(2 log n)} = ω^{1/(1−a)} · 2^{(δ/2)·log ω} = ω^{1/(1−a)} · ω^{δ/2} =(2) ω^{1+δ/2} · ω^{δ/2} = ω^{1+δ}.

Additionally, we have ω = (kmax + ℓ/α)/εA = i · (kmax + ℓ/α)/ε. Therefore,

n′ ≤ ω^{1+δ} = ((kmax + ℓ/α)/εA)^{1+δ} = (i · (kmax + ℓ/α)/ε)^{1+δ} ≤ ((kmax + ℓ/α)/ε)^{1+2δ}.

For the value of d we get

d ≤ (i·s/α) · (1/(1 − a)) · log(ω) + (s/α) · (1/(1 − a)) · log n
  ≤(3) (s · log n/α) · (1/(1 − a)) + (1/(1 − a)) · (s · log n/α)
  = (2/(1 − a)) · ℓ/α =(2) (2 + δ) · ℓ/α,

where we set b = s/α and at Step (3) we used i = log_{1/a}(2 log n/(δ log ω)) ≤ 2 log n/(δ log ω).

This gives the desired conductor. □


5 Compositions of Basic Constructions

In the previous chapters we introduced basic conductor constructions as well as strong composition theorems. In this chapter, we discuss possible combinations of the basic conductors. In Section 5.1, we show how to reduce the error of the Trevisan construction of Section 4.1, and in Section 5.2 we obtain a conductor with almost ideal entropy loss with the help of the conductors of Sections 5.1 and 4.2. In Section 5.3, we present an improvement of this almost ideal conductor in the sense of using a shorter seed. Finally, in Section 5.4 we construct an expander graph fulfilling the requirements needed to be useful for the application described in Section 1.2, which is one of the main purposes of this thesis.

5.1 Iterated Concatenation of Trevisan's Conductor

In Section 4.1.3 we presented a concrete instantiation of Trevisan's conductor construction with entropy loss ∆(kmax) = kmax/2. We now show how to reduce the entropy loss by iteratively applying the concatenation composition of Section 3.3.2 to this conductor. The goal is to reduce the entropy loss to ∆(k′) = log(k′). The idea of iteratively applying Trevisan's conductor to get a small entropy loss was mentioned in [MST04]; we now give a detailed construction based on this idea. We have the strong extracting (n, 0, kmax) × (d1) →ε1 (m1, k1(k′)) conductor Ext1 with m1 = kmax/2, k1(k′) = (k′ − 3 log(m1/ε1) − 1)/ρ1 for k′ ∈ [0, kmax] and entropy loss ∆(kmax) = kmax/2. Clearly, Trevisan's conductor can also be instantiated as a strong extracting (n, 0, kmax/2 − 1) × (d1) →ε1 (m2 = kmax/4 − 1/2, k2(k′)) conductor Ext2 with k2(k′) = (k′ − 3 log(m2/ε1) − 1)/ρ2 for k′ ∈ [0, kmax/2 − 1]. If we concatenate these two conductors, we get according to Lemma 3.21 (with s = 1) a third extracting conductor Ext3 = Ext1‖Ext2, which is an (n, 0, kmax) × (d1 + d1) →ε3 (m3 = 3·kmax/4 − 1/2, k3(k′) = k′ − ∆3(k′)) conductor with maximal entropy loss ∆3(kmax) = kmax/4 + 1/2 and ε3 = 3ε1. This can be seen as follows: for the output length we have m3 = m1 + m2 = kmax/2 + kmax/4 − 1/2 = 3·kmax/4 − 1/2; for the maximal entropy loss, ∆3(kmax) = ∆2(kmax/2 − 1) + 1 = (kmax/2 − 1)/2 + 1 = kmax/4 + 1/2; and for the error, ε3 = 2·ε1 + ε1 = 3ε1. Now we again take a Trevisan extracting conductor which extracts up to ∆3(kmax) − 1 min-entropy as the second conductor in the concatenation and concatenate it with Ext3. We continue this iterated concatenation until at some step j we reach an entropy loss of ∆j(kmax) = log(kmax).

The question is now how many iteration steps we need to reduce the entropy loss to log(kmax), because at each iteration we increase the seed by the summand d1. By analyzing the iterated concatenation, we see that at step i we use as second conductor an (n, 0, ∆_{i−1}(kmax) − 1) × (d) →ε1 (mi, ki(k′)) conductor with

∆_{i−1}(kmax) − 1 = kmax/2^{i−1} − 1/2^{i−2},
m_i = kmax/2^i − 1/2^{i−1}.

Thus, for the number of needed iteration steps j we have

kmax/2^j − 1/2^{j−1} + 1 = log(kmax),
2^j = (kmax − 2)/(log(kmax) − 1),
j = log((kmax − 2)/(log(kmax) − 1)).

Note that if at step j we get ∆j(kmax) < log(kmax), we can remove some bits from mj to get a loss of exactly log(kmax).

For our approximations of the final error ε and the final seed length d = j · d1, we bound j ≤ log(kmax) and hence get

ε ≤ (2^{log(kmax)} − 1) · ε1 = (kmax − 1) · ε1

and

d ≤ log(kmax) · d1 = log(kmax) · ⌈ℓ/ln(ρ)⌉ · ℓ ≤ ψ(kmax, ε) · ℓ² · log(kmax) + ℓ · log(kmax),

where ψ and ℓ are as defined in Section 4.1.3.

Summarizing, we have constructed the following conductor:
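For concreteness, a few lines of Python (our own illustration, with arbitrary sample values) evaluate the number of iterations j and the resulting error and seed-length bounds from the formulas above.

    import math

    def iterated_trevisan(k_max, eps1, l, psi):
        # j = log2((k_max - 2)/(log2(k_max) - 1)) iteration steps, see above
        j = math.log2((k_max - 2) / (math.log2(k_max) - 1))
        eps = (k_max - 1) * eps1                    # final error bound
        d = math.log2(k_max) * (psi * l ** 2 + l)   # final seed-length bound
        return j, eps, d

    print(iterated_trevisan(k_max=2 ** 16, eps1=2 ** -40, l=140, psi=1.5))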

Theorem 5.1. For kmax ≤ n and a constant 2^{log(kmax) + log(kmax−1) − kmax/6 + 2/3} < ε < 1, there exists a strong injective extracting (n, 0, kmax) × (d) →ε (m, km(k′)) conductor ExtT with km(k′) = k′ − log(k′) for k′ ∈ [0, kmax] and seed length d = ψ(kmax, ε) · ℓ² · log(kmax) + ℓ · log(kmax).

5.2 Extracting Conductor with Almost Optimal Entropy Loss

In this section, we present a strong extracting conductor which is almost ideal with respect to its entropy loss and which will be used as a building block in the construction of Section 5.3. The construction of this section was originally discussed in [RRV99, MST04]. We present a more detailed construction and additionally give the exact values of the conductor parameters. The general idea for getting such an almost ideal extracting conductor is to first apply a strong extracting conductor Ext1 which works for big min-entropy, e.g., k > log n. The output string will then contain k − ∆1 min-entropy. But this means that ∆1 min-entropy of the input has not yet been extracted. Therefore, we apply a second strong extracting conductor Ext2, which works well for small min-entropies, to extract most of the remaining ∆1 min-entropy. Overall, this leads to a much smaller entropy loss ∆2 than if we had just applied the first conductor Ext1.


We now present a possible choice for Ext1 and Ext2 such that their concatenation according to Lemma 3.21 leads to a strong injective extracting (n, 0, kmax) × (d) →ε (m, k′ ↦ k′ − ∆) conductor with almost ideal constant entropy loss ∆ and k′ ∈ [0, kmax]. See Figure 5.1 for an illustration of the construction.

Choosing First Conductor. We instantiate Ext1 : {0,1}^n × {0,1}^{dT} → {0,1}^{mT} with the strong injective extracting (n, 0, kmax) × (dT) →εT (mT, kT(k′)) iterated Trevisan conductor with entropy loss ∆T(k′) from Section 5.1 for k′ ∈ [0, kmax] such that:

• ε1 = εT/(kmax − 1)

• mT = kmax − log(kmax)

• kT(k′) = k′ − log(k′)

• ∆T(k′) = log(k′)

• εT = ε/4

• dT = ψ · ℓ² · log(kmax) + ℓ · log(kmax), where we use ψ as short-hand for ψ(kmax, ε1)

Choosing Second Conductor. We know that there are ∆T(k′) bits of min-entropy of the n-bit input string which we have not yet extracted. Therefore, for the second strong extracting conductor Ext2, we use the same n-bit input string but now set the maximal min-entropy to ∆T(kmax) − 1. We instantiate Ext2 : {0,1}^n × {0,1}^{dSZ} → {0,1}^{mSZ} with the extracting (n, 0, kSZ) × (dSZ) →εSZ (mSZ, k′ ↦ k′ − ∆SZ) conductor from Section 4.2 as follows:

• kSZ = ∆T(kmax) − 1 = log(kmax) − 1

• εSZ = ε/2

• ∆SZ = 2 log(2/ε) + 2 = 2 log(1/ε) + 4

• mSZ = kSZ − ∆SZ = log(kmax) − 2 log(1/ε) − 5

• dSZ = 6 log(n) + 4kSZ = 6 log(n) + 4 log(kmax) − 4

Concatenating Ext1 with Ext2 according to Lemma 3.21 (setting s = 1) gives the following strong injective extracting conductor ExtRRV with dRRV = dT + dSZ and entropy loss ∆RRV = ∆SZ + s = ∆SZ + 1.

Theorem 5.2. For every constant 2^{log(kmax) + log(kmax−1) − kmax/6 + 8/3} < ε < 1 and kmax ≤ n, ExtRRV : {0,1}^n × {0,1}^d → {0,1}^m is a strong injective extracting (n, 0, kmax) × (d) →ε (m, km(k′)) conductor with km(k′) = k′ − ∆ for ∆ = 2 log(1/ε) + 5 and k′ ∈ [0, kmax], and seed length d = ψ(kmax, ε) · ℓ² · log(kmax) + ℓ · log(kmax) + 6 log(n) + 4 log(kmax) − 4.


[Figure 5.1: Construction of an extracting conductor with almost optimal entropy loss. The n-bit input is fed to Ext_T (seed dT, error εT = ε/4) and Ext_SZ (seed dSZ, error εSZ = ε/2), with overall error ε = 2·εT + εSZ and entropy loss ∆ = ∆SZ + 1 = 2 log(1/ε) + 5.]

A Possible Value for the Seed Length d. If we use the EC code from Section 4.1.3, we have

ℓ ≈ 6 log n + 4 log(1/ε),

and hence, for d we get

d = ψ · (6 log(n) + 4 log(1/ε))² · log(kmax) + (6 log(n) + 4 log(1/ε)) · log(kmax) + 6 log(n) + 4 log(kmax) − 4
  = 36 · ψ · log²(n) · log(kmax) + (6 + 48 · ψ · log(1/ε)) · log(n) · log(kmax) + 6 log(n) + ((16 · ψ + 4) · log(1/ε) + 4) · log(kmax) − 4.
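Reusing the psi helper sketched in Section 4.1.3 above, this seed length can be evaluated numerically; again the function name and the sample parameters are our own.

    import math

    def d_rrv(n, k_max, eps, psi):
        # seed length of Ext_RRV for the EC code of Section 4.1.3,
        # with l ~ 6*log2(n) + 4*log2(1/eps)
        l = 6 * math.log2(n) + 4 * math.log2(1 / eps)
        d_T = psi * l ** 2 * math.log2(k_max) + l * math.log2(k_max)
        d_SZ = 6 * math.log2(n) + 4 * math.log2(k_max) - 4
        return d_T + d_SZ  # Theta(log^2(n) * log(k_max))

    print(d_rrv(n=2 ** 20, k_max=2 ** 16, eps=2 ** -10, psi=1.5))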

We will use this instantiation of ExtRRV later in Section 5.4.

5.3 Improved Conductor by First Condensing

In this section, we introduce a strong injective extracting conductor ExtMST : {0,1}^n × {0,1}^d → {0,1}^m due to [MST04] and give its concrete parameters. It is an improvement of the conductor presented in the previous Section 5.2. The main idea is to first apply a strong injective condensing conductor to the input x ∈ {0,1}^n to preserve the min-entropy while reducing the string length to n′, and second, to apply a strong injective extracting conductor with almost ideal entropy loss. As the condensing conductor, we use CTUZ from Section 4.3.2, and as the extracting conductor, we choose ExtRRV from Section 5.2. Given the condensing conductor CTUZ and the extracting conductor ExtRRV, we can describe the extracting conductor ExtMST as follows. We interpret the input y ∈ {0,1}^d of ExtMST as a concatenation of two strings y1 and y2, where y1 ∈ {0,1}^{dTUZ} is used for the condensing conductor CTUZ and y2 ∈ {0,1}^{dRRV} is used for the extracting conductor ExtRRV. By applying Lemma 3.19, the conductor ExtMST : {0,1}^n × {0,1}^d → {0,1}^m is defined as

ExtMST(x, y1‖y2) := ExtRRV(CTUZ(x, y1), y2).

The overall construction is illustrated in Figure 5.2. Note that because CTUZ and ExtRRV are both injective, the final conductor ExtMST is injective, too.

[Figure 5.2: Conductor ExtMST. The seed is split into y1 ∈ {0,1}^{dTUZ} and y2 ∈ {0,1}^{dRRV}; CTUZ condenses the n-bit input to n′ bits, then ExtRRV extracts k′ − ∆ bits.]

We now instantiate the different parameters of the conductors such that we get a conductor with almost optimal entropy loss, as the conductor of Section 5.2, but needing a shorter seed. For the EC algorithm used in the construction of ExtRRV (for the Trevisan subpart), we use the EC encoding introduced in Section 4.1.3 and set EC′ : {0,1}^{n′} → {0,1}^{n̄′} for ExtRRV. In the construction of CTUZ we need an EC encoding with relative distance > 0.1. We will assume such an encoding of the form EC : {0,1}^n → {0,1}^{n^s} for a constant s¹. We fix the constants ε and δ, choose a kmax ≤ n, and see that the total length of the seed is d = dTUZ + dRRV.

For the condensing (n, 0, kTUZ) × (dTUZ) → (mTUZ, k′ ↦ k′) conductor CTUZ we set

• εTUZ = ε/8 (we will show why)

• kTUZ = kmax

• ℓ = log(n^s) = s log(n)

• mTUZ = n′ = ((kmax + ℓ/α)/εTUZ)^{1+2δ}

• dTUZ = (2 + δ) · ℓ/α,

and for the extracting (n′, 0, kRRV) × (dRRV) → (mRRV, k′ ↦ k′ − ∆RRV) conductor ExtRRV we set

• εRRV = 5 · ε/8 (we will show why)

¹ For example, the encoding of Section 4.1.3 has this form and relative distance more than 0.1 if we set σ = 2.


• kRRV = kmax

• ℓ′ = log(n̄′)

• ∆RRV = ⌈2 log(1/εRRV) + 5⌉ = 2 log(1/ε) + 7

• dRRV = ψ · (ℓ′)² · log(kmax) + ℓ′ + 6 log(n′) + 4 log(kmax) − 4

• mRRV = kmax − ∆RRV

Note that the output length n′ can be longer than n if kmax gets too big. But this is no problem, as we will see later in the calculation of the needed seed length.

First, we analyze why we have chosen εTUZ = ε/8 and εRRV = 5ε/8. We cannot choose εTUZ = εRRV = ε/2 as for the normal conductor composition, because ExtRRV uses the n′-bit string with error εTUZ twice: once for calling ExtT with internal error εTi from Section 4.1 and once for calling CSZ with internal error εSZi from Section 4.2. As illustrated in Figure 5.3, we see that by setting εTi = ε/8 and εSZi = 3·ε/8 we get for the overall error εMST

εMST = 2 · (εTUZ + εTi) + (εTUZ + εSZi) = 2 · (ε/8 + ε/8) + ε/8 + 3·ε/8 = ε,

as desired. Hence, εRRV = 2·εTi + εSZi = 5·ε/8.

as wished. Hence, εRRV = 2 · εT i + εSZi = 5 · ε/8.

k′ −∆T

dT

dT

dT

dSZ

dTUZ

dTUZ

dTUZ

dSZ∆T −∆SZ − 1

n

n′

T UZ

T SZ

dSZ

dRRV

RRV

k′

εT = ε/4 εSZ = ε/2

k′

∆ = ∆SZ + 1 = 2 log(1/εRRV ) + 5 ≤ 2 log(1/ε) + 7 ε = 2 · εT + εSZ

εTUZ = ε/8

εTi = ε/8 εSZi = 3 · ε/8

εRRV = 5 · ε/8

Figure 5.3: Detailed construction


We now give the calculations for the different d's. We start with the value of dTUZ:

dTUZ = ((2 + δ)/α) · ℓ = (s · (2 + δ)/α) · log(n).

We calculate the value of dRRV step by step, writing

dRRV = dT + dSZ with dT := ψ · (ℓ′)² · log(kmax) + ℓ′ and dSZ := 6 log(n′) + 4 log(kmax) − 4.

We know that

n′ = ((8 · (kmax + ℓ/α))/ε)^{1+2δ}.

This gives for the value of ℓ′:

ℓ′ ≤ 6 log(n′) + 4 log(1/εTi) = 6 log(n′) + 4 log(1/ε) + 12,

with

6 log(n′) = 6(1 + 2δ) · log(kmax + ℓ/α) + 6(1 + 2δ) · log 8 + 6(1 + 2δ) · log(1/ε)
          = 6(1 + 2δ) · log(kmax + ℓ/α) + 6(1 + 2δ) · log(1/ε) + 18(1 + 2δ).   (1)

Furthermore, we have ℓ = s · log(n). This gives

log(kmax + ℓ/α) = log(kmax + (s/α) · log(n)) ≤* log(kmax + (s/α) · kmax) = log(kmax) + log(1 + s/α),   (2)

where at * we assumed that the upper bound kmax is greater than log n.

Inserting (2) into (1) gives

6 log(n′) ≤ 6(1 + 2δ) · log(kmax) + 6(1 + 2δ) · (log(1 + s/α) + log(1/ε) + 2),

and hence, for the value of ℓ′,

ℓ′ ≤ 6(1 + 2δ) · log(kmax) + 6(1 + 2δ) · (log(1 + s/α) + log(1/ε) + 2) + 4 log(1/ε) + 12.

We now collect all constants into a new constant ϕ:

ϕ := 6(1 + 2δ) · (log(1 + s/α) + log(1/ε) + 2) + 4 log(1/ε) + 12,

hence

ℓ′ ≤ 6(1 + 2δ) · log(kmax) + ϕ

and

6 log(n′) ≤ 6(1 + 2δ) · log(kmax) + ϕ − 4 log(1/ε) − 12.

We get

dT = ψ · (ℓ′)² · log(kmax) + ℓ′
   ≤ ψ · (6(1 + 2δ) · log(kmax) + ϕ)² · log(kmax) + 6(1 + 2δ) · log(kmax) + ϕ
   = 36 · ψ · (1 + 2δ)² · log³(kmax) + 12 · ψ · ϕ · (1 + 2δ) · log²(kmax) + (ψ · ϕ² + 6(1 + 2δ)) · log(kmax) + ϕ


and

dSZ = 6 log(n′) + 4 log(kmax) − 4 ≤ 6(1 + 2δ) · log(kmax) + ϕ − 4 log(1/ε) − 12 + 4 log(kmax) − 4
    = (6(1 + 2δ) + 4) · log(kmax) + ϕ − 4 log(1/ε) − 16
    ≤ (6(1 + 2δ) + 4) · log(kmax) + ϕ,

where at the last step we used 4 log(1/ε) > 0. In total, we get for dRRV

dRRV = ψ · (ℓ′)² · log(kmax) + ℓ′ + 6 log(n′) + 4 log(kmax) − 4
     ≤ 36 · ψ · (1 + 2δ)² · log³(kmax) + 12 · ψ · ϕ · (1 + 2δ) · log²(kmax) + (ψ · ϕ² + 12(1 + 2δ) + 4) · log(kmax) + 2ϕ.

Finally, we have for the total seed length needed for the ExtMST conductor:

dMST = dTUZ + dRRV
     ≤ (s · (2 + δ)/α) · log(n) + 36 · ψ · (1 + 2δ)² · log³(kmax) + 12 · ψ · ϕ · (1 + 2δ) · log²(kmax) + (ψ · ϕ² + 12(1 + 2δ) + 4) · log(kmax) + 2ϕ
     ∈ O(log n + log³(kmax)).

Summarizing, we get the following strong injective extracting conductor ExtMST, which needs seed length O(log n + log³(kmax)) instead of the O(log³ n) we would get if we had just used ExtRRV. As long as kmax is much smaller than n, this is an improvement.

Theorem 5.3. Let EC : {0,1}^n → {0,1}^{n^s} be an EC code with relative distance > 0.1. For every kmax ≤ n and all constants ε, δ > 0, there exists an explicit and injective strong extracting (n, 0, kmax) × (d) →ε (m, km(k′)) conductor ExtMST with km(k′) = k′ − ∆ for k′ ∈ [0, kmax] and ∆ = 2 · log(1/ε) + 7, and seed length

d = d(n, kmax, ε, δ)
  = (s · (2 + δ)/α) · log(n) + 36 · ψ · (1 + 2δ)² · log³(kmax) + 12 · ψ · ϕ · (1 + 2δ) · log²(kmax) + (ψ · ϕ² + 12(1 + 2δ) + 4) · log(kmax) + 2ϕ,

where ϕ is a constant with the value

ϕ = 6(1 + 2δ) · (log(1 + s/α) + log(1/ε) + 2) + 4 log(1/ε) + 12

and α is a constant with

α = δ/(2 + δ) · 1/(s · log e).

Conductor from Theorem 5.3 as Expander Graph. From Theorem 3.27, we know that this explicit injective strong extracting conductor is an expander graph with left-degree 2^d. Unfortunately, this expander graph cannot be used for the application introduced in Section 1.2, because there we want kmax ∈ Θ(n), which leads to a super-polynomial left-degree in n, whereas we want an expander graph with polynomial left-degree. Therefore, in the next section we introduce a construction of a somewhere-conductor based on this conductor, which will lead to an expander graph with polynomially bounded left-degree.


5.4 Final Explicit Construction of an Injective Unbalanced Bipartite Expander Graph

In this section, we give a concrete construction of an expander graph fulfilling the requirements to be applicable for the domain extension of public random functions introduced in Section 1.2. In particular, the expander graph will have a polynomially bounded left-degree, and Kmax = 2^{kmax} will be of the order 2^{Θ(n)}. We present an adapted construction due to [MT07, BJST03] and give detailed values to show that the left-degree is poly(n) even if Kmax = 2^{Θ(n)}. We use the somewhere-conductor construction of Section 3.3.3 and instantiate C1 by applying Theorem 5.3 and C2 with the conductor from Theorem 5.2. Note that in [MT07] both conductors C1 and C2 were instantiated with the conductor of Theorem 5.3. Our construction gives asymptotically the same results, but is simpler. For C2, we set the maximal input min-entropy to k2 = kmax = (1 − η)n for some constant η, and set the maximal input min-entropy for C1 to k1 = d2 + ∆ for a constant ∆. Further, we set ν = 3·log(9n) to have σ < 1, fix the constant ε, set a1 = a2 = −∆ = −2 log(1/ε) − 5, and have m = k2 + d2 − ∆.

Hence, for d2, we get

d2 = d(n, k2 = (1 − η)n, ε)
   = 36 · ψ · log²(n) · log((1 − η)n) + (6 + 48 · ψ · log(1/ε)) · log(n) · log((1 − η)n) + 6 log(n) + ((16 · ψ + 4) · log(1/ε) + 4) · log((1 − η)n) − 4
   ∈ O(log³(n)),

and for d1 we get:

d1 = d(n, k1 = d2 + ∆, ε, δ)
   = (s · (2 + δ)/α) · log(n) + 36 · ψ · (1 + 2δ)² · log³(d2) + 12 · ψ · ϕ · (1 + 2δ) · log²(d2) + (ψ · ϕ² + 12(1 + 2δ) + 4) · log(d2) + 2ϕ
   ∈ O(log n),

where ϕ and α are constants with the values

ϕ = 6(1 + 2δ) · (log(1 + s/α) + log(1/ε) + 2) + 4 log(1/ε) + 12

and α = δ/(2 + δ) · 1/(s · log e) as in Theorem 5.3, and ψ is the function described in Theorem 5.2. Further, we have

C1 : {0,1}^n × {0,1}^{d1} → {0,1}^{d2}, k1 = d2 + ∆,
C2 : {0,1}^n × {0,1}^{d2} → {0,1}^m, k2 = k = (1 − η)n,

and therefore

C : {0,1}^n × {0,1}^{d1} → {0,1}^{n·(d1+d2+m)}

is a σ-somewhere (n, 0, kmax) × (d1) →2ε (n·(d1 + d2 + m), km(k′)) conductor with km(k′) = k′ + d1 − 2∆ − ν and σ < 1, and its construction needs only d1 = O(log n) truly random bits.

To get the final expander graph, we can apply Lemma 3.29 to the above somewhere conductor. We see that the left-degree D of the expander is n · 2^{d1}.


For the expander parameter Kmax, we have Kmax = 2^{kmax} = 2^{(1−η)n}, and the expansion factor is γ = 2^{d1−2∆−ν} · (1 − 2ε). Note that the final expander graph is injective, because all the conductors used to compose the somewhere conductor are injective.

If we take a closer look at the value of d1, we see that the first summand (s·(2 + δ)/α)·log n specifies the highest degree of the polynomial D(n) = n · 2^{d1}. Thus, d1 = (s·(2 + δ)/α)·log n + f(n) for a function f ∈ o(log(n)). In particular, the highest degree of D(n) does not depend on the chosen error ε, the constant η, or the expansion factor γ.

Estimation of the Highest Degree of the Polynomial D(n). We have (s·(2 + δ)/α)·log n = s² · log e · ((2 + δ)²/δ) · log n for α = δ/(2 + δ) · 1/(s·log e) and an EC encoding {0,1}^n → {0,1}^{n^s} with relative distance > 0.1 used in the construction of C1. Let us now choose a concrete EC encoding which satisfies the requirements for the construction of C1.

We take the EC code used in Section 4.1.3 and set its parameter σ = 2 to get a relative distance of 1/4. Thus, the codeword length is ≈ 4n², and hence s ≈ 2. Furthermore, (2 + δ)²/δ is minimal for δ = 2. We get s² · log e · ((2 + δ)²/δ) · log n ≥ 8 · 4² · log e · log n ≈ 184.5 · log n. For the left-degree D, we have:

D ≥ n · 2^{128·log(e)·log(n)} ≈ n^{185.5}.
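The arithmetic of this estimate (and of the s = 1 variant discussed next) can be reproduced in a couple of lines; this only re-evaluates the numbers as stated in the text.

    import math

    log_e = math.log2(math.e)
    print(8 * 4 ** 2 * log_e)  # 128 * log2(e) ~ 184.66, the exponent above
    print(8 * log_e)           # 8 * log2(e) ~ 11.54, the exponent for s = 1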

Thus, for practical purposes it is not really feasible to use the above construction of an expander graph. But there is still potential to further improve the degree of the polynomial D(n); in particular, the right choice of the EC encoding plays a big role. We know that the value of s is lower bounded by 1. Hence, for a more compact EC encoding than the one in Section 4.1.3, we would get s² · log e · ((2 + δ)²/δ) · log n ≥ 8 · log e · log n ≈ 11.5 · log n and hence

D ≥ n · 2^{8·log(e)·log(n)} ≈ n^{12.5},

which is still not practical, but much better than what can be achieved with the EC code of Section 4.1.3. Unfortunately, we did not have time left to find an EC code with relative distance > 0.1 and encoding length n̄ such that log(n̄) = (1 + ξ) · log(n) for a small constant ξ ≪ 1. Nevertheless, even with s = 1, we get Ω(n^{12.5}) for the left-degree. Thus, one would need a strong condensing conductor which has a shorter seed length than the conductor of Section 4.3. But to our knowledge, there is no strong condensing conductor construction with shorter seed than the conductor of Section 4.3.

Overall, we showed:

Lemma 5.4. For every polynomially bounded γ and every constant η ∈ (0, 1), and all functions m (polynomially bounded in n), there exists an explicit injective family of (2^n, 0, K) × (D) →γ (2^m) expander graphs G = (V1, V2, E) with K = 2^{n(1−η)} and left-degree D polynomially bounded in n.


6 Graph Construction with Substring Selection

The previous chapters introduced quite complex constructions of expander graphs using conductors, and the final expander graph construction in Section 5.4 has a left-degree which is not really practical. In this chapter, we analyze a candidate for expander graph constructions which is simple and would lead to a small left-degree. The idea is to select D substrings of length n from a longer string of length rn and interpret the substrings as the neighbors of the longer string. In particular, we get a graph G = (V1, V2, E) with V1 = {0,1}^{rn} and V2 = {0,1}^n, where r is an integer, and the graph has left-degree D. We show that any such construction based on substring selection cannot lead to a (2^{rn}, Kmin, Kmax) × (D) →γ (2^n) expander graph with a useful expansion factor γ if we require a big Kmax ∈ 2^{Θ(n)}.

6.1 Candidate for Expander Graph Construction

We start with a simple instantiation of the described idea of substring selection. We choose r = 2 and D = 2, and select the two n-bit neighbors as follows: for every left vertex v ∈ {0,1}^{2n}, we split its 2n-bit representation into two halves of n bits each, where the first n bits of the representation are interpreted as the first neighbor of vertex v and the second n bits represent the second neighbor of v. An illustration is given in Figure 6.1.

[Figure 6.1: Simple graph construction; a 2n-bit left vertex is split into two n-bit neighbors.]

Before we analyze what expansion factor we can hope for, we introduce a formal notation for the selection function. Let X = {x1, x2, ..., xm} be a set of strings xi ∈ {0,1}^{2n}. Further, let P = {P1, P2, ..., Pℓ} be a family of projections Pi ⊂ [2n] with |Pi| = n for all i. Pi denotes which n bits are taken from a string xj for the i-th neighbor, and we denote X|_{Pi} := ⋃_{j=1}^{m} {xj|_{Pi}}. Hence, the i-th neighbor of x is x|_{Pi}, and we have Γ(X) := ⋃_{i=1}^{ℓ} X|_{Pi}.

For our simple instantiation of substring selection, we have ℓ = 2 and P = {P1, P2} with P1 = {1, 2, ..., n} and P2 = {n+1, n+2, ..., 2n}. To give a non-trivial upper bound on the expansion factor γ, we choose √Kmax distinct n-bit strings and declare them the neighbor set Γ(X). If we take all possible pairwise combinations of these √Kmax distinct n-bit strings, we get (√Kmax)² = Kmax distinct 2n-bit strings, which form our set X. Thus, the expansion factor is at most

γ ≤ |Γ(X)|/|X| = √Kmax/Kmax = 1/√Kmax.

If we want Kmax ∈ 2^{Θ(n)}, as required in the application of expander graphs described in Section 3.6, we get an exponentially small expansion factor γ ∈ 2^{−Ω(n)}. Such an expansion factor is not usable, in particular not for the application in Section 3.6.
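This counterexample is easy to verify experimentally for tiny parameters; the sketch below (our own toy code) builds X as all pairs over a small set of n-bit halves and measures |Γ(X)|/|X|.

    from itertools import product

    n = 8
    halves = [format(i, "0{}b".format(n)) for i in range(16)]  # 16 distinct n-bit strings
    X = [a + b for a, b in product(halves, repeat=2)]          # |X| = 16^2 = 256

    # neighbors under P1 = first n bits and P2 = last n bits
    neighbors = {x[:n] for x in X} | {x[n:] for x in X}
    print(len(neighbors) / len(X))  # 16/256 = 1/sqrt(|X|) = 0.0625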

Lemma 6.1. There exists a set X with |X| = m, for any m ≤ 2^n, such that γ ≤ 1/√m for P = {P1, P2} with P1 = {1, ..., n} and P2 = {n+1, ..., 2n}.

In the next section, we show that even with a longer input string of length r·n for some r > 1, and even if the n bits may be chosen arbitrarily for each neighbor and more than two substrings may be used, we still get an expansion factor that is exponentially small in n; hence, the graph construction based on substring selection does not give usable expander graphs. In particular, we show the following theorem.

Theorem 6.2. Let r > 1 and K ∈ 2^{Θ(n)} be constants. Then there exists no family P = {P1, P2, ..., Pℓ} of projections Pi ⊂ [rn] with |Pi| = n for all i, for any integer ℓ, which leads to an expander graph with upper bound Kmax = K that does not have an expansion factor γ exponentially small in n.

6.2 Impossibility Proof

In this section, we show that increasing the number of neighbors for each vertex and allowing the n-bit substrings to be selected arbitrarily does not improve the construction: we still get expander graphs with exponentially small expansion factor. To show a non-trivial upper bound on the expansion factor γ, we assume that V1 = {0,1}^{rn} for an integer r > 1 and define a special set X ⊂ {0,1}^{rn} of rn-bit strings such that this set has only few neighbors. We instantiate X as the set of all strings in {0,1}^{rn} with at most k bits set to 1. We define k := φ·rn for a constant φ such that k ≤ n. We restrict k to at most n because the number of neighbors cannot be bigger than 2^n. Furthermore, we have r ≥ 2 and hence φ ≤ 1/2.

The number of all rn-bit strings with at most k ones is (writing C(N, K) for the binomial coefficient)

|X| = Σ_{i=0}^{k} C(rn, i) = Σ_{i=0}^{φ·rn} C(rn, i) ≥ C(rn, φ·rn) ≥ 2^{rn·h(φ)} / (e · √(2πφ(1−φ)rn)),

where at the last step we applied Lemma 2.21, and h(·) is the binary entropy function.

Now, to estimate the expansion factor, we need an upper bound on the number of possible neighbors Γ(X) of the set X. We know that every neighbor of X can only have at most k bits set to 1, because at most k one-bits are contained in the string representation of every vertex in X. Let M be the set of n-bit strings with at most k bits set to 1; thus, we have |Γ(X)| ≤ |M|. To estimate the size of M, we distinguish three cases:

Case 1: rφ ≤ 1/2. We can apply Lemma 2.20 for bounding:

|M| = Σ_{i=0}^{φ·rn} C(n, i) ≤ 2^{n·h(φr)}.

Hence, we get the following upper bound on the expansion factor γ:

γ ≤ |Γ(X)|/|X| ≤ |M|/|X| ≤ √(2πφ(1−φ)rn) · e · 2^{n·h(φr)} / 2^{rn·h(φ)} = √(2πφ(1−φ)rn) · e · 2^{n·(h(rφ) − r·h(φ))}.


Because of Lemma 2.2, we have h(rφ) < r·h(φ), and thus the expansion factor is exponentially small in n:

γ ∈ 2^{−Ω(n)}.
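A quick numeric evaluation of the Case 1 bound (our own check, with arbitrary sample values r = 2 and φ = 0.2) shows how fast it decays:

    import math

    def h(p):  # binary entropy function
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    r, phi = 2, 0.2  # r*phi = 0.4 <= 1/2, so Case 1 applies
    for n in (64, 128, 256):
        bound = (math.sqrt(2 * math.pi * phi * (1 - phi) * r * n) * math.e
                 * 2 ** (n * (h(r * phi) - r * h(phi))))
        print(n, bound)  # decreases exponentially in n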

Case 2: rφ > 1/2 and rφ ≠ 1. This time, we have to bound |M| differently, because Lemma 2.20 is not directly applicable. For the size of M, we count the strings in which the number of ones is at most n/2 and add the strings in which the number of zeros lies between n − k = n − φrn and n/2 − 1. This leads to

|M| = Σ_{i=0}^{n/2} C(n, i) + Σ_{i=0}^{n/2−1} C(n, i) − Σ_{i=0}^{n−φrn} C(n, i)
    = 2 · Σ_{i=0}^{n/2} C(n, i) − C(n, n/2) − Σ_{i=0}^{(1−φr)n} C(n, i)
    = 2^n − Σ_{i=0}^{(1−φr)n} C(n, i)
    ≤ 2^n − 2^{n·h(1−φr)} = 2^n · (1 − 2^{n·(h(1−φr)−1)}).

Hence, we get

γ ≤ |Γ(X)|/|X| ≤ |M|/|X| ≤ √(2πφ(1−φ)rn) · e · 2^n · (1 − 2^{n·(h(1−φr)−1)}) / 2^{rn·h(φ)}
  = √(2πφ(1−φ)rn) · e · (1 − 2^{n·(h(1−φr)−1)}) · 2^{n·(1 − r·h(φ))},

where we used the fact that Σ_{i=0}^{n/2} C(n, i) = Σ_{i=n/2}^{n} C(n, i) and applied Lemma 2.20 again. We still have h(rφ) < r·h(φ), and because rφ > 1/2, we have r·h(φ) > 1. Thus, the expansion factor is, as in Case 1, in 2^{−Ω(n)}.

Case 3: rφ = 1. Finally, we consider the last possible case, rφ = 1, which we omitted in Case 2 because there it would have led to the wrong statement |M| ≤ 0. We have k = rφn = n, and hence the set M is just the set of all strings in {0,1}^n; thus,

|M| = 2^n.

Therefore, we get for the expansion factor γ:

γ ≤ |Γ(X)|/|X| ≤ |M|/|X| ≤ √(2πφ(1−φ)rn) · e · 2^n / 2^{rn·h(φ)} = √(2πφ(1−φ)rn) · e · 2^{n·(1 − r·h(φ))}.

As in Case 2, we have r·h(φ) > 1, and hence the expansion factor γ is again exponentially small in n.

Overall, in all three possible cases we get γ ∈ 2^{−Ω(n)} if we try to construct an expander graph with the idea of selecting n-bit substrings while allowing Kmax to be of the order 2^{Θ(n)}. Hence, these simple constructions are not useful for the application described in Section 1.2: although we achieve a small left-degree, we get a bad expansion factor.


7 Conclusions and Outlook

In this thesis, we analyzed an expander graph construction which, to our knowledge, is the best construction known regarding the size of the left-degree, and we calculated the concrete value of the left-degree. Although the left-degree is small in complexity-theoretic terms, its concrete value is too big for the expander graph construction to be of practical interest. There are possible improvements for the size of the left-degree which we did not analyze: for example, finding a better error-correcting code used in the construction, which has the needed properties but gives shorter encodings, or finding a strong condensing conductor for the conductor construction in Section 5.3 which has a shorter seed length than the strong condensing conductor due to [TUZ01].

Another parameter of the expander graph construction which we did not investigate, but which could be of interest, is the concrete running-time complexity of the expander graph function calculating the neighbors. In particular, the constructions of Sections 4.1 and 4.3 use weak designs as basic building blocks, and the time complexity of their construction could be interesting.

In Chapter 6, we analyzed an alternative construction of expander graphs which does not rely on conductors. Unfortunately, this alternative construction based on substring selection does not lead to useful expander graphs. But because expander graphs have in general weaker properties than conductors, it is possible that a simple construction not relying on conductors would give expander graphs with a much smaller left-degree. One possible construction which could be considered further is a construction based on linear transformations: assume that we want to construct an unbalanced bipartite expander graph G = (V1, V2, E) with |V1| = 2^{rn}, |V2| = 2^n and left-degree D, where r > 1. Then we could see the function calculating the neighbors as a function F^r × [D] → F, where F is the field GF(2^n). Or, described as a matrix operation: given a matrix A ∈ F^{D×r} and a vector v ∈ F^r representing a vertex v ∈ V1, the neighbors of v are the entries of A·v. The question is now what kind of matrices A lead to a good expansion parameter. Unfortunately, we did not have the time to investigate this construction proposal further, but it seems promising.
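To make this proposal concrete, here is a small sketch of such a neighbor function for the toy field GF(2^8), using the AES reduction polynomial x^8 + x^4 + x^3 + x + 1. The matrix A and all parameters are illustrative, and nothing is claimed here about the expansion of this particular choice.

    from functools import reduce

    def gf256_mul(a, b):
        # carry-less multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1
        res = 0
        for _ in range(8):
            if b & 1:
                res ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11B  # reduce by the AES polynomial
        return res

    def neighbors(v, A):
        # v: vertex as a vector in F^r; A: D x r matrix over F = GF(2^8);
        # the D neighbors of v are the entries of the product A*v
        return [reduce(lambda x, y: x ^ y,
                       (gf256_mul(a_ij, v_j) for a_ij, v_j in zip(row, v)))
                for row in A]

    # example: r = 2, D = 3, so each 16-bit left vertex gets three 8-bit neighbors
    A = [[1, 2], [3, 1], [5, 7]]
    print(neighbors([0xAB, 0xCD], A))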


Bibliography

[AGHP90] N. Alon, O. Goldreich, J. Hastad, and R. Peralta. Simple constructions of almost k-wise independent random variables. In Proceedings of the 31st Annual Symposium on Foundations of Computer Science (FOCS '90), volume 2, pages 544–553, 1990.

[BJST03] A. Baltz, G. Jäger, A. Srivastav, and A. Ta-Shma. An explicit construction of sparse asymmetric connectors. Manuscript, 2003.

[CM97] C. Cachin and U. Maurer. Unconditional security against memory-bounded adversaries. In Advances in Cryptology – CRYPTO '97, pages 292–306, 1997.

[CRVW02] M. Capalbo, O. Reingold, S. Vadhan, and A. Wigderson. Randomness conductors and constant-degree lossless expanders. In STOC '02: Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 659–668, 2002.

[IZ89] R. Impagliazzo and D. Zuckerman. How to recycle random bits. In FOCS '89: IEEE Symposium on Foundations of Computer Science, pages 248–253, 1989.

[Mau92] U. Maurer. Conditionally-perfect secrecy and a provably-secure randomized cipher. Journal of Cryptology, 5(1):53–66, 1992.

[MST04] T. Moran, R. Shaltiel, and A. Ta-Shma. Non-interactive timestamping in the bounded storage model. In Advances in Cryptology – CRYPTO '04, volume 3152 of Lecture Notes in Computer Science, pages 460–476, 2004.

[MT07] U. Maurer and S. Tessaro. Domain extension of public random functions: beyond the birthday barrier. In Advances in Cryptology – CRYPTO '07, volume 4622 of Lecture Notes in Computer Science, pages 187–204, 2007.

[NT99] N. Nisan and A. Ta-Shma. Extracting randomness: a survey and new constructions. Journal of Computer and System Sciences, 58(1):148–173, 1999.

[NW94] N. Nisan and A. Wigderson. Hardness vs randomness. Journal of Computer and System Sciences, 49(2):149–167, 1994.

[RRV99] R. Raz, O. Reingold, and S. Vadhan. Extracting all the randomness and reducing the error in Trevisan's extractors. In STOC '99: Proceedings of the 31st Annual ACM Symposium on Theory of Computing, pages 149–158, 1999.

[RT00] J. Radhakrishnan and A. Ta-Shma. Bounds for dispersers, extractors, and depth-two superconcentrators. SIAM Journal on Discrete Mathematics, 13(1):2–24, 2000.

[SZ98] A. Srinivasan and D. Zuckerman. Computing with very weak random sources. SIAM Journal on Computing, pages 1433–1459, 1998.


[Tre98] L. Trevisan. Constructions of near-optimal extractors using pseudo-random generators. Electronic Colloquium on Computational Complexity, technical reports, 1998.

[TUZ01] A. Ta-Shma, C. Umans, and D. Zuckerman. Lossless condensers, unbalanced expanders, and extractors. In STOC '01: Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, pages 143–152, 2001.

[Vad98] S. Vadhan. Extracting all the randomness from a weakly random source. Electronic Colloquium on Computational Complexity, technical reports, 5, 1998.


List of Figures

1.1 Construction of public random function {0, 1}^n → {0, 1}^ℓ

2.1 Binary entropy function

3.1 Example for an expander graph: K_{5,3}
3.2 Function C extracting p random bits from a k-source
3.3 Conductor cascading
3.4 Conductor concatenation
3.5 Construction of C^(i)(x, y)

4.1 Function family G_D

5.1 Construction of an extracting conductor with almost optimal entropy loss
5.2 Conductor Ext_MST
5.3 Detailed construction

6.1 Simple graph construction


Index

Symbols
ε-close, 7
t-wise ρ-biased sample space, 50

A
advantage, see distinguishing advantage
advice function, 54
advice string, 42

B
binary entropy function, see entropy
binomial coefficient, 9
bipartite graph, 10
  balanced, 10
  unbalanced, 14

C
Cauchy-Schwarz inequality, 7
Chernoff bound, 6, 30
composition theorems, 18
condenser, 39
conductor, 15, 39
  cascading, 18
  concatenation, 19, 21
  condensing, 16
  explicit, 15
  extracting, 16
  injective, 15
  lossless, 16
  reconstructive extracting, 53
  somewhere, 17
  strong, 15

D
degree, 11
design, 42
distinguisher, 6
distinguishing advantage, 6

E
entropy, 5
  binary entropy function, 5
entropy loss, 16
expander graph, 13, 14
  explicit, 14
  injective, 14
  unbalanced, 13
explicit, 14
extractor, 39

F
flat distribution, 6

G
generalized conductor, see conductor
generalized expander graph, see expander graph
graph, 10

I
injective, 14
input-restricting function family, 3, 37

K
k-source, 7

L
left-degree, 11
leftover hash lemma, 50

M
min-entropy, 5
multi graph, 10

N
next-bit predictor, 8
Nisan-Wigderson generator, 41

R
reconstructive extracting conductor, see conductor
right-degree, 11

S
seed, 15
somewhere conductor, see conductor
source, see k-source
statistical difference, 6
support, 6, 14

T
Trevisan's construction, 41

W
weak design, 42

A Task Description

Concrete Constructions of Unbalanced Bipartite Expander Graphs and Generalized Conductors

Master's Project for Rose-Line Werner

3.3.2008 – 3.9.2008

A.1 Introduction

A (K, γ)-bipartite expander graph G = (V1, V2, E) is a bipartite graph (that is, every edge {u, v} ∈ E connects a vertex u ∈ V1 with a vertex v ∈ V2) with the following additional property: for all subsets X ⊆ V1 such that |X| ≤ K, the cardinality of the set Γ(X) ⊆ V2 of all neighbors of the vertices in X satisfies |Γ(X)| ≥ γ|X|. Such a graph is called unbalanced when |V1| > |V2|. In general, one is interested in the case where both V1 and V2 are large, i.e. exponential in some given security parameter n. In order for such a graph to be useful, there must exist an algorithm which, given a vertex v ∈ V1 and an index i, can efficiently (that is, in time polynomial in n) compute the i-th neighbor of v.
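As a sanity check on this definition (not part of the original task description), the following sketch verifies the (K, γ) expansion property of a toy bipartite graph by enumerating all left subsets of size at most K; the helper name is_expander and the example graph are hypothetical. For the exponentially large graphs of interest, no such brute-force check is feasible.

from itertools import combinations

def is_expander(neighbors: dict[int, set[int]], K: int, gamma: float) -> bool:
    """Brute-force check of the (K, gamma) expansion property:
    every subset X of V1 with |X| <= K must satisfy
    |Gamma(X)| >= gamma * |X|. `neighbors` maps each left vertex
    to its set of right neighbors. Exponential in |V1|."""
    left = list(neighbors)
    for size in range(1, K + 1):
        for X in combinations(left, size):
            gamma_X = set().union(*(neighbors[v] for v in X))
            if len(gamma_X) < gamma * size:
                return False
    return True

# Toy graph: V1 = {0,...,3}, V2 = {0,1,2}, left-degree 2.
G = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}, 3: {0, 1}}
print(is_expander(G, K=2, gamma=1.5))
# Prints False: the pair {0, 3} has only the two joint neighbors {0, 1}.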

Interestingly, it turns out that unbalanced bipartite expander graphs play an important role in cryptography. In particular, they have been used in the following two contexts.

Bounded-Storage Model. In the bounded storage model [Mau92, CM97] security is proved under the sole assumption that an adversary has bounded storage capabilities, but is otherwise computationally unbounded. In this model, a long random string R is initially broadcast to all parties (both the honest parties and the adversary), but the adversary's memory resources do not allow him to store all of R. The honest parties interact in order to perform some cryptographic task (e.g. generating a secret key) depending on R, and security has to be guaranteed as long as the adversary can only store a portion of it, even if he has much more memory than the honest parties.

Moran, Shaltiel, and Ta-Shma [MST04] have considered the problem of non-interactive timestamping of documents in the bounded-storage model: the idea is that honest users can timestamp documents, but the adversary is not able to produce more timestamps than his memory would allow him to produce had he behaved honestly. Their solution relies on the use of unbalanced bipartite expander graphs in order to select which random bits of R are used to timestamp a message.
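To illustrate only the selection idea, in a heavily simplified form that is not the actual [MST04] scheme: think of a document as a left vertex whose neighbors index bit positions of R, so that the timestamp is the substring of R at those positions. The hash-based neighbor function below is a stand-in for a real explicit expander, and all names are made up for the example.

import hashlib

def neighbor_positions(document: bytes, degree: int, r_len: int) -> list[int]:
    """Illustrative neighbor function: derive `degree` positions in R
    from the document. A real scheme uses an explicit expander graph."""
    positions = []
    for i in range(degree):
        digest = hashlib.sha256(document + i.to_bytes(4, "big")).digest()
        positions.append(int.from_bytes(digest[:8], "big") % r_len)
    return positions

def timestamp(document: bytes, R: str, degree: int = 16) -> str:
    """Timestamp = the bits of the broadcast string R selected by the
    document's neighbors; an honest party stores only these bits."""
    return "".join(R[p] for p in neighbor_positions(document, degree, len(R)))

# Toy run with a short stand-in for the long broadcast string R.
R = "0110100110010110" * 4    # 64 bits
print(timestamp(b"my document", R))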

Extension of Public Random Primitives. A well-known problem in cryptography is the problem of extending random resources in a secure way. For example, one would like to obtain a longer secret key from a given shorter secret key. While this problem is well-understood in the setting where randomness is private, it is less clear when honest users would like to extend randomness which is public, i.e. also accessible by the adversary. An example of such a primitive is a public random function that maps m-bit strings to n-bit strings, i.e. a system which takes an m-bit string as input (both from the honest parties and from the adversary) and for each such input consistently returns a uniformly distributed n-bit string.

In recent work [MT07], we have presented a solution for extending a public random function mapping n-bit strings to n-bit strings to a public random function mapping arbitrarily long bit strings to n-bit strings, and which guarantees nearly optimal security. The approach we take is a novel one, and it relies on the use of unbalanced expander graphs.
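For concreteness, the following toy model captures the interface of a public random function as described above (consistent, uniformly distributed outputs, queryable by everyone); lazy sampling stands in for a truly random table. The class name and parameters are invented for this example, and the sketch shows only the primitive itself, not the extension construction of [MT07].

import os

class PublicRandomFunction:
    """Toy model of a public random function {0,1}^m -> {0,1}^n:
    every party (honest or adversarial) may query it, and repeated
    queries on the same input return the same uniform n-bit answer.
    Lazy sampling replaces an exponentially large random table."""

    def __init__(self, out_bytes: int = 16):
        self.out_bytes = out_bytes
        self.table: dict[bytes, bytes] = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_bytes)  # fresh uniform output
        return self.table[x]

F = PublicRandomFunction()
assert F.query(b"hello") == F.query(b"hello")   # consistency on repeated queries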

A.2 Description

Despite the potentially wide range of possible applications of unbalanced bipartite expander graphs to cryptography (and to computer science in general), their study has been rather limited so far. Such graphs have only been constructed from much stronger objects (for example, randomness extractors [NT99]), and the existing constructions achieve parameters which are even too strong for many applications, at the cost of an inherent inefficiency.

The goal of this thesis is to provide an overview of unbalanced bipartite expander graphs, their properties, and their applications to information-theoretic cryptography. In particular, in the first part of the work, the student is supposed to gain an overview of existing results on unbalanced bipartite expander graphs and to present this in a survey providing a unifying view on the topic.

Furthermore, it would be interesting to study possible graph constructions and their expanding properties. In fact, there are some natural candidates for sufficiently good constructions using only basic mathematics, but it is not obvious whether their expanding properties are good enough to obtain parameters suitable for cryptographic applications.

Finally, in light of the results of [MST04] and [MT07], it is interesting to understand the role of the expansion property of such graphs in these results, and to apply similar ideas to other tasks in information-theoretic cryptography. It may also be possible that slightly weaker tools suffice to obtain cryptographic applications.

A.3 Tasks

Possible tasks for this project are the following ones.

1. Study the relevant literature on expander graphs (and in particular on unbalanced bipartite expander graphs) and formulate a clear survey of existing results.

2. By using known composition theorems, study compositions of known constructions and the parameters that can be achieved, including the parameters K and γ, as well as good estimates of the sizes of the resulting graphs.

3. Study direct constructions (based for example on simple algebraic or number-theoretic ideas) of unbalanced bipartite graphs as well as their expanding properties. Are they good expanders? And if not, what is the reason?


4. Work on [MST04, MT07] and try to exploit the acquired insights to understand the essence of these results. In particular, which properties and which parameters are the relevant ones? Would it be possible to weaken the concept of unbalanced bipartite expander graphs and still achieve the same results?

5. Possibly, try to find new cryptographic tasks where the use of unbalanced bipartite expander graphs could lead to a novel solution.

By the end of the project the work shall be presented in a talk. Hints about the documentation can be found in the enclosed guidelines.

Requirements for this work are interest in theoretical research as well as good mathematical thinking skills. Only basic knowledge in cryptography is assumed (as in the basic Cryptography lecture).

A.4 Grading of the Thesis

The master's project encompasses independent scientific research, writing a Master's thesis, and giving a presentation. The evaluation of the thesis takes into account the quality of the results (understanding of the subject, contributed ideas, correctness) and the quality of the documentation (thesis and presentation). More instructions for the documentation and information about grading criteria can be found in the enclosed leaflets.


