Reliable Deniable Communication: Hiding Messages in Noise
Pak Hou (Howard) Che, Mayank Bakshi, Mahdi Jafari Siavoshani, Sidharth Jaggi
The Chinese University of Hong Kong
The Institute of Network Coding
[Figure: Alice transmits to Bob over a noisy channel (reliability), while Willie (the Warden) observes and must not be able to detect whether a transmission occurred at all (deniability).]
[Figure: Alice's encoder maps the message M and the transmission status T to a codeword x⃗, one of N = 2^{Θ(√n)} possible messages. Bob observes y⃗_b through BSC(p_b) and decodes M̂ = Dec(y⃗_b) along with the transmission status. Willie observes the same transmission through an independent channel BSC(p_w) and runs his (best) estimator T̂ = Dec(y⃗_w) of whether Alice transmitted.]
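The channel model above can be sketched in a few lines. This is a minimal simulation, assuming independent crossover probabilities p_b for Bob and p_w for Willie; the function name and parameter values are illustrative, not from the paper:

```python
import random

def bsc(x, p, rng):
    """Pass a bit-vector through a binary symmetric channel BSC(p):
    each bit flips independently with probability p."""
    return [b ^ (rng.random() < p) for b in x]

rng = random.Random(0)
n = 20
x = [0] * n             # T = 0: Alice stays silent (the all-zero input)
y_b = bsc(x, 0.1, rng)  # Bob's observation through BSC(p_b = 0.1)
y_w = bsc(x, 0.4, rng)  # Willie's observation through BSC(p_w = 0.4)
```

Since p_b < p_w, Bob's copy of the transmission is statistically cleaner than Willie's, which is the asymmetry the scheme exploits.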
Bash, Goeckel & Towsley [1]: shared secret
• O(√n · log n) secret bits, AWGN channels
• But capacity only O(√n) bits!

This work: no shared secret
• BSC(p_b) to Bob, BSC(p_w) to Willie, with p_b < p_w

[1] B. A. Bash, D. Goeckel and D. Towsley, "Square root law for communication with low probability of detection on AWGN channels," in Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2012, pp. 448–452.
[Figure: Motivating scenario. Aerial Alice, with a directional antenna, transmits to Base-station Bob while Wicked Willie(s) try to detect her transmission.]
Steganography: Other work
Other work: "Common" model (shared secret key)
• Capacity O(n) message bits; information-theoretically tight characterization (Gel'fand-Pinsker / dirty-paper coding) [2]
• O(n·log(n)) key bits (not optimized)
• Stegotext = Enc(covertext, message, key); decoder recovers (message, covertext)
• No noise; d(stegotext, covertext) "small"

[2] Y. Wang and P. Moulin, "Perfectly secure steganography: Capacity, error exponents, and code constructions," IEEE Transactions on Information Theory, special issue on Information-Theoretic Security, June 2008.
Other work: Square-root "law" ("empirical")
• "Steganographic capacity is a loosely-defined concept, indicating the size of payload which may securely be embedded in a cover object using a particular embedding method. What constitutes 'secure' embedding is a matter for debate, but we will argue that capacity should grow only as the square root of the cover size under a wide range of definitions of security." [3]
• "Thanks to the Central Limit Theorem, the more covertext we give the warden, the better he will be able to estimate its statistics, and so the smaller the rate at which [the steganographer] will be able to tweak bits safely." [4]
• "[T]he reference to the Central Limit Theorem... suggests that a square root relationship should be considered." [3]

[3] A. Ker, T. Pevný, J. Kodovský, and J. Fridrich, "The square root law of steganographic capacity," in Proceedings of the 10th ACM Workshop on Multimedia and Security, 2008, pp. 107–116.
[4] R. Anderson, "Stretching the limits of steganography," in Information Hiding, 1996, pp. 39–48.
Hypothesis testing
Willie estimates Alice's transmission status T from his observation:
α = Pr(T̂ = 1 | T = 0)   (false alarm)
β = Pr(T̂ = 0 | T = 1)   (missed detection)
Deniability requires that Willie's best estimator do little better than blind guessing: α + β ≥ 1 − ε.
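The trade-off between α and β can be seen in a small Monte Carlo sketch. It assumes, purely for illustration, that Willie uses a threshold test on the Hamming weight of his observation; all function names and parameter values here are made up:

```python
import random

def weight_test(y, threshold):
    """Willie's (assumed) detector: declare T̂ = 1 iff wt_H(y) > threshold."""
    return sum(y) > threshold

def estimate_alpha_beta(n, p_w, codeword_weight, threshold, trials=2000):
    """Monte Carlo estimates of the false-alarm rate α and miss rate β."""
    rng = random.Random(1)
    false_alarms = missed = 0
    for _ in range(trials):
        # T = 0: Willie sees pure BSC(p_w) noise.
        y0 = [int(rng.random() < p_w) for _ in range(n)]
        if weight_test(y0, threshold):
            false_alarms += 1
        # T = 1: a fixed low-weight codeword passes through BSC(p_w).
        x = [1] * codeword_weight + [0] * (n - codeword_weight)
        y1 = [b ^ (rng.random() < p_w) for b in x]
        if not weight_test(y1, threshold):
            missed += 1
    return false_alarms / trials, missed / trials
```

With n = 1000 and p_w = 0.1, a codeword of weight 100 shifts Willie's weight statistic by several standard deviations, so α + β ≈ 0 (easily detected), while a codeword of weight 5 hides in the noise fluctuations and leaves α + β ≈ 1.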
Intuition
When T = 0, Willie sees pure noise: y⃗_w = z⃗_w, so wt_H(y⃗_w) ~ Binomial(n, p_w).

Theorem 1 (Wt(c.w.)): high deniability ⇒ low-weight codewords
If too many codewords have weight "much" greater than c√n, then the system is "not very" deniable.
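The Binomial concentration behind this intuition is elementary: the noise weight fluctuates by Θ(√n), so only weight perturbations of order √n can hide inside those fluctuations. A two-line sketch:

```python
import math

def weight_stats(n, p_w):
    """Mean and standard deviation of wt_H(y⃗_w) under T = 0,
    where the weight is Binomial(n, p_w)."""
    return p_w * n, math.sqrt(n * p_w * (1 - p_w))

mean, sd = weight_stats(10_000, 0.1)  # mean 1000.0, sd 30.0
```

A codeword of weight w shifts the mean weight by about w(1 − 2p_w), so any w that grows faster than √n eventually stands out against the ±sd noise band.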
Theorems 2 & 3 (Converse & achievability for reliable & deniable communication)
Phase diagram over (p_b, p_w) ∈ [0, 1/2] × [0, 1/2]:
• p_b > p_w: N = 0 (symmetrizability)
• p_w = 1/2: N ≈ 2^{(1−H(p_b))n}, the full capacity of BSC(p_b)
• p_b = 0: N = 2^{O(√n·log n)}; note (n choose √n) = 2^{O(√n·log n)}
• 0 < p_b < p_w < 1/2: N = 2^{Θ(√n)}
Converse: "standard" information-theoretic inequalities + wt("most codewords") < √n (Theorem 1)
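The phase diagram can be summarized as a toy function. This is a sketch of the theorems' orders of growth only, not the exact constants; H is the binary entropy function:

```python
import math

def H(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def log2_N(n, p_b, p_w):
    """Order-of-growth of log2(N) in the regimes of the phase diagram
    (illustrative sketch: constants and log factors are suppressed)."""
    if p_b > p_w:
        return 0.0               # symmetrizable: no deniable communication
    if p_w == 0.5:
        return (1 - H(p_b)) * n  # Willie learns nothing: full BSC(p_b) capacity
    return math.sqrt(n)          # 0 < p_b < p_w < 1/2: Θ(√n) bits
```

For example, log2_N(n, 0.1, 0.3) grows like √n, while log2_N(n, 0.1, 0.5) grows linearly in n.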
Main theorem:
[Figure: log(# codewords) versus wt_H(y⃗_w), from 0 to n. When x⃗ = 0⃗, the weight of y⃗_w concentrates at p_w·n ± O(√n) with peak probability O(1/√n), and there are about 2^{nH(p_w)} typical noise sequences (for comparison, log(n choose n/2) ≈ n). When Alice transmits a codeword of weight c√n and relative weight ρ, the weight (over M and Z_w) concentrates at (p_w ∗ ρ)n ± O(√n), with about 2^{nH(p_w ∗ ρ)} typical sequences.]
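The binary convolution p_w ∗ ρ in the figure is the effective crossover probability when a Bernoulli(ρ) codeword is added (mod 2) to BSC(p_w) noise; a one-liner for reference:

```python
def conv(p, q):
    """Binary convolution p * q = p(1-q) + q(1-p): the probability that
    exactly one of two independent Bernoulli(p), Bernoulli(q) bits is 1."""
    return p * (1 - q) + q * (1 - p)

# A weight-c*sqrt(n) codeword has relative weight rho = c/sqrt(n), which
# vanishes as n grows, so the observed mean weight (p_w * rho) * n stays
# within O(sqrt(n)) of the pure-noise mean p_w * n.
```

Note conv(p, 0) = p (no codeword, pure noise) and conv(p, 1/2) = 1/2 (a uniformly random input erases all information), consistent with the phase-diagram boundary cases.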
Theorem 3 – Reliability proof sketch
Noise magnitude >> codeword weight!
Random code: 2^{O(√n)} codewords, each of weight O(√n), e.g.

1000001000000000100100000010000000100
0001000000100000010000000010000000001
0010000100000001010010000000100010011
0000100000010000000000010000000010000
...
Theorem 3 – Reliability proof sketch (contd.)
• E(intersection of the supports of 2 codewords) = O(1)
• Pr(d_min(x) < c√n) < 2^{−O(√n)}
• "Most" codewords are "well-isolated"
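The O(1) expected overlap is easy to check empirically. The sketch below samples random weight-√n supports as a stand-in for the random code in the proof; the parameters and helper name are illustrative:

```python
import math
import random

def random_low_weight_code(n, num_codewords, weight, rng):
    """Sample codeword supports: `weight` ones in uniformly random
    positions out of n (a sketch of the proof's random code)."""
    return [frozenset(rng.sample(range(n), weight)) for _ in range(num_codewords)]

rng = random.Random(2)
n = 10_000
w = int(math.sqrt(n))                 # codeword weight O(√n) = 100
code = random_low_weight_code(n, 50, w, rng)

# Average support overlap over all pairs; expectation is w*w/n = O(1).
overlaps = [len(a & b) for i, a in enumerate(code) for b in code[i + 1:]]
avg = sum(overlaps) / len(overlaps)   # ≈ w²/n = 1 here
```

Since two random weight-w supports intersect in w²/n positions on average, weight O(√n) keeps the overlap O(1), which is what makes most codewords well-isolated and minimum-distance decoding reliable.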
Theorem 3 – d_min decoding
• Pr(x decoded to x′) < 2^{−O(√n)}
[Figure: codewords x and x′ at distance d_min(x) ≥ c√n; the noise must flip an extra O(√n) specific bits to move y⃗_b closer to x′ than to x.]
Theorem 3 – Deniability proof sketch
• Recall: want to show that Willie's observation distributions without and with a transmission, P_0 and P_1, are close in variational distance.

Theorem 4 – unexpected detour
[Figure: log(# codewords) versus wt_H(y⃗_w), from 0 to n.]
• Too few codewords ⇒ not deniable: if the code is too small, P_1 concentrates on too few of the roughly 2^{nH(p_w ∗ ρ)} typical sequences around weight (p_w ∗ ρ)n ± O(√n), and Willie can distinguish P_1 from P_0.
Theorem 3 – Deniability proof sketch (contd.)
[Figure: under T = 0 the weight of y⃗_w concentrates at p_w·n ± O(√n), with about 2^{nH(p_w)} typical noise sequences.]
• Compare P_1 against its expectation over the random code, E_C(P_1).
• Group codewords by "type" (weight classes T_1, T_2, T_3, ...).
• With high probability over the random code, each type class is large enough that P_1 is close to E_C(P_1), and E_C(P_1) is close to P_0, hence deniability.
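The deniability target, small variational distance between P_0 and P_1, can be sanity-checked numerically if one lets Willie observe only the Hamming weight of y⃗_w (an illustrative simplification of the full argument; names and parameters below are made up):

```python
from math import comb

def binom_pmf(n, p):
    """Pmf of Binomial(n, p) on {0, ..., n}."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def tv_distance(P, Q):
    """Total variation (variational) distance between two pmfs."""
    return 0.5 * sum(abs(a - b) for a, b in zip(P, Q))

def weight_pmf(n, p, w):
    """Pmf of wt_H(y⃗_w) when a weight-w codeword passes through BSC(p):
    surviving ones ~ Binomial(w, 1-p), new ones ~ Binomial(n-w, p)."""
    A, B = binom_pmf(w, 1 - p), binom_pmf(n - w, p)
    out = [0.0] * (n + 1)
    for i, a in enumerate(A):       # convolve the two pmfs
        for j, b in enumerate(B):
            out[i + j] += a * b
    return out

n, p_w = 400, 0.1
P0 = binom_pmf(n, p_w)                               # T = 0: pure noise
t_small = tv_distance(P0, weight_pmf(n, p_w, 5))     # low-weight codeword
t_large = tv_distance(P0, weight_pmf(n, p_w, 100))   # heavy codeword
```

Here t_small stays well below 1 while t_large is essentially 1: only codewords of weight O(√n) can keep Willie's two hypotheses statistically close, matching Theorem 1 and the √n throughput.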
Summary
• Deniable communication over BSCs without a shared secret: whenever 0 < p_b < p_w, Alice can reliably and deniably transmit N = 2^{Θ(√n)} messages in n channel uses.