Research Article
Games Based Study of Nonblind Confrontation
Yixian Yang,1,2,3 Xinxin Niu,1,2,3 and Haipeng Peng2,3
1Guizhou Provincial Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
2Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
3National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing 100876, China
Correspondence should be addressed to Haipeng Peng; [email protected]
Received 4 January 2017; Accepted 20 March 2017; Published 19 April 2017
Academic Editor: Liu Yuhong
Copyright © 2017 Yixian Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Security confrontation is the second cornerstone of the General Theory of Security. It can be divided into two categories: blind confrontation and nonblind confrontation between attackers and defenders. In this paper, we study the nonblind confrontation through some well-known games. We show the probability of winning and losing between the attackers and defenders from the perspective of channel capacity. We establish channel models and find that the attacker or the defender winning one time is equivalent to one bit being transmitted successfully over the channel. This paper also gives unified solutions for all nonblind confrontations.
1. Introduction
The core of all security issues represented by cyberspace security [1], economic security, and territorial security is confrontation. Network confrontation [2], especially in the big data era [3], has been widely studied in the field of cyberspace security. There are two strategies in network confrontation: blind confrontation and nonblind confrontation. The so-called "blind confrontation" is the confrontation in which both the attacker and defender are only aware of their self-assessment results and know nothing about the enemy's assessment results after each round of confrontation. Superpower rivalry, battlefield fighting, network attack and defense, espionage wars, and other brutal confrontations usually belong to the blind confrontation. The so-called "nonblind confrontation" is the confrontation in which both the attacker and defender know the consistent result after each round. The games studied in this paper all belong to the nonblind confrontation.
"Security meridian" is the first cornerstone of the General Theory of Security, which has been well established [4, 5]. Security confrontation is the second cornerstone of the General Theory of Security, where we have studied the blind confrontation and given the precise limitation of hacker attack ability (honker defense ability) [4, 5]. Compared with the blind confrontation, the winning and losing rules of nonblind confrontation are more complex and not easy to study. In this paper, based on Shannon Information Theory [6], we study several well-known games of nonblind confrontation from a novel point of view: "rock-paper-scissors" [7], "coin tossing" [8], "palm or back," "draw boxing," and "finger guessing" [9]. The famous game "rock-paper-scissors" has been played for thousands of years. However, there are few related analyses of it. An interdisciplinary team of Zhejiang University, the Chinese Academy of Sciences, and other institutions, in cooperation with more than three hundred volunteers, spent four years playing "rock-paper-scissors" and giving a corresponding analysis of the game. The findings were listed in "Best of 2014: MIT Technology Review."
We obtain some significant results. The contributions of this paper are as follows:
(i) Channel models of all the above games are established.
(ii) The conclusion that the attacker or the defender winning one time is equivalent to one bit transmitted successfully in the channel is found.
Hindawi, Mathematical Problems in Engineering, Volume 2017, Article ID 8679079, 11 pages, https://doi.org/10.1155/2017/8679079
(iii) Unified solutions for all the nonblind confrontations are given.
The rest of the paper is organized as follows. The model of rock-paper-scissors is introduced in Section 2, models of coin tossing and palm or back are introduced in Section 3, models of finger guessing and draw boxing are introduced in Section 4, the unified model of linear separable nonblind confrontation is introduced in Section 5, and Section 6 concludes this paper.
2. Model of Rock-Paper-Scissors
2.1. Channel Modeling. Suppose A and B play "rock-paper-scissors," and their states are represented by the random variables X and Y, respectively: X = 0, X = 1, and X = 2 denote the "scissors," "rock," and "paper" of A, respectively; Y = 0, Y = 1, and Y = 2 denote the "scissors," "rock," and "paper" of B, respectively.
The Law of Large Numbers indicates that the frequency tends to the probability in the limit; thus the choice habits of A and B can be represented by the probability distributions of the random variables X and Y:

Pr(X = 0) = a means the probability of A for "scissors";
Pr(X = 1) = b means the probability of A for "rock";
Pr(X = 2) = 1 − a − b means the probability of A for "paper," where 0 < a, b and a + b < 1;
Pr(Y = 0) = s means the probability of B for "scissors";
Pr(Y = 1) = t means the probability of B for "rock";
Pr(Y = 2) = 1 − s − t means the probability of B for "paper," where 0 < s, t and s + t < 1.

Similarly, the joint probability distribution of the two-dimensional random variable (X, Y) can be listed as follows:

Pr(X = 0, Y = 0) = c means the probability of A for "scissors" and B for "scissors";
Pr(X = 0, Y = 1) = d means the probability of A for "scissors" and B for "rock";
Pr(X = 0, Y = 2) = a − c − d means the probability of A for "scissors" and B for "paper," where 0 < c, d and c + d < a;
Pr(X = 1, Y = 0) = e means the probability of A for "rock" and B for "scissors";
Pr(X = 1, Y = 1) = f means the probability of A for "rock" and B for "rock";
Pr(X = 1, Y = 2) = b − e − f means the probability of A for "rock" and B for "paper," where 0 < e, f and e + f < b;
Pr(X = 2, Y = 0) = g means the probability of A for "paper" and B for "scissors";
Pr(X = 2, Y = 1) = h means the probability of A for "paper" and B for "rock";
Pr(X = 2, Y = 2) = 1 − a − b − g − h means the probability of A for "paper" and B for "paper," where 0 < g, h and g + h < 1 − a − b.

Construct another random variable Z = [2(1 + X + Y)] mod 3 from X and Y. Because any two random variables can form a communication channel, we get a communication channel (X; Z) with X as the input and Z as the output, which is called "Channel A," as shown in Figure 1.
Figure 1: Block diagram of the channel model ("Channel A" takes X as input and "Channel B" takes Y as input; both output Z).
If A wins, then there are only three cases.

Case 1. "A chooses scissors, B chooses paper"; namely, "X = 0, Y = 2." This is equivalent to "X = 0, Z = 0"; namely, the input of "Channel A" is equal to the output.

Case 2. "A chooses rock, B chooses scissors"; namely, "X = 1, Y = 0." This is equivalent to "X = 1, Z = 1"; namely, the input of "Channel A" is equal to the output.

Case 3. "A chooses paper, B chooses rock"; namely, "X = 2, Y = 1." This is equivalent to "X = 2, Z = 2"; namely, the input of "Channel A" is equal to the output.

Conversely, if "Channel A" sends one bit from the sender to the receiver successfully, then there are only three possible cases.

Case 1. The input and the output equal 0; namely, "X = 0, Z = 0." This is equivalent to "X = 0, Y = 2"; namely, "A chooses scissors, B chooses paper"; A wins.

Case 2. The input and the output equal 1; namely, "X = 1, Z = 1." This is equivalent to "X = 1, Y = 0"; namely, "A chooses rock, B chooses scissors"; A wins.

Case 3. The input and the output equal 2; namely, "X = 2, Z = 2." This is equivalent to "X = 2, Y = 1"; namely, "A chooses paper, B chooses rock"; A wins.
Based on the above six cases, we get an important lemma.

Lemma 1. A wins once if and only if "Channel A" sends one bit from the sender to the receiver successfully.
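Since X, Y, and Z each take only three values, the equivalence behind Lemma 1 can be checked exhaustively. A minimal sketch in Python, using the encoding 0 = scissors, 1 = rock, 2 = paper from Section 2.1:

```python
# Exhaustive check of Lemma 1: A wins exactly when the output of "Channel A,"
# Z = [2(1 + X + Y)] mod 3, equals its input X.
def a_wins(x, y):
    # Scissors (0) beats paper (2), rock (1) beats scissors, paper beats rock,
    # which is exactly the condition (y - x) mod 3 == 2 used later in the text.
    return (y - x) % 3 == 2

for x in range(3):
    for y in range(3):
        z = (2 * (1 + x + y)) % 3
        assert a_wins(x, y) == (z == x)
print("Lemma 1 holds in all 9 cases")
```

The same check with the roles swapped (B wins exactly when Z equals Y) confirms Lemma 2.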
Now we can construct another channel (Y; Z) by using the random variables Y and Z, with Y as the input and Z as the output, which is called "Channel B." Similarly, we can get the following lemma.

Lemma 2. B wins once if and only if "Channel B" sends one bit from the sender to the receiver successfully.
Thus, the winning and losing problem of "rock-paper-scissors" played by A and B converts to the problem of whether information bits can be transmitted successfully over "Channel A" and "Channel B." According to Shannon's second theorem [6], we know that channel capacity is equal to the maximal number of bits that the channel can transmit
Mathematical Problems in Engineering 3
successfully. Therefore, the problem is transformed into a channel capacity problem. More precisely, we have the following theorem.
Theorem 3 ("rock-paper-scissors" theorem). If one does not consider the case that both A and B have the same state, then

(1) for A, there must be some skill (corresponding to the Shannon coding) such that, for any k/n ≤ C, A wins k times in n rounds of the game; conversely, if A wins u times in n rounds of the game, then u ≤ nC, where C is the capacity of "Channel A";

(2) for B, there must be some skill (corresponding to the Shannon coding) such that, for any k/n ≤ D, B wins k times in n rounds of the game; conversely, if B wins u times in n rounds of the game, then u ≤ nD, where D is the capacity of "Channel B";

(3) statistically, if C < D, B will win; if C > D, A will win; and if C = D, A and B are evenly matched.
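Statement (3) concerns long-run frequencies: under a fixed joint habit distribution, A's win rate converges to the probability mass on the three winning cells. The simulation below illustrates this with a made-up joint distribution t (an assumption for illustration, not data from the paper):

```python
import random

# Hypothetical joint habits t[x][y] = Pr(X = x, Y = y); rows index A's move,
# columns index B's move, and the entries sum to 1.
t = [[0.10, 0.05, 0.15],
     [0.10, 0.10, 0.10],
     [0.15, 0.15, 0.10]]
cells = [(x, y) for x in range(3) for y in range(3)]
weights = [t[x][y] for x, y in cells]

# Probability that A wins a single round: mass on cells with (y - x) mod 3 == 2.
p_win = sum(t[x][y] for x, y in cells if (y - x) % 3 == 2)

random.seed(1)
n = 200_000
wins = 0
for _ in range(n):
    x, y = random.choices(cells, weights=weights)[0]
    if (y - x) % 3 == 2:
        wins += 1
print(p_win, wins / n)  # the empirical frequency approaches p_win = 0.40
```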
Here, we calculate the channel capacities of "Channel A" and "Channel B" as follows.
For the channel (X; Z) of A, let p denote its 3 × 3 transition probability matrix:

p(0, 0) = Pr(Z = 0 | X = 0) = (a − c − d)/a,
p(0, 1) = Pr(Z = 1 | X = 0) = d/a,
p(0, 2) = Pr(Z = 2 | X = 0) = c/a,
p(1, 0) = Pr(Z = 0 | X = 1) = f/b,
p(1, 1) = Pr(Z = 1 | X = 1) = e/b,
p(1, 2) = Pr(Z = 2 | X = 1) = (b − e − f)/b,
p(2, 0) = Pr(Z = 0 | X = 2) = g/(1 − a − b),
p(2, 1) = Pr(Z = 1 | X = 2) = (1 − a − b − g − h)/(1 − a − b),
p(2, 2) = Pr(Z = 2 | X = 2) = h/(1 − a − b). (1)
The channel transition probability matrix is used to calculate the channel capacity: solve the equation system p β = h, where β = (β0, β1, β2)^T is a column vector and

h = (Σ_{j=0..2} p(0, j) log2 p(0, j), Σ_{j=0..2} p(1, j) log2 p(1, j), Σ_{j=0..2} p(2, j) log2 p(2, j))^T. (2)
Consider the transition probability matrix p.

(1) If p is invertible, there is a unique solution, β = p^{−1} h; then C = log2(Σ_{j=0..2} 2^{β_j}). According to the formulas p_Z(j) = 2^{β_j − C} and p_Z(j) = Σ_{i=0..2} p_X(i) p(i, j), i, j = 0, 1, 2, where p_Z(j) is the probability distribution of Z, the probability distribution of X is obtained. If p_X(i) ≥ 0, i = 0, 1, 2, the channel capacity is confirmed as C.

(2) If p is singular, the equation system has multiple solutions. Repeating the above steps, we get multiple values of C and the corresponding p_X(i). If some p_X(i) fails to satisfy p_X(i) ≥ 0, i = 0, 1, 2, we delete the corresponding C.
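The invertible case of this procedure (solve p β = h, take C = log2 Σ_j 2^{β_j}, then recover the input distribution) can be sketched directly with NumPy. The 3 × 3 matrix P below is a made-up invertible transition matrix chosen for illustration, not one derived from the paper's parameters:

```python
import numpy as np

def dmc_capacity(P):
    """Capacity of a discrete memoryless channel by the linear-system method,
    assuming the square transition matrix P (P[i][j] = Pr(output j | input i))
    is invertible and the resulting input distribution is nonnegative."""
    P = np.asarray(P, dtype=float)
    logP = np.log2(P, out=np.zeros_like(P), where=(P > 0))
    h = (P * logP).sum(axis=1)            # h_i = sum_j P(i,j) log2 P(i,j)
    beta = np.linalg.solve(P, h)          # unique solution beta = P^{-1} h
    C = np.log2(np.sum(2.0 ** beta))      # C = log2(sum_j 2^{beta_j})
    p_out = 2.0 ** (beta - C)             # output distribution p_Z(j) = 2^{beta_j - C}
    p_in = np.linalg.solve(P.T, p_out)    # input distribution from p_Z = p_X P
    return C, p_in                        # C is valid only if all p_in >= 0

# A symmetric example channel: every row is a permutation of (0.7, 0.2, 0.1),
# so the capacity is log2(3) minus the row entropy and the input is uniform.
P = [[0.7, 0.2, 0.1],
     [0.1, 0.7, 0.2],
     [0.2, 0.1, 0.7]]
C, p_in = dmc_capacity(P)
print(C, p_in)
```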
For the channel (Y; Z) of B, let q denote its 3 × 3 transition probability matrix:

q(0, 0) = Pr(Z = 0 | Y = 0) = g/s,
q(0, 1) = Pr(Z = 1 | Y = 0) = e/s,
q(0, 2) = Pr(Z = 2 | Y = 0) = c/s,
q(1, 0) = Pr(Z = 0 | Y = 1) = f/t,
q(1, 1) = Pr(Z = 1 | Y = 1) = d/t,
q(1, 2) = Pr(Z = 2 | Y = 1) = h/t,
q(2, 0) = Pr(Z = 0 | Y = 2) = (a − c − d)/(1 − s − t),
q(2, 1) = Pr(Z = 1 | Y = 2) = (1 − a − b − g − h)/(1 − s − t),
q(2, 2) = Pr(Z = 2 | Y = 2) = (b − e − f)/(1 − s − t). (3)
The channel transition probability matrix q is used to calculate the capacity of "Channel B": solve the equation system q w = u, where w = (w0, w1, w2)^T is a column vector and

u = (Σ_{j=0..2} q(0, j) log2 q(0, j), Σ_{j=0..2} q(1, j) log2 q(1, j), Σ_{j=0..2} q(2, j) log2 q(2, j))^T. (4)

Consider the transition probability matrix q.

(1) If q is invertible, there is a unique solution, w = q^{−1} u; then D = log2(Σ_{j=0..2} 2^{w_j}). According to the formulas p_Z(j) = 2^{w_j − D} and p_Z(j) = Σ_{i=0..2} p_Y(i) q(i, j), i, j = 0, 1, 2, the probability distribution of Y is obtained. If p_Y(i) ≥ 0, i = 0, 1, 2, the channel capacity can be confirmed as D.
(2) If q is singular, the equation system has multiple solutions. Repeating the above steps, we get multiple values of D and the corresponding p_Y(i). If some p_Y(i) fails to satisfy p_Y(i) ≥ 0, i = 0, 1, 2, we delete the corresponding D.
In the above analysis, the problem of the "rock-paper-scissors" game has been solved completely, but the corresponding analysis is complex. Here, we give a more abstract and simple solution.
The Law of Large Numbers indicates that the frequency tends to the probability in the limit; thus the choice habits of A and B can be represented by the probability distributions of the random variables X and Y:

0 < Pr(X = x) = p_x < 1, x = 0, 1, 2, p_0 + p_1 + p_2 = 1;
0 < Pr(Y = y) = q_y < 1, y = 0, 1, 2, q_0 + q_1 + q_2 = 1;
0 < Pr(X = x, Y = y) = t_xy < 1, x, y = 0, 1, 2, Σ_{0≤x,y≤2} t_xy = 1;
p_x = Σ_{0≤y≤2} t_xy, x = 0, 1, 2;
q_y = Σ_{0≤x≤2} t_xy, y = 0, 1, 2. (5)
The winning and losing rule of the game is as follows: if X = x and Y = y, then the necessary and sufficient condition for A (i.e., X) to win is (y − x) mod 3 = 2.
Now construct another random variable F = (Y − 2) mod 3. Consider the channel (X; F) consisting of X and F, that is, a channel with X as the input and F as the output; then there are the following event equations.

If A (i.e., X) wins in a certain round, then (Y − X) mod 3 = 2, so F = (Y − 2) mod 3 = [(2 + X) − 2] mod 3 = X. That is, the input (X) of the channel (X; F) equals its output (F). In other words, one bit is successfully transmitted from the sender to the receiver over the channel.

Conversely, if one bit is successfully transmitted from the sender to the receiver over the channel, the input (X) of the channel (X; F) equals its output (F). That is, F = (Y − 2) mod 3 = X, which is exactly the necessary and sufficient condition for X to win.
Based on the above discussion, A (i.e., X) winning once means that the channel (X; F) sends one bit from the sender to the receiver successfully, and vice versa. Therefore, the channel (X; F) can play the role of "Channel A" above.
Similarly, if the random variable G = (X − 2) mod 3, then the channel (Y; G) can play the role of the above "Channel B."
And now the form of the channel capacity for the channels (X; F) and (Y; G) is simpler. We have C(X; F) = max_X [I(X, F)] = max_X [I(X, (Y − 2) mod 3)] = max_X [I(X, Y)] = max_X [Σ t_xy log(t_xy/(p_x q_y))]. The maximal value here is taken over all possible t_xy and p_x. So C(X; F) is actually a function of q_0, q_1, and q_2.
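Since F is a deterministic function of Y, I(X, F) = I(X, Y), so evaluating the bracketed sum only needs the joint matrix t_xy. A small sketch (the joint matrix is a made-up example, not data from the paper):

```python
import math

def mutual_information(t):
    """I(X, Y) = sum_xy t_xy log2(t_xy / (p_x q_y)) for a joint matrix t."""
    p = [sum(row) for row in t]                                          # p_x
    q = [sum(t[x][y] for x in range(len(t))) for y in range(len(t[0]))]  # q_y
    return sum(t[x][y] * math.log2(t[x][y] / (p[x] * q[y]))
               for x in range(len(t))
               for y in range(len(t[0])) if t[x][y] > 0)

t = [[0.10, 0.05, 0.15],
     [0.10, 0.10, 0.10],
     [0.15, 0.15, 0.10]]
print(mutual_information(t))  # 0 exactly when X and Y are independent
```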
Similarly, C(Y; G) = max_Y [I(Y, G)] = max_Y [I(Y, (X − 2) mod 3)] = max_Y [I(X, Y)] = max_Y [Σ t_xy log(t_xy/(p_x q_y))]. The maximal value here is taken over all possible t_xy and q_y. So C(Y; G) is actually a function of p_0, p_1, and p_2.

2.2. The Strategy of Winning. According to Theorem 3, once the probability of each action is determined, the outcome of the "rock-paper-scissors" game between the two parties is determined as well. In order to win with higher probability, a player must adjust his strategy.
2.2.1. The Game between Two Fools. The so-called "two fools" means that A and B are entrenched in their habits; that is, they choose their actions in accordance with their established habits no matter who won in the past. According to Theorem 3, statistically, if C < D, A will lose; if C > D, A will win; and if C = D, the two parties are evenly matched in strength.
2.2.2. The Game between a Fool and a Sage. Suppose A is a fool who insists on his inherent habit. After a sufficient number of rounds, B can estimate the probability distribution of the random variable X corresponding to A, together with the related conditional probability distributions, and thus obtain the channel capacity of A. By adjusting his own habits (i.e., the probability distribution of the random variable Y and the corresponding conditional probability distributions), B can enlarge his own channel capacity and make the rest of the game more beneficial to himself. Once the channel capacity of B is large enough, C(B) > C(A), B will win in the end.
2.2.3. The Game between Two Sages. If both A and B keep summarizing the habits of each other, adjusting their own habits, and enlarging their channel capacities, the two parties will eventually reach equal channel capacities; that is, the competition between them will tend to a balance, a dynamically stable state.
3. Models of "Coin Tossing" and "Palm or Back"
3.1. The Channel Capacity of the "Coin Tossing" Game. In the "coin tossing" game, the "banker" covers a coin under his hand on the table, and the "player" guesses the head or tail of the coin. The "player" wins if he guesses correctly.

Obviously, this game is a kind of "nonblind confrontation." We will use the method of channel capacity to analyze the winning and losing of the game.

Based on the Law of Large Numbers in probability theory, the frequency tends to the probability. Thus, according to the habits of the "banker" and the "player," that is, the statistical regularities of their actions in the past, we can give the probability distribution of their actions.
We use the random variable X to denote the state of the "banker": X = 0 (X = 1) means the coin is head (tail). So the habit of the "banker" can be described by the probability distribution of X; that is, Pr(X = 0) = p, Pr(X = 1) = 1 − p, where 0 ≤ p ≤ 1.

We use the random variable Y to denote the state of the "player": Y = 0 (Y = 1) means that he guesses head (tail).
So the habit of the "player" can be described by the probability distribution of Y; that is, Pr(Y = 0) = q, Pr(Y = 1) = 1 − q, where 0 ≤ q ≤ 1. Similarly, according to the past states of the "banker" and the "player," we have the joint probability distribution of the random variables (X, Y); namely,

Pr(X = 0, Y = 0) = a;
Pr(X = 0, Y = 1) = b;
Pr(X = 1, Y = 0) = c;
Pr(X = 1, Y = 1) = d, (6)

where 0 ≤ a, b, c, d, p, q ≤ 1 and

a + b + c + d = 1;
p = Pr(X = 0) = Pr(X = 0, Y = 0) + Pr(X = 0, Y = 1) = a + b;
q = Pr(Y = 0) = Pr(X = 0, Y = 0) + Pr(X = 1, Y = 0) = a + c. (7)
Taking X as the input and Y as the output, we obtain the channel (X; Y), which is called "Channel X" in this paper.

Because {X = 0, Y = 0} ∪ {X = 1, Y = 1} = {one bit is successfully transmitted from the sender X to the receiver Y over "Channel X"}, "X wins one time" is equivalent to transmitting one bit of information successfully over "Channel X."
Based on the channel coding theorem of Shannon's Information Theory, if the capacity of "Channel X" is C, then for any transmission rate k/n ≤ C, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "Channel X" can transmit k bits to the receiver by sending n bits without error, there must be k ≤ nC. In a word, we have the following theorem.

Theorem 4 (banker theorem). Suppose that the channel capacity of "Channel X" composed of the random variables (X; Y) is C. Then one has the following: (1) if X wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability close to 1 in k/C rounds; conversely, (2) if X wins k times in n rounds, there must be k ≤ nC.
According to Theorem 4, we only need to figure out the channel capacity C of "Channel X"; then the limitation of the times that "X wins" is determined. So we calculate the transition probability matrix A = [A(i, j)], i, j = 0, 1, of "Channel X":
A(0, 0) = Pr(Y = 0 | X = 0) = Pr(X = 0, Y = 0)/Pr(X = 0) = a/p;
A(0, 1) = Pr(Y = 1 | X = 0) = Pr(X = 0, Y = 1)/Pr(X = 0) = b/p = 1 − a/p;
A(1, 0) = Pr(Y = 0 | X = 1) = Pr(X = 1, Y = 0)/Pr(X = 1) = c/(1 − p) = (q − a)/(1 − p);
A(1, 1) = Pr(Y = 1 | X = 1) = Pr(X = 1, Y = 1)/Pr(X = 1) = d/(1 − p) = 1 − (q − a)/(1 − p). (8)
Thus, the mutual information I(X, Y) of X and Y equals

I(X, Y) = Σ_i Σ_j Pr(i, j) log(Pr(i, j)/[Pr(i) Pr(j)])
= a log[a/(pq)] + b log[b/(p(1 − q))] + c log[c/((1 − p)q)] + d log[d/((1 − p)(1 − q))]
= a log[a/(pq)] + (p − a) log[(p − a)/(p(1 − q))] + (q − a) log[(q − a)/((1 − p)q)] + (1 + a − p − q) log[(1 + a − p − q)/((1 − p)(1 − q))]. (9)
Thus, the channel capacity C of "Channel X" is equal to max[I(X, Y)], where the maximal value is taken over all possible binary random variables X; in a word, C = max[I(X, Y)], 0 < p, a < 1 (where I(X, Y) is the mutual information above). Thus, the channel capacity C of "Channel X" is a function of q, which is denoted as C(q).
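For a fixed conditional law A(i, j) = Pr(Y = j | X = i), the maximization over the binary input distribution is one-dimensional and concave, so even a plain grid search approximates the capacity well. A sketch, where the transition matrix is an assumed example (a binary symmetric channel with crossover probability 0.1), not one taken from the paper:

```python
import math

def mutual_info_binary(p, A):
    """I(X, Y) for the input distribution (p, 1 - p) through a 2x2
    transition matrix A with A[i][j] = Pr(Y = j | X = i)."""
    px = [p, 1 - p]
    qy = [px[0] * A[0][j] + px[1] * A[1][j] for j in range(2)]
    return sum(px[i] * A[i][j] * math.log2(A[i][j] / qy[j])
               for i in range(2) for j in range(2)
               if A[i][j] > 0 and qy[j] > 0)

def capacity_binary(A, steps=10_000):
    # Grid search over p in (0, 1); fine enough because I is concave in p.
    return max(mutual_info_binary(k / steps, A) for k in range(1, steps))

A = [[0.9, 0.1],
     [0.1, 0.9]]
print(capacity_binary(A))  # approaches 1 - H(0.1), about 0.531 bits
```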
Suppose the random variable Z = (Y + 1) mod 2. Taking X as the input and Z as the output, we obtain the channel (X; Z), which is called "Channel Y" in this paper. Because {Y wins} = {X = 0, Y = 1} ∪ {X = 1, Y = 0} = {X = 0, Z = 0} ∪ {X = 1, Z = 1} = {one bit is successfully transmitted from the sender X to the receiver Z over "Channel Y"}, "Y wins one time" is equivalent to transmitting one bit of information successfully over "Channel Y."

Based on the channel coding theorem of Shannon's Information Theory, if the capacity of "Channel Y" is D, then for any transmission rate k/n ≤ D, we can receive k bits successfully by sending n bits with an arbitrarily small probability of
decoding error. Conversely, if "Channel Y" can transmit k bits to the receiver by sending n bits without error, there must be k ≤ nD. In a word, we have the following theorem.

Theorem 5 (player theorem). Suppose that the channel capacity of "Channel Y" composed of the random variables (X; Z) is D. Then one has the following: (1) if Y wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability close to 1 in k/D rounds; conversely, (2) if Y wins k times in n rounds, there must be k ≤ nD.

According to Theorem 5, we can determine the winning limitation of Y as long as we know the channel capacity D of "Channel Y."
Similarly, we can get the channel capacity D = max[I(X, Z)], 0 < q, a < 1, of "Channel Y." Thus, the channel capacity D of "Channel Y" is a function of p, which is denoted as D(p). Since Z is a deterministic function of Y, the mutual information takes the same form as above:

I(X, Z) = Σ_i Σ_j Pr(i, j) log(Pr(i, j)/[Pr(i) Pr(j)])
= a log[a/(pq)] + (p − a) log[(p − a)/(p(1 − q))] + (q − a) log[(q − a)/((1 − p)q)] + (1 + a − p − q) log[(1 + a − p − q)/((1 − p)(1 − q))]. (10)
From Theorems 4 and 5, we can obtain the quantitative results of "the statistical results of winning and losing" and "the game skills of the banker and the player."
Theorem 6 (strength theorem). In the game of "coin tossing," if the channel capacities of "Channel X" and "Channel Y" are C(q) and D(p), respectively, one has the following.

Case 1. If both X and Y do not try to adjust their habits in the process of the game, that is, p and q are constant, then, statistically, if C(q) > D(p), X will win; if C(q) < D(p), Y will win; and if C(q) = D(p), the final result of the game is a "draw."

Case 2. If Y implicitly adjusts his habit and X does not, that is, Y changes the probability distribution q of the random variable Y to enlarge the D(p) of "Channel Y" such that D(p) > C(q), then, statistically, Y will win. On the contrary, if X implicitly adjusts his habit and Y does not, so that D(p) < C(q), X will win.

Case 3. If both X and Y continuously adjust their habits and make C(q) and D(p) grow simultaneously, they will achieve a dynamic balance when p = q = 0.5, and there is no winner or loser in this case.
3.2. The Channel Capacity of the "Palm or Back" Game. In the "palm or back" game, three participants (A, B, and C) choose their actions of "palm" or "back" at the same time; if one of the participants chooses the action opposite to the other two (e.g., the others choose "palm" when he chooses "back"), he wins the round.
Obviously, this game is also a kind of "nonblind confrontation." We will use the method of channel capacity to analyze the winning and losing of the game.

Based on the Law of Large Numbers in probability theory, the frequency tends to the probability. Thus, according to the habits of A, B, and C, that is, the statistical regularities of their actions in the past, we have the probability distribution of their actions.

We use the random variable X to denote the state of A: X = 0 (X = 1) means that he chooses "palm" ("back"). Thus, the habit of A can be described by the probability distribution of X; that is, Pr(X = 0) = p, Pr(X = 1) = 1 − p, where 0 ≤ p ≤ 1.

We use the random variable Y to denote the state of B: Y = 0 (Y = 1) means that he chooses "palm" ("back"). Thus, the habit of B can be described by the probability distribution of Y; that is, Pr(Y = 0) = q, Pr(Y = 1) = 1 − q, where 0 ≤ q ≤ 1.

We use the random variable Z to denote the state of C: Z = 0 (Z = 1) means that he chooses "palm" ("back"). Thus, the habit of C can be described by the probability distribution of Z; that is, Pr(Z = 0) = r, Pr(Z = 1) = 1 − r, where 0 ≤ r ≤ 1.
Similarly, according to the Law of Large Numbers, we can obtain the joint probability distribution of the random variables (X, Y, Z) from the records of their game results after some rounds; namely,

Pr(A for palm, B for palm, C for palm) = Pr(X = 0, Y = 0, Z = 0) = a;
Pr(A for palm, B for palm, C for back) = Pr(X = 0, Y = 0, Z = 1) = b;
Pr(A for palm, B for back, C for palm) = Pr(X = 0, Y = 1, Z = 0) = c;
Pr(A for palm, B for back, C for back) = Pr(X = 0, Y = 1, Z = 1) = d;
Pr(A for back, B for palm, C for palm) = Pr(X = 1, Y = 0, Z = 0) = e;
Pr(A for back, B for palm, C for back) = Pr(X = 1, Y = 0, Z = 1) = f;
Pr(A for back, B for back, C for palm) = Pr(X = 1, Y = 1, Z = 0) = g;
Pr(A for back, B for back, C for back) = Pr(X = 1, Y = 1, Z = 1) = h, (11)

where 0 ≤ a, b, c, d, e, f, g, h ≤ 1 and

a + b + c + d + e + f + g + h = 1;
p = Pr(A for palm) = Pr(X = 0) = a + b + c + d;
q = Pr(B for palm) = Pr(Y = 0) = a + b + e + f;
r = Pr(C for palm) = Pr(Z = 0) = a + c + e + g. (12)
Suppose the random variable W = (X + Y + Z) mod 2; then the probability distribution of W is

Pr(W = 0) = Pr(X = 0, Y = 0, Z = 0) + Pr(X = 0, Y = 1, Z = 1) + Pr(X = 1, Y = 1, Z = 0) + Pr(X = 1, Y = 0, Z = 1) = a + d + g + f,
Pr(W = 1) = Pr(X = 0, Y = 0, Z = 1) + Pr(X = 0, Y = 1, Z = 0) + Pr(X = 1, Y = 0, Z = 0) + Pr(X = 1, Y = 1, Z = 1) = b + c + e + h. (13)
Taking X as the input and W as the output, we obtain the channel (X; W), which is called "Channel A" in this paper.
After removing the situations in which the three participants choose the same action, we have the following event equation: {A wins} = {A for palm, B for back, C for back} ∪ {A for back, B for palm, C for palm} = {X = 0, Y = 1, Z = 1} ∪ {X = 1, Y = 0, Z = 0} = {X = 0, W = 0} ∪ {X = 1, W = 1} = {one bit is successfully transmitted from the sender (X) to the receiver (W) over "Channel A"}.

Conversely, after removing the situations in which the three participants choose the same action, if {one bit is successfully transmitted from the sender (X) to the receiver (W) over "Channel A"}, then {X = 0, W = 0} ∪ {X = 1, W = 1} = {X = 0, Y = 1, Z = 1} ∪ {X = 1, Y = 0, Z = 0} = {A for palm, B for back, C for back} ∪ {A for back, B for palm, C for palm} = {A wins}. Thus, "A wins one time" is equivalent to transmitting one bit successfully from the sender X to the receiver W over "Channel A." From the channel coding theorem of Shannon's Information Theory, if the capacity of "Channel A" is E, then for any transmission rate k/n ≤ E, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "Channel A" can transmit k bits to the receiver by sending n bits without error, there must be k ≤ nE. In a word, we have the following theorem.
Theorem 7. Suppose that the channel capacity of "Channel A" composed of the random variables (X; W) is E. Then, after removing the situations in which the three participants choose the same action, one has the following: (1) if A wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability close to 1 in k/E rounds; conversely, (2) if A wins k times in n rounds, there must be k ≤ nE.
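The event identity behind Theorem 7 can be checked exhaustively over the eight outcomes of (X, Y, Z): once ties are removed, "A wins" coincides with the channel input X equaling the output W. A small sketch:

```python
from itertools import product

# With W = (X + Y + Z) mod 2 and the all-equal outcomes removed, "A wins"
# (A differs from B and C, who agree) is exactly the event {X = W}.
for x, y, z in product(range(2), repeat=3):
    if x == y == z:
        continue  # removed: the three participants choose the same action
    a_wins = (y == z) and (x != y)
    w = (x + y + z) % 2
    assert a_wins == (x == w)
print("equivalence holds for all six non-tie outcomes")
```

By symmetry, the same check with Y or Z in the role of X covers "Channel B" and "Channel C" below.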
In order to calculate the capacity of the channel (X; W), we first calculate the joint probability distribution of the random variables (X, W):

Pr(X = 0, W = 0) = Pr(X = 0, Y = 0, Z = 0) + Pr(X = 0, Y = 1, Z = 1) = a + d;
Pr(X = 0, W = 1) = Pr(X = 0, Y = 1, Z = 0) + Pr(X = 0, Y = 0, Z = 1) = c + b;
Pr(X = 1, W = 0) = Pr(X = 1, Y = 1, Z = 0) + Pr(X = 1, Y = 0, Z = 1) = g + f;
Pr(X = 1, W = 1) = Pr(X = 1, Y = 0, Z = 0) + Pr(X = 1, Y = 1, Z = 1) = e + h. (14)
Therefore, the mutual information between X and W equals

I(X, W) = (a + d) log[(a + d)/(p(a + d + f + g))] + (c + b) log[(c + b)/(p(b + c + e + h))] + (g + f) log[(g + f)/((1 − p)(a + d + f + g))] + (e + h) log[(e + h)/((1 − p)(b + c + e + h))]. (15)
Thus, the channel capacity of "Channel A" is equal to E = max[I(X, W)], and it is a function of q and r, which is denoted as E(q, r).
Taking Y as the input and W as the output, we obtain the channel (Y; W), which is called "Channel B." Similarly, we have the following.
Theorem 8. Suppose that the channel capacity of "Channel B" composed of the random variables (Y; W) is F. Then, after removing the situations in which the three participants choose the same action, one has the following: (1) if B wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability close
to 1 in k/F rounds; conversely, (2) if B wins k times in n rounds, there must be k ≤ nF.
The channel capacity F can be calculated in the same way as E. Here, the capacity of "Channel B" is a function of p and r, which is denoted as F(p, r).
Similarly, taking Z as the input and W as the output, we obtain the channel (Z; W), which is called "Channel C." So we have the following.

Theorem 9. Suppose that the channel capacity of "Channel C" composed of the random variables (Z; W) is G. Then, after removing the situations in which the three participants choose the same action, one has the following: (1) if C wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability close to 1 in k/G rounds; conversely, (2) if C wins k times in n rounds, there must be k ≤ nG.

The channel capacity G can be calculated in the same way as E. Now the capacity of "Channel C" is a function of p and q, which is denoted as G(p, q).
From Theorems 7, 8, and 9, we can qualitatively describe the winning and losing situations of A, B, and C in the "palm or back" game.
Theorem 10. If the channel capacities of "Channel A," "Channel B," and "Channel C" are E, F, and G, respectively, the statistical results of winning or losing depend on the values of E, F, and G: the one who has the largest channel capacity gains the advantage. Note that none of the three channel capacities can be adjusted by one participant individually. It is difficult to change the final results by adjusting one's own habit (namely, changing only one of p, q, and r separately), unless two of the participants cooperate secretly.
4. Models of "Finger Guessing" and "Draw Boxing"
4.1. Model of "Finger Guessing." "Finger guessing" is a game between the host and a guest at a banquet. The rules of the game are as follows. The host and the guest each choose one of the following four gestures at the same time in a round: bug, rooster, tiger, and stick. Then the winner is decided by the following regulations: "bug" is inferior to "rooster"; "rooster" is inferior to "tiger"; "tiger" is inferior to "stick"; and "stick" is inferior to "bug." In every other case, the round ends in a draw and nobody is punished.
The "host A" and "guest B" will play the "finger guessing" game again after the complete end of this round. The mathematical expression of the "finger guessing" game is as follows: suppose A and B are denoted by random variables X and Y, respectively; each has 4 possible values. Specifically,

X = 0 (or Y = 0) when A (or B) shows "bug";
X = 1 (or Y = 1) when A (or B) shows "rooster";
X = 2 (or Y = 2) when A (or B) shows "tiger";
X = 3 (or Y = 3) when A (or B) shows "stick".

If A shows x (namely, X = x, 0 ≤ x ≤ 3) and B shows y (namely, Y = y, 0 ≤ y ≤ 3) in a round, the necessary and sufficient condition for "A wins this round" is (x − y) mod 4 = 1. The necessary and sufficient condition for "B wins this round" is (y − x) mod 4 = 1. Otherwise, this round ends in a draw and the game proceeds to the next round.
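The winning rule above can be stated as a short program. The following sketch (the gesture encoding follows the text; the function name is our own) judges a single round:

```python
# A sketch of the round rules: 0 = bug, 1 = rooster, 2 = tiger, 3 = stick.
def round_outcome(x: int, y: int) -> str:
    """Return 'A', 'B', or 'draw' for host gesture x and guest gesture y."""
    if (x - y) % 4 == 1:
        return "A"    # A's gesture is exactly one step above B's
    if (y - x) % 4 == 1:
        return "B"
    return "draw"

# Example: rooster (1) beats bug (0); stick (3) beats tiger (2).
assert round_outcome(1, 0) == "A"
assert round_outcome(2, 3) == "B"
```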
Obviously, the "finger guessing" game is a kind of "nonblind confrontation." Who is the winner, and how many times does the winner win? How can each player win more often? We will use the "channel capacity method" of the "General Theory of Security" to answer these questions.
Based on the Law of Large Numbers in probability theory, frequency tends to probability. Thus, according to the habits of "host (X)" and "guest (Y)," that is, the statistical regularities of their actions in the past (if they meet for the first time, we can require them to play a "warm-up game" and record their habits), we can give the probability distributions of X and Y and the joint probability distribution of (X, Y), respectively:

0 < Pr(X = i) = p_i < 1, i = 0, 1, 2, 3; p_0 + p_1 + p_2 + p_3 = 1;
0 < Pr(Y = j) = q_j < 1, j = 0, 1, 2, 3; q_0 + q_1 + q_2 + q_3 = 1;
0 < Pr(X = i, Y = j) = t_{ij} < 1, i, j = 0, 1, 2, 3; Σ_{0≤i,j≤3} t_{ij} = 1;
p_x = Σ_{0≤y≤3} t_{xy}, x = 0, 1, 2, 3; q_y = Σ_{0≤x≤3} t_{xy}, y = 0, 1, 2, 3.
(16)
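As a minimal sketch of this estimation step (the warm-up rounds below are made-up sample data), the habit distributions in (16) can be estimated from recorded play by simple frequency counts:

```python
from collections import Counter

# Made-up warm-up data: each pair is (host gesture x, guest gesture y).
rounds = [(0, 1), (2, 3), (0, 1), (1, 0), (3, 3), (0, 2)]

n = len(rounds)
t = {(i, j): c / n for (i, j), c in Counter(rounds).items()}       # joint t_ij
p = [sum(t.get((i, j), 0.0) for j in range(4)) for i in range(4)]  # marginal p_i
q = [sum(t.get((i, j), 0.0) for i in range(4)) for j in range(4)]  # marginal q_j

# The marginals recover the row/column sums required by (16).
assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
```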
In order to analyze the winning situation of A, we construct a random variable Z = (Y + 1) mod 4. Then we use the random variables X and Z to form a channel (X; Z), which is called "channel A"; namely, the channel takes X as the input and Z as the output. Then we analyze some equalities of events. If A shows x (namely, X = x, 0 ≤ x ≤ 3) and B shows y (namely, Y = y, 0 ≤ y ≤ 3) in a round, one has the following.
If A wins this round, then (x − y) mod 4 = 1; that is, y = (x − 1) mod 4, so z = (y + 1) mod 4 = [(x − 1) + 1] mod 4 = x mod 4 = x. In other words, the output Z is always equal to the input X in "channel A" at this time. That is, a "bit" is successfully transmitted from the input X to the output Z.
Conversely, if a "bit" is successfully transmitted from the input X to the output Z in "channel A," then the output z is always equal to the input x; namely, z = x at this time. Then (x − y) mod 4 = (z − y) mod 4 = [(y + 1) − y] mod 4 = 1 mod 4 = 1. Hence, we can judge that "A wins" according to the rules of the game.
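The two directions of this argument can be checked exhaustively over all 16 gesture pairs; the following sketch verifies that "A wins" holds exactly when the channel output z = (y + 1) mod 4 equals the input x:

```python
# Exhaustive check over all 16 gesture pairs: "A wins" holds if and only if
# the channel output z = (y + 1) mod 4 equals the channel input x.
for x in range(4):
    for y in range(4):
        a_wins = (x - y) % 4 == 1
        z = (y + 1) % 4
        assert a_wins == (z == x)
```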
Combining the situations above, one has the following.
Lemma 11. In the "finger guessing" game, "A wins one time" is equivalent to "a 'bit' is successfully transmitted from the input to the output of 'channel A.'"

Combining Lemma 11 with the "channel coding theorem" of Shannon's Information Theory: if the capacity of "channel A" is C, then for any transmission rate k/n ≤ C, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "channel A" can transmit k bits to the receiver by sending n bits without error, there must be k ≤ nC. In a word, we have the following theorem.
Theorem 12. Suppose that the channel capacity of "channel A," composed of the random variables (X; Z), is C. Then, after removing the rounds that end in a "draw," one has the following: (1) if A wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability arbitrarily close to 1 in k/C rounds; conversely, (2) if A wins k times in n rounds, there must be k ≤ nC.
According to Theorem 12, we only need to figure out the channel capacity C of "channel A"; then the limit on the number of times "A wins" is determined. So we can calculate the channel capacity C: first, the joint probability distribution of (X, Z) is Pr(X = i, Z = j) = Pr(X = i, (Y + 1) mod 4 = j) = Pr(X = i, Y = (j − 1) mod 4) = t_{i,(j−1) mod 4}, i, j = 0, 1, 2, 3.
Therefore, the channel capacity of the channel A(X; Z) is

C = max [I(X; Z)] = max { Σ_{0≤i,j≤3} t_{i,(j−1) mod 4} log [ t_{i,(j−1) mod 4} / (p_i q_{(j−1) mod 4}) ] }.
(17)
The max in the equation is taken over the real numbers satisfying the following conditions: 0 < p_i, t_{ij} < 1, i, j = 0, 1, 2, 3; p_0 + p_1 + p_2 + p_3 = 1; Σ_{0≤i,j≤3} t_{ij} = 1; p_x = Σ_{0≤y≤3} t_{xy}. Thus, the capacity C is actually a function of the positive real variables satisfying q_0 + q_1 + q_2 + q_3 = 1 and 0 < q_j < 1, j = 0, 1, 2, 3; namely, it can be written as C(q_0, q_1, q_2, q_3), where q_0 + q_1 + q_2 + q_3 = 1.
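To make this computation concrete, the following sketch evaluates the mutual information I(X; Z) of "channel A" for two illustrative joint distributions (both chosen by us as assumptions, not taken from the text): independent uniform play, and play in which A always answers B's habitual gesture with the winning one.

```python
from math import log2

def mutual_info(t):
    """I(X; Z) for 'channel A': t[i][j] = Pr(X = i, Y = j), Z = (Y + 1) mod 4."""
    joint = [[t[i][(j - 1) % 4] for j in range(4)] for i in range(4)]
    px = [sum(row) for row in joint]
    pz = [sum(joint[i][j] for i in range(4)) for j in range(4)]
    return sum(joint[i][j] * log2(joint[i][j] / (px[i] * pz[j]))
               for i in range(4) for j in range(4) if joint[i][j] > 0)

# Independent uniform play: the channel carries no information.
I_uniform = mutual_info([[1 / 16] * 4 for _ in range(4)])  # 0.0
# A always plays the winning answer to B's gesture (y = (x - 1) mod 4),
# so Z = X deterministically and I(X; Z) = log2(4) = 2 bits per round.
I_best = mutual_info([[1 / 4 if j == (i - 1) % 4 else 0 for j in range(4)]
                      for i in range(4)])  # 2.0
```

These two values bracket the behavior described above: the channel capacity, and hence the achievable winning rate, depends entirely on how predictable B's habit is to A.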
Similarly, we can analyze the situation of "B wins." We can see that the number of times "A wins" (through C(q_0, q_1, q_2, q_3)) depends on the habit (q_0, q_1, q_2, q_3) of B. If both A and B stick to their habits, their winning or losing situation is determined; if either A or B adjusts his habit, he can win statistically when his channel capacity is larger; if both A and B adjust their habits, their situations will eventually reach a dynamic balance.
4.2. Model of "Draw Boxing". "Draw boxing" is more complicated than "finger guessing," and it is also a game between the host and the guest at a banquet. The rule of the game is that the host (A) and the guest (B) independently show one of the six gestures from 0 to 5 and shout one of the eleven numbers from 0 to 10. That is, in each round, "host A" is a two-dimensional random variable A = (X, Y), where 0 ≤ X ≤ 5 is the gesture shown by the host and 0 ≤ Y ≤ 10 is the number shouted by him. Similarly, "guest B" is also a two-dimensional random variable B = (F, G), where 0 ≤ F ≤ 5 is the gesture shown by the guest and 0 ≤ G ≤ 10 is the number shouted by him. If A and B take the values (x, y) and (f, g) in a certain round, respectively, the rules of the "draw boxing" game are:

If x + f = y, A wins; if x + f = g, B wins.
If neither of the two cases occurs, the result of this round is a "draw," and A and B continue to the next round. Specifically, when the numbers shouted by both sides are the same (namely, g = y), the result of this round is a "draw." If the numbers shouted by both sides are different but the sum of the two gestures equals neither shouted number, the result of this round is also a "draw."
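The rules can be summarized in a short routine (a sketch; the function name is our own):

```python
# Host A shows gesture x and shouts y; guest B shows gesture f and shouts g.
def boxing_outcome(x: int, y: int, f: int, g: int) -> str:
    if y != g and x + f == y:
        return "A"    # the gesture sum matches only A's shouted number
    if y != g and x + f == g:
        return "B"    # the gesture sum matches only B's shouted number
    return "draw"     # equal shouts, or the sum matches neither shout

assert boxing_outcome(2, 5, 3, 7) == "A"     # 2 + 3 = 5 = y
assert boxing_outcome(2, 6, 3, 5) == "B"     # 2 + 3 = 5 = g
assert boxing_outcome(2, 4, 3, 4) == "draw"  # both shouted 4
```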
Obviously, the "draw boxing" game is a kind of "nonblind confrontation." Who is the winner, and how many times does the winner win? How can each player win more often? We will use the channel capacity method of the "General Theory of Security" to answer these questions.
Based on the Law of Large Numbers in probability theory, frequency tends to probability. Thus, according to the habits of "host (A)" and "guest (B)," that is, the statistical regularities of their actions in the past (if they meet for the first time, we can require them to play a "warm-up game" and record their habits), we can give the probability distributions of A and B and their components X, Y, F, and G, and the joint probability distribution of (X, Y, F, G), respectively.

The probability of "A shows x": 0 < Pr(X = x) = a_x < 1, 0 ≤ x ≤ 5; a_0 + a_1 + a_2 + a_3 + a_4 + a_5 = 1.
The probability of "B shows f": 0 < Pr(F = f) = b_f < 1, 0 ≤ f ≤ 5; b_0 + b_1 + b_2 + b_3 + b_4 + b_5 = 1.
The probability of "A shouts y": 0 < Pr(Y = y) = c_y < 1, 0 ≤ y ≤ 10; Σ_{0≤y≤10} c_y = 1.
The probability of "B shouts g": 0 < Pr(G = g) = s_g < 1, 0 ≤ g ≤ 10; Σ_{0≤g≤10} s_g = 1.
The probability of "A shows x and shouts y": 0 < Pr[A = (x, y)] = Pr(X = x, Y = y) = m_{xy} < 1, 0 ≤ x ≤ 5, 0 ≤ y ≤ 10; Σ_{0≤x≤5, 0≤y≤10} m_{xy} = 1.
The probability of "B shows f and shouts g": 0 < Pr[B = (f, g)] = Pr(F = f, G = g) = h_{fg} < 1, 0 ≤ f ≤ 5, 0 ≤ g ≤ 10; Σ_{0≤f≤5, 0≤g≤10} h_{fg} = 1.
The probability of "A shows x and shouts y" and "B shows f and shouts g" at the same time: 0 < Pr[A = (x, y), B = (f, g)] = Pr(X = x, Y = y, F = f, G = g) = t_{xyfg} < 1, where 0 ≤ x, f ≤ 5 and 0 ≤ y, g ≤ 10; Σ_{0≤x,f≤5, 0≤y,g≤10} t_{xyfg} = 1.
In order to analyze the situation in which A wins, we construct a two-dimensional random variable

W = (U, V) = (X δ(G − Y), X + F). (18)

The function δ is defined by δ(0) = 0 and δ(x) = 1 when x ≠ 0. Therefore,

Pr [W = (u, v)] = Σ_{x+f=v, xδ(g−y)=u} t_{xyfg} ≜ w_{uv}, where 0 ≤ u ≤ 5, 0 ≤ v ≤ 10. (19)
Then we use the random variables A and W to form a channel (A; W), which is called "channel A" and takes A as the input and W as the output.
Then we analyze some equalities. In a certain round, A shows x (i.e., X = x, 0 ≤ x ≤ 5) and shouts y (i.e., Y = y, 0 ≤ y ≤ 10); meanwhile, B shows f (i.e., F = f, 0 ≤ f ≤ 5) and shouts g (i.e., G = g, 0 ≤ g ≤ 10). According to the evaluation rules, we have the following: if A wins this round, then x + f = y and y ≠ g. Thus, δ(g − y) = 1 and W = (u, v) = (xδ(g − y), x + f) = (x, y) = A. In other words, the output W of "channel A" is always equal to its input A at this time; that is to say, a "bit" is sent successfully from the input A to its output W.

Conversely, if one bit is successfully sent from the input A to the output W in "channel A," then the output (u, v) = (xδ(g − y), x + f) is equal to the input (x, y) at this time; hence xδ(g − y) = x with x + f = y, that is, y ≠ g and x + f = y. Thus, A wins this round according to the evaluation rules.
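This equivalence can be checked by brute force. One caveat: the converse direction implicitly assumes x ≠ 0, since for x = 0 with y = g the output can reproduce the input even on a draw. The sketch below verifies the equivalence for all rounds with x ≥ 1:

```python
# Brute-force check of the equivalence "A wins" <=> W = (x, y), for x >= 1.
delta = lambda v: 0 if v == 0 else 1
mismatches = []
for x in range(1, 6):
    for y in range(11):
        for f in range(6):
            for g in range(11):
                a_wins = (x + f == y) and (y != g)
                w = (x * delta(g - y), x + f)   # channel output W = (U, V)
                if a_wins != (w == (x, y)):
                    mismatches.append((x, y, f, g))
assert mismatches == []
```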
Combining the cases above, we have the following.
Lemma 13. In a "draw boxing" game, "A wins one time" is equivalent to "one bit is successfully sent from the input of 'channel A' to its output."
Combining Lemma 13 with the "channel coding theorem" of Shannon's Information Theory: if the capacity of "channel A" is D, then for any transmission rate k/n ≤ D, we can receive k bits successfully by sending n bits with an arbitrarily small probability of decoding error. Conversely, if "channel A" can transmit k bits to the receiver by sending n bits without error, there must be k ≤ nD. In a word, we have the following theorem.
Theorem 14. Suppose that the channel capacity of "channel A," composed of the random variables (A; W), is D. Then, after removing the rounds that end in a "draw," one has the following: (1) if A wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability arbitrarily close to 1 in k/D rounds; conversely, (2) if A wins k times in n rounds, there must be k ≤ nD.
According to Theorem 14, we only need to figure out the channel capacity D of "channel A"; then the limit on the number of times "A wins" is determined. So we can calculate the channel capacity D:
D = max [I(A; W)]
= max { Σ_{a,w} Pr(a, w) log [ Pr(a, w) / (Pr(a) Pr(w)) ] }
= max { Σ_{x,y,f,g} Pr(x, y, xδ(g − y), x + f) log [ Pr(x, y, xδ(g − y), x + f) / (Pr(x, y) Pr(xδ(g − y), x + f)) ] }
= max { Σ_{x,y,f,g} t_{x,y,xδ(g−y),x+f} log [ t_{x,y,xδ(g−y),x+f} / (m_{xy} w_{xδ(g−y),x+f}) ] }.
(20)
The maximum in the equation is taken over real numbers satisfying the following conditions:

Σ_{0≤y≤10} c_y = 1, 0 < c_y < 1, 0 ≤ y ≤ 10;
Σ_{0≤x≤5, 0≤y≤10} m_{xy} = 1, 0 < m_{xy} < 1, 0 ≤ x ≤ 5, 0 ≤ y ≤ 10;
Σ_{0≤f≤5, 0≤g≤10} h_{fg} = 1, 0 < h_{fg} < 1, 0 ≤ f ≤ 5, 0 ≤ g ≤ 10.
(21)

Thus, the capacity D is actually a function of b_f and s_g, which satisfy 0 < b_f < 1 with b_0 + b_1 + b_2 + b_3 + b_4 + b_5 = 1, 0 ≤ f ≤ 5, and 0 < s_g < 1 with Σ_{0≤g≤10} s_g = 1, 0 ≤ g ≤ 10; namely, it can be written as D(b_f, s_g).
Similarly, we can analyze the situation of "B wins." We can see that the number of times "A wins" (through D(b_f, s_g)) depends on the habit (b_f, s_g) of B. If both A and B stick to their habits, their winning or losing is determined; if either A or B adjusts his habit, he can win statistically when his channel capacity is larger; if both A and B adjust their habits, their situations will eventually reach a dynamic balance.
5. Unified Model of Linearly Separable "Nonblind Confrontation"
Suppose that the hacker (X) has n methods of attack; that is, the random variable X has n values, which can be denoted as {x_0, x_1, ..., x_{n−1}} = {0, 1, 2, ..., n − 1}. These n methods make up the entire "arsenal" of the hacker.
Suppose that the honker (Y) has m methods of defense; that is, the random variable Y has m values, which can be denoted as {y_0, y_1, ..., y_{m−1}} = {0, 1, 2, ..., m − 1}. These m methods make up the entire "arsenal" of the honker.
Remark 15. In the following, we will switch freely between "the methods x_i, y_j" and "the numbers i, j" as needed; that is, x_i = i and y_j = j. This transformation makes the problem clearer in interpretation and simpler in form.
In the nonblind confrontation, there is a rule of winning or losing between each hacker method x_i (i = 0, 1, ..., n − 1) and each honker method y_j (j = 0, 1, ..., m − 1). So there must exist a subset H of the two-dimensional set {(i, j): 0 ≤ i ≤ n − 1, 0 ≤ j ≤ m − 1} that makes "x_i is superior to y_j" true if and only if (i, j) ∈ H. If the structure of the subset H is simple, we can construct a certain channel to make "the hacker wins one time" equivalent to "one bit is successfully transmitted from the sender to the receiver." Then we analyze it using Shannon's "channel coding theorem." For example:
in the game of "rock-paper-scissors," H = {(i, j): 0 ≤ i, j ≤ 2, (i − j) mod 3 = 2};
in the game of "coin tossing," H = {(i, j): 0 ≤ i = j ≤ 1};
in the game of "palm or back," H = {(i, j, k): 0 ≤ i ≠ j = k ≤ 1};
in the game of "finger guessing," H = {(i, j): 0 ≤ i, j ≤ 3, (i − j) mod 4 = 1};
in the game of "draw boxing," H = {(x, y, f, g): 0 ≤ x, f ≤ 5, 0 ≤ g ≠ y ≤ 10, x + f = y}.
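These rule sets can be restated as predicates (a compact sketch using the same encodings as the text):

```python
# The winning-rule sets H above, written as membership tests.
H_rps   = lambda i, j: 0 <= i <= 2 and 0 <= j <= 2 and (i - j) % 3 == 2
H_coin  = lambda i, j: 0 <= i <= 1 and i == j
H_palm  = lambda i, j, k: 0 <= i <= 1 and 0 <= j <= 1 and j == k and i != j
H_guess = lambda i, j: 0 <= i <= 3 and 0 <= j <= 3 and (i - j) % 4 == 1
H_box   = lambda x, y, f, g: (0 <= x <= 5 and 0 <= f <= 5 and
                              0 <= y <= 10 and 0 <= g <= 10 and
                              g != y and x + f == y)

assert H_guess(1, 0) and not H_guess(0, 1)
assert H_box(2, 5, 3, 7) and not H_box(2, 5, 3, 5)
```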
We have constructed a corresponding communication channel for each H above in this paper. However, it is difficult to construct such a communication channel for a general H. But if the set H can be decomposed as H = {(i, j): i = f(j), 0 ≤ i ≤ n − 1, 0 ≤ j ≤ m − 1} (namely, the first component i of H is a function of its second component j), we can construct a random variable Z = f(Y). Then, considering the channel (X; Z), we can give the following equalities.
If the "hacker X" attacks with the method x_i and the "honker Y" defends with the method y_j in a certain round, then if "X wins," that is, i = f(j), the output of the channel (X; Z) is Z = f(y_j) = f(j) = i = x_i. So the output of the channel is the same as its input now; that is, one bit is successfully transmitted from the input of the channel (X; Z) to its output. Conversely, if "one bit is successfully transmitted from the input of the channel (X; Z) to its output," then input = output; that is, X = f(Y), which means "X wins."
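A minimal sketch of this generic construction, instantiated with the "finger guessing" rule f(j) = (j + 1) mod 4 as the example:

```python
# For a linear rule set H = {(i, j): i = f(j)}, the channel output Z = f(Y)
# equals the input X exactly when the hacker wins.
f = lambda j: (j + 1) % 4   # finger guessing: i beats j iff i = (j + 1) mod 4

for i in range(4):          # hacker's method x_i = i
    for j in range(4):      # honker's method y_j = j
        hacker_wins = (i == f(j))         # (i, j) in H
        z = f(j)                          # channel output Z = f(Y)
        assert hacker_wins == (z == i)    # win <=> one bit goes through
```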
Combining the cases above, we obtain the following theorem.
Theorem 16 (the limitation theorem of linear nonblind confrontation). In the "nonblind confrontation," suppose the hacker X has n attack methods {x_0, x_1, ..., x_{n−1}} = {0, 1, 2, ..., n − 1} and the honker Y has m defense methods {y_0, y_1, ..., y_{m−1}} = {0, 1, 2, ..., m − 1}, and both sides comply with the rule of winning or losing: "x_i is superior to y_j" if and only if (i, j) ∈ H, where H is a subset of the rectangular set {(i, j): 0 ≤ i ≤ n − 1, 0 ≤ j ≤ m − 1}.
For X, if H is linear and can be written as H = {(i, j): i = f(j), 0 ≤ i ≤ n − 1, 0 ≤ j ≤ m − 1} (i.e., the first component i of H is a certain function f(·) of its second component j), we can construct a channel (X; Z) with Z = f(Y). If C is the channel capacity of the channel (X; Z), we have the following.

(1) If X wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability arbitrarily close to 1 in k/C rounds.
(2) If X wins k times in n rounds, there must be k ≤ nC.

For Y, if H is linear and can be written as H = {(i, j): j = g(i), 0 ≤ i ≤ n − 1, 0 ≤ j ≤ m − 1} (i.e., the second component j of H is a certain function g(·) of its first component i), we can construct a channel (Y; G) with G = g(X). If D is the channel capacity of the channel (Y; G), we have the following.

(3) If Y wants to win k times, he must have a certain skill (corresponding to the Shannon coding) to achieve the goal with probability arbitrarily close to 1 in k/D rounds.
(4) If Y wins k times in n rounds, there must be k ≤ nD.

6. Conclusion
It seems that these games of nonblind confrontation are different. However, we use a unified method to reach a distinctive conclusion; that is, we establish a channel model that transforms "the attacker or the defender wins one time" into "one bit is transmitted successfully in the channel." Thus, "the confrontation between attacker and defender" is transformed into "the calculation of channel capacities" by the Shannon coding theorem [6]. We find that the winning-or-losing rule sets of these games are linearly separable. The linearly inseparable case is still an open problem. These winning-or-losing strategies can be applied in the big data field, which provides a new perspective for the study of big data privacy protection.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This paper is supported by the National Key Research and Development Program of China (Grant nos. 2016YFB0800602 and 2016YFB0800604), the National Natural Science Foundation of China (Grant nos. 61573067 and 61472045), the Beijing City Board of Education Science and Technology Project (Grant no. KM201510015009), and the Beijing City Board of Education Science and Technology Key Project (Grant no. KZ201510015015).
References
[1] R. J. Deibert and R. Rohozinski, "Risking security: policies and paradoxes of cyberspace security," International Political Sociology, vol. 4, no. 1, pp. 15–32, 2010.
[2] L. Shi, C. Jia, and S. Lv, "Research on end hopping for active network confrontation," Journal of China Institute of Communications, vol. 29, no. 2, p. 106, 2008.
[3] H. Demirkan and D. Delen, "Leveraging the capabilities of service-oriented decision support systems: putting analytics and big data in cloud," Decision Support Systems, vol. 55, no. 1, pp. 412–421, 2013.
[4] Y. Yang, H. Peng, L. Li, and X. Niu, "General theory of security and a study case in internet of things," IEEE Internet of Things Journal, 2016.
[5] Y. Yang, X. Niu, L. Li, H. Peng, J. Ren, and H. Qi, "General theory of security and a study of hacker's behavior in big data era," Peer-to-Peer Networking and Applications, 2016.
[6] C. E. Shannon, "Coding theorems for a discrete source with a fidelity criterion," IRE National Convention Record, vol. 4, pp. 142–163, 1959.
[7] B. Kerr, M. A. Riley, M. W. Feldman, and B. J. M. Bohannan, "Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors," Nature, vol. 418, no. 6894, pp. 171–174, 2002.
[8] K. L. Chung and W. Feller, "On fluctuations in coin-tossing," Proceedings of the National Academy of Sciences of the United States of America, vol. 35, pp. 605–608, 1949.
[9] K.-T. Tseng, W.-F. Huang, and C.-H. Wu, "Vision-based finger guessing game in human-machine interaction," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '06), pp. 619–624, December 2006.