Numerical verification of the historicity of the ESA telecommand authentication approach

F. Chiaraluce*, E. Gambi† and S. Spinsante‡
Dipartimento di Elettronica, Intelligenza Artificiale e Telecomunicazioni, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy

This paper presents an algebraic method to break the authentication system proposed in the past by the European Space Agency and nowadays cited as one of the few examples of security equipment for space missions. The study demonstrates that this system is indeed vulnerable and that the recent proposal to design new, more robust authentication algorithms for space applications is correct and well motivated.

I. Introduction

Security issues in satellite systems, for a long time relegated to academic discussions and addressed through no more than "optional" recommendations, are now gaining increasing importance. The reasons for this are manifold. On one hand, satellite services such as communication, imagery and global positioning are more and more integrated with ground systems and provide services that are essential to everyday life. As such, they represent vital assets that need to be protected. On the other hand, the threat to satellite systems is quite tangible: it may be due to accidental contributions, like noise or human errors, but also to malicious jamming. The vulnerability to the latter, in particular, is emphasized by the use of the Internet as a part of the ground segment, since Internet-based systems can be attacked by anyone, from everywhere, at any time.

Faced with this evidence, the Consultative Committee for Space Data Systems (CCSDS) Security Working Group (WG) has recently started a number of activities asking national agencies for a fast and thorough evaluation of security threats against space missions, and encouraging proposals for encryption and authentication algorithms to be mandatorily used in future missions. A starting point for these activities is obviously constituted by the (very few) examples of specific agency security implementations already presented in the past; among them, the European Space Agency (ESA) telecommand (TC) authentication approach. In the recently revised version of the CCSDS Green Book "On the application of CCSDS protocols to secure systems"1 the ESA scheme has been correctly labeled as "historical", since progress in modern, high speed processors and flaws in its foundation technology make its application unsuited for current and future scenarios. Though this remark is fully agreeable in principle, it deserves, in our opinion, further investigation; in other words, it is interesting to confirm the historicity of the ESA scheme through an explicit quantitative evaluation.

The ESA TC authentication approach belongs to the class of Hash-based Message Authentication Code (HMAC) techniques, where a secret key is used to produce the check word. More precisely, the MAC is generated as the output of the cascade of three blocks: the Hash function, the Hard Knapsack and the Erasing Block. In some previous papers, we began to test the vulnerability of this scheme. In particular, in Refs. 2 and 3 we developed an in-depth analysis of the randomness level of the final MAC (and also of the intermediate signature, i.e., the Hash result). Classic NIST (National Institute of Standards and Technology) test suites, but also "ad hoc" tests, were used for this purpose. More recently, in Ref. 4, we focused on the problem of internal collisions, demonstrating that, simply knowing one input-output pair, the generator polynomial of the Hash function can be computed by hand through a very limited number of attempts. We also discussed other kinds of attacks, like replay attacks or forgery attacks.

In this paper, we go further ahead by presenting a complete cryptanalysis of the ESA MAC generator, thus including the Hard Knapsack (HK) and the Erasing Block (EB), which were not considered in the previous studies. Although it is rather easy to demonstrate that the secrecy of the HK factors (2880 bits) is questionable when the opponent can apply a chosen-text attack, the EB, which deletes the 8 least significant bits of the Knapsack output, introduces a weak nonlinearity, making it more complex for the opponent to invert the transformation. We have studied, by means of explicit numerical evaluation, the ways to perform an attack for discovering the last part of the key, and we have evaluated, in probabilistic terms, the risk that such an attack is successful. Basically, the proposed cryptanalysis will confirm the CCSDS point of view that regards the ESA approach as historical. However, the numerical evaluation will provide solid bases, as the result of a quantitative assessment rather than merely an impression induced by technological progress. At the same time, it will provide the bases to analyze, in a similar way, the security level of the authentication algorithms that will be proposed in the near future to replace the historical one.

* Prof. Eng., email: [email protected] (corresponding author).
† Dr. Eng., email: [email protected].
‡ Dr. Eng., email: [email protected].

SpaceOps 2006 Conference, AIAA 2006-5580. Copyright © 2006 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

II. The ESA Authentication System

A schematic representation of the MAC generator used in the ESA Packet Telecommand Standard is shown in Fig. 1.

The notation in the figure needs explanation (other details can be found in Ref. 5). In particular, it is useful to recall that m is related to the TC frame data field: it is obtained through the concatenation of the TC, the contents and ID (Identifier) of an LAC (Logical Authentication Channel) counter, and a variable number of stuffing zeros; S = f(m) is the corresponding MAC. The Hash function, used to reduce the data field to a 60-bit value denoted by P, is realized via a Linear Feedback Shift Register (LFSR) that is 60 bits long, whose feedback coefficients are programmable but secret, as they are part of the authentication key. This way, the Hash result is kept secret as well.

The LFSR is shown in Fig. 2: sums are modulo 2 (XOR) while the feedback coefficients cfi (i = 0, 1, …, 59) are taken from the secret key. The Hash result is given by the contents of the LFSR after the last bit of m has entered the register. The LFSR shown in Fig. 2, known as “Galois implementation”, can be replaced by other schemes (for example the “Fibonacci implementation”) that however exhibit features quite similar to those discussed afterwards.
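To make the description above concrete, the following is a minimal Python sketch of a keyed 60-bit Galois LFSR of this kind. It is an illustrative reconstruction, not the standard's reference implementation: the exact bit ordering and update conventions of the ESA decoder are specified in Ref. 5 and are assumed here.

```python
# Illustrative sketch (not the ESA reference implementation) of a 60-bit
# Galois LFSR hash with programmable feedback coefficients cf0..cf59.
# Assumptions: initial state 1 in the leftmost cell, message bits entered
# MSB first, feedback = bit shifted out XOR incoming bit (see Ref. 5 for
# the authoritative conventions).

WIDTH = 60

def lfsr_hash(message_bits, cf):
    """Reduce a message bit sequence to the 60-bit pre-signature P."""
    assert len(cf) == WIDTH
    state = [0] * WIDTH
    state[0] = 1                                  # initial condition
    for bit in message_bits:
        fb = state[WIDTH - 1] ^ bit               # feedback bit
        new = [fb & cf[0]]                        # Galois form: the feedback
        for i in range(1, WIDTH):                 # is XORed into every tapped cell
            new.append(state[i - 1] ^ (fb & cf[i]))
        state = new
    return state                                  # pre-signature P
```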

The Hard Knapsack is a transformation which involves the multiplication of each incoming bit by one out of 60 different vectors, 48 bits long, which are secret in their turn, and the final sum of the results. The operation is shown in Fig. 3; the result is a preliminary version of the MAC, denoted by S' and computed as follows:

S' = ( ∑_{j=0}^{59} P_j · W_j ) mod Q    (1)

where Q = 2^48, Wj is the j-th Hard Knapsack factor, and Pj is the j-th bit of the Hash result. S' is truncated in such a way as to obtain a final MAC S that, unlike S', is 40 bits long. This is done by the Erasing Block, which eliminates the 8 least significant bits of the Knapsack output.
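The knapsack and erasing stages of Eq. (1) are simple enough to sketch directly. In the fragment below, the assumed conventions are that bit 0 of each factor is its MSB (as in Fig. 3), so the "first 40 bits" kept by the Erasing Block are the most significant ones and erasing the 8 LSBs amounts to a right shift.

```python
# Sketch of Eq. (1) and the Erasing Block. P is the 60-bit pre-signature
# (a list of 0/1), W a list of 60 secret 48-bit integers. Assumption: the
# 40 bits kept by the Erasing Block are the most significant ones, so
# erasing the 8 LSBs is a right shift by 8.

Q = 2 ** 48

def hard_knapsack(P, W):
    """S' = (sum of the factors selected by the bits of P) mod 2^48."""
    return sum(w for p, w in zip(P, W) if p) % Q

def erasing_block(s_prime):
    """Drop the 8 least significant bits, leaving the 40-bit MAC S."""
    return s_prime >> 8

def esa_mac(P, W):
    return erasing_block(hard_knapsack(P, W))
```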

[Figure 1. Main functions of the ESA MAC generator: m → HASH FUNCTION → P → HARD KNAPSACK → S' → ERASING BLOCK → S.]

[Figure 2. LFSR implementation of the Hash function: input In, output Out, feedback coefficients cf0, cf1, cf2, …, cf58, cf59.]


At the receiver, on board the satellite, the MAC is computed again by using a copy of the secret key, and the TC is accepted if and only if the calculated MAC coincides with the received one. Otherwise, the message is blocked from further distribution.

Basically, the presumed strength of this scheme lies in the very long secret key, consisting of the 60 cfi's and the 60 Wj's, for a total of 60 + 60×48 = 2940 bits, which are unknown to an opponent and discourage brute-force strategies. Actually, in the next sections we show that the intrinsic linearity of the scheme (mitigated only by the action of the EB) makes this system highly vulnerable, to the point that an opponent able to apply a chosen-text attack can even discover the key completely, eventually realizing a destructive total break attack.

III. Choice of the LFSR's Coefficients

As mentioned, the feedback coefficients of the LFSR implementing the Hash function in the ESA TC authentication scheme constitute the first 60 bits of the secret key. Thus, their determination is the first step for an opponent wishing to mount a total break attack. The ESA standard, which is quite clear in fixing the initial condition of the LFSR, is not equally clear in establishing the structure of these coefficients.

In principle, the LFSR's coefficients could be taken in such a way as to have a generator polynomial g(x) of maximum degree. This means that g(x):

1. cannot be factorized as the product of two (or more) polynomials with binary coefficients;
2. is a factor of x^N + 1, with N = 2^60 − 1.

A g(x) satisfying conditions 1 and 2 is said to be primitive and irreducible, and an LFSR with such a polynomial generates maximal length sequences. It is easy to see that

g(x) = x^60 + x^59 + 1    (2)

is a polynomial of this kind. As a consequence of the assumption of (2) (or of another polynomial with the same features), one could expect the pre-signature P to look like a pseudo-random sequence, with advantages also for the randomness level of the signature S. Contrary to this expectation, however, the pre-signature P is really not the output of a random generator: it is extracted from the LFSR once the message has been completely loaded, and the randomness features are evaluated over a sequence of length 60, which is too short for appreciating the maximal length properties. Obviously, by changing the message, the pre-signature changes as well, and a statistical analysis, for example in terms of symbol 1 and symbol 0 probabilities (Frequency Test), can be done on the entire set of simulations (for instance, by considering 25,000 messages, the number of P-bits produced is 1,500,000).

Computing the probabilities of symbols 1 and 0 is one of the tests included in the well-known NIST test suite6. The purpose of this test is to determine whether the numbers of 1s and 0s in a sequence are approximately the same, as would be expected for a truly random sequence. Other important tests are6: Block Frequency, Runs, Autocorrelation, Longest Run of Ones, Rank, DFT. Each test produces a so-called p_value: a real number in [0, 1] estimating the probability that a finite realization of an ideally random binary process deviates from the ideal statistics more than the given sequence does. Obviously, the higher the p_value, the more random the sequence. In Refs. 3 and 4, we assumed that a test is passed if the p_value is greater than a threshold th = 0.01, which is a commonly adopted choice.
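As an illustration, the Frequency (Monobit) Test of Ref. 6 can be coded in a few lines; the sketch below follows the formulation in NIST SP 800-22, where the p_value is obtained from the complementary error function of the normalized bit-count difference.

```python
# Frequency (Monobit) Test, following NIST SP 800-22 (Ref. 6): the p_value
# is erfc(|S_n| / sqrt(2n)), with S_n the sum of the bits mapped to +/-1.

import math

def frequency_test(bits, th=0.01):
    n = len(bits)
    s_n = sum(1 if b else -1 for b in bits)       # 1 -> +1, 0 -> -1
    p_value = math.erfc(abs(s_n) / math.sqrt(2 * n))
    return p_value, p_value > th                  # (p_value, passed?)
```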

[Figure 3. Implementation of the Hard Knapsack function: each pre-signature bit Pj (j = 0, …, 59) multiplies a secret 48-bit factor Wj taken from the secret key (bit 0 = MSB, bit 47 = LSB), and the results are summed to give S'0 … S'47.]


We have repeated the Frequency Test assuming different criteria for generating the messages but, in every case, the test was not passed by the pre-signature, with a p_value everywhere near zero. It must be said, for the sake of clarity, that this does not prevent the signature S from having good statistical properties. As shown in Ref. 3, most of the NIST tests are passed by S, with some exceptions, particularly with regard to the Autocorrelation Test (for this reason, in Ref. 3 a modified scheme was proposed that is able to eliminate these unfavourable cases).

By repeating the test with polynomials g(x) of not maximum degree (i.e., not satisfying conditions 1 and 2), the pre-signature P continued to fail the Frequency Test, whilst the signature S exhibited faults in more tests than before. Examples of polynomials of this second category are:

g(x) = x^60 + x^27 + 1    (3a)

g(x) = x^60 + x^14 + 1    (3b)

g(x) = x^60 + x^50 + 1    (3c)

Thus, our analysis has confirmed that the adoption of a polynomial of not maximum degree introduces a penalty on the randomness of S. However, we have also proved that this penalty is limited (quantitative details can be found in Ref. 4 and are omitted here for brevity) while, as a counterpart, the adoption of a polynomial of maximum degree would set a number of constraints on the structure of g(x), greatly facilitating the work of an opponent. This is due to the fact that the number of polynomials of maximum degree is rather small, so they could easily be tested by a brute-force attack. Since the adoption of such a polynomial has no evident merit with regard to the properties of P and S, it is better to leave the choice of g(x) free, which makes the work harder for anyone aiming to determine the LFSR's coefficients.

IV. Internal and External Collisions

Internal collisions exist naturally among the pre-signatures, because of the reduced size of the LFSR. As the latter has 60 cells while the minimum length of the incoming message is 64 bits, it is certain that, even for the shortest messages, 16 inputs on average will produce an identical P. The number of colliding sequences is destined to increase with the message length. As the pre-signatures can assume 2^60 different configurations, the birthday paradox states that by generating about 2^30 sequences (a rather small value if we consider the computing power of present PCs) the probability of finding a collision becomes larger than 0.5. Actually, as shown in Ref. 7, brute force is not the best way to search for collisions; on the contrary, suitable messages should be constructed that, because of their features, allow finding collisions with a limited number of attempts.
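The birthday figure quoted above can be checked with the standard approximation p ≈ 1 − exp(−k(k−1)/2N); the snippet below is an illustrative computation (not part of the attack) showing that k = 2^30 already gives p ≈ 0.39, and about 1.18·2^30 sequences push the collision probability past 0.5.

```python
# Birthday-paradox check for the pre-signature space N = 2^60.
import math

def collision_prob(k, N=2 ** 60):
    # standard approximation: p = 1 - exp(-k(k-1) / 2N)
    return 1.0 - math.exp(-k * (k - 1) / (2.0 * N))

print(collision_prob(2 ** 30))               # about 0.39
print(collision_prob(int(1.18 * 2 ** 30)))   # just above 0.5
```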

For the Hash function in the ESA standard, the messages producing internal collisions can even be computed by hand using polynomial algebra. The procedure was described in Ref. 4 and lies in the category of chosen-text attacks. If the pre-signature P is known, which is however a rather unrealistic hypothesis even in the most pessimistic scenario, the messages producing collisions can be found with a very limited number of attempts, even ignoring the generator polynomial g(x). From the knowledge of the colliding messages, g(x) (and therefore part of the secret key) can easily be found as well. Mathematical details are given in Subsection IV.A. In Subsection IV.B, instead, we discuss the more realistic case in which P is unknown.

A. Total Break in Case of Knowledge of the Pre-Signature P

Let us denote by m(x) a polynomial, with binary coefficients, that represents the input message of length n:

m(x) = m_{n−1}x^{n−1} + m_{n−2}x^{n−2} + … + m_1x + m_0    (4)

We have mi = 1 if the i-th bit of the message to authenticate is 1; mi = 0 otherwise. In the following analysis we suppose that the least significant bit (LSB) of g(x) is equal to 1; strictly speaking, for a quite arbitrary choice of g(x), this occurs in half of the cases. It is also useful to note that, under such a hypothesis, g(x) necessarily has degree equal to 60. Moreover, without loss of generality, we focus attention on messages with the minimum length allowed by the ESA Recommendation, that is n = 64.

Because of the assumption on the generator polynomial, the LFSR in Fig. 2 realizes the division between the input sequence and g(x). By definition, the remainder of the division provides the pre-signature P, in polynomial form.


Taking into account the initial condition (that is, 1 in the leftmost cell and 0 in all the other cells), the sequence entering the LFSR must be written as x^64 + m(x), and therefore:

x^64 + m(x) = q(x)g(x) + r(x)    (5)

Then, denoting by m1(x) and m2(x) two colliding messages, the following conditions are satisfied:

x^64 + m1(x) = q1(x)g(x) + r(x)    (6a)

x^64 + m2(x) = q2(x)g(x) + r(x)    (6b)

and through simple manipulation:

[x^4 + q2'(x)]·[x^64 + m1(x) + r(x)] = [x^4 + q1'(x)]·[x^64 + m2(x) + r(x)]    (7)

where q1'(x) = x^4 + q1(x) and q2'(x) = x^4 + q2(x) are polynomials of (maximum) degree equal to 3. For any assigned message m1(x), the remainder r(x) is known, and with (at most) 15×16 attempts (q1'(x) is certainly different from q2'(x)) the colliding message m2(x) can be found algebraically. Then, by using (6a) and (6b), the polynomial g(x) results in:

g(x) = [m1(x) + m2(x)] / [q1(x) + q2(x)]    (8)
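The whole of Eqs. (5)-(8) is ordinary GF(2) polynomial algebra and can be sketched with integers as coefficient vectors (bit i = coefficient of x^i). The fragment below is a schematic rendition of the search just described: given one chosen message m1 and the known remainder r, it enumerates the (at most) 16×15 quotient pairs and yields colliding messages together with the corresponding candidate g(x) from Eq. (8). It assumes the conventions stated in the text (n = 64, deg g = 60, g(0) = 1).

```python
# Sketch of the collision construction of Eqs. (5)-(8). Polynomials over
# GF(2) are Python ints: bit i is the coefficient of x^i.

def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, b):
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a                                  # quotient, remainder

def colliding_messages(m1, r, n=64):
    """From one known pair (m1, r), Eq. (6a) gives
    x^n + m1(x) + r(x) = q1(x) g(x), with q1 = x^4 + q1'. Try the 16
    possible q1'; each exact division yields a g, and Eq. (6b) then
    produces a colliding m2 for every other quotient q2."""
    lhs = (1 << n) ^ m1 ^ r
    for q1p in range(16):                        # q1' has degree <= 3
        g, rem = gf2_divmod(lhs, (1 << 4) ^ q1p)
        if rem or g.bit_length() != 61 or not (g & 1):
            continue                             # need deg g = 60 and g(0) = 1
        for q2p in range(16):
            if q2p != q1p:
                m2 = gf2_mul((1 << 4) ^ q2p, g) ^ r ^ (1 << n)
                yield m2, g                      # Eq. (8): g = (m1+m2)/(q1+q2)
```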

B. Total Break When the Pre-Signature P Is Unknown

The hypothesis of the previous subsection, that is, that the pre-signature P is known, is too optimistic (from the opponent's point of view). In reality, the opponent can only look at the final signature S. In other words, she/he can see external collisions instead of internal ones. External collisions are more numerous than internal collisions. Once again, this can be explained by considering the progressive reduction in the signature length when passing from the output of the Hash function (60 bits), to the output of the Hard Knapsack function (48 bits), to the output of the Erasing Block (40 bits). When computed at the output of the authentication scheme, the probability that two signatures collide is 2^−40 ≈ 10^−12, which is a rather high value.

Having observed two messages that produce an external collision, there is the risk that they do not correspond to an internal collision, and therefore cannot be used to derive g(x) according to the procedure described in the previous subsection. A verification is therefore necessary.

Starting from m1(x) and m2(x), a possible g(x) can be found from (8) by testing all possible combinations of q1(x) and q2(x); the number of attempts to be made depends on the degree of m1(x) + m2(x), but it is never greater than 8. The problem is that Eq. (8) could be satisfied by more than one polynomial.

Example 1: If

m1(x) + m2(x) = x^62 + x^61 + x^59 + x^58 + x^4 + x^3 + x + 1 = (x^58 + 1)·(x^2 + x + 1)·(x^2 + 1)

the following two polynomials are candidates to be the generator polynomial we are searching for:

g'(x) = (x^58 + 1)·(x^2 + x + 1)

g''(x) = (x^58 + 1)·(x^2 + 1)


To understand the procedure that permits selecting the right g(x), it is preliminarily useful to observe that a message m(x) that yields r(x) = x^i produces, at the output, the first 40 bits of the i-th factor of the Hard Knapsack (in practice, S' coincides with the i-th factor, which is then truncated by the Erasing Block to provide the final signature S).

Let us suppose we have two candidates, g'(x) and g''(x) (generalization to an arbitrary number of candidates can be done by extending the procedure described here). The choice between g'(x) and g''(x) can be made on the basis of an adaptive chosen-text attack.

At first we suppose that g(x) = g’(x) and find a message m1(x) that produces a remainder r(x) = 1, for instance:

m1(x) = x^64 + x^4·g'(x) + 1    (9)

having set q1(x) = x^4 (other choices are obviously possible). Coherently with the considerations above, when applying m1(x), S should be equal to the first 40 bits of the first Hard Knapsack factor. Then we apply another input, that is:

m2(x) = m1(x) + g'(x)    (10)

It is easy to prove that if the initial assumption is correct, that is, g'(x) is the true generator polynomial we are searching for, then m2(x) also gives a remainder r(x) = 1, and therefore collides with m1(x). By applying m2(x), in fact, we have:

r(x) = x^64 + m2(x) + q2(x)g(x)    (11)

hence

r(x) = x^64 + m1(x) + g'(x) + q2(x)g(x)    (12)

and finally, substituting (9) and rearranging:

r(x) = (x^4 + 1)·g'(x) + q2(x)g(x) + 1    (13)

Thus, we have r(x) = 1 if

q2(x) = (x^4 + 1)·g'(x)/g(x)    (14)

which is satisfied for g(x) = g'(x), as assumed. So, in principle, it should be enough to apply inputs (9) and (10) and verify whether these messages collide or not. Unfortunately, however (we are always adopting the opponent's point of view), this test gives a necessary but not sufficient condition, in the sense that, depending on the structure of the Hard Knapsack, an external collision between m1(x) and m2(x) can occur even if the r(x) produced by m2(x) is ≠ 1. For this reason, it is convenient to repeat the test:

1. for any r(x) = x^i, with i = 0, 1, …, 59;
2. for any possible candidate.

This way, the probability of anomalous situations is made negligible, and the only polynomial that passes the test for all values of i can be chosen as the right one.
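A schematic rendition of this verification loop is given below. The function mac_oracle stands for the opponent's assumed chosen-text access (submit a message, observe the 40-bit signature S); it is a modeling assumption of the attack, not a real interface, and the message construction generalizes Eq. (9) to r(x) = x^i.

```python
# Sketch of the adaptive chosen-text verification of a candidate g'(x).
# `mac_oracle(m)` is a hypothetical stand-in for the opponent's ability to
# submit a chosen message and observe the 40-bit signature S.

def verify_candidate(g_cand, mac_oracle, n=64):
    """Apply the 120 chosen texts described in the text: for each
    i = 0..59, m1 gives r(x) = x^i if g = g_cand (generalizing Eq. (9)),
    and m2 = m1 + g_cand (Eq. (10)) must then collide with m1."""
    for i in range(60):
        m1 = (1 << n) ^ (g_cand << 4) ^ (1 << i)   # x^n + x^4 g'(x) + x^i
        m2 = m1 ^ g_cand
        if mac_oracle(m1) != mac_oracle(m2):
            return False          # no external collision: candidate rejected
    return True                   # passes for all i; anomalies made negligible
```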

The processing time required for verification is very limited: in practice, 120 chosen texts must be applied for each candidate. At the same time, this procedure directly provides the first 40 bits of all the Hard Knapsack factors, which would anyway be further unknowns to solve. Therefore, at this point, the opponent has determined 2400 + 60 = 2460 bits of the secret key, that is, about 84% of the overall key. And this result has been obtained with a procedure conceptually and practically very simple, though based on the possibility of realizing an adaptive chosen-text attack (which is the hypothesis of the considered cryptanalysis). Finding the remaining 16% of the key, thus reaching the position for a total break, is the hardest part of the analysis, since the bits eliminated by the Erasing Block can only be reconstructed by induction, never appearing explicitly in the signature S for any possible chosen text. This problem is faced in the next section.

V. Reconstruction of the Erased Bits of the Key

As stressed in the previous section, the application of messages that produce r(x) = x^i as the result of the Hash function (that is, as the pre-signature P; this is possible once the LFSR generator polynomial g(x) has been determined) should permit extracting the Hard Knapsack factors selectively, thus completing the discovery of the secret key. The Erasing Block, however, complicates the matter, making visible only the first 40 bits of each factor and keeping the remaining 8 bits secret. Though in relative terms the portion of the key still undetermined at this point is rather small, in absolute terms it is very large (480 bits). So, it is quite impractical to search for these bits through a brute force attack. Once again, the problem must be faced analytically, and it is possible to describe a procedure that permits finding many of these residual unknowns, or in a number of cases even all of them. The fundamentals of the procedure are described below.

The erased bits of the Hard Knapsack factors can be discovered by exploiting the carry mechanism in the sum S'. Let us illustrate the matter with a preliminary example.

Suppose we apply:

- first, a message m1(x) that gives r1(x) = x^i and therefore S'1 = Wi;
- then, a message m2(x) that gives r2(x) = x^j and therefore S'2 = Wj;
- finally, a message m3(x) = m1(x) + m2(x) that gives r(x) = x^i + x^j and therefore S'3 = Wi + Wj.

In all cases, only the first part of S'k, i.e. Sk, will be visible at the output. It is evident that if the sum of the 8 erased bits of Wi and Wj exceeds 255 (in decimal) there is a carry (C = 1) and S3 ≠ S1 + S2; otherwise, that is, if the sum of the erased bits is smaller than 256, there is no carry (C = 0). So, by simply comparing S3 with S1 + S2 it is possible to know whether a carry has been produced or not, and to derive useful information on the values of the unknown bits. Even more, by combining triplets of messages and analyzing the carries that result from their combination, under proper circumstances the values of some bits can be univocally determined.
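In code, the carry extraction is a one-line comparison on the observed 40-bit values; the sketch below assumes, as above, that S is the visible high part of the 48-bit sum, so a carry from the erased bytes shows up as an extra unit in S3.

```python
# Carry detection from the three observed 40-bit signatures S1, S2, S3
# (for m1, m2 and m3 = m1 + m2). Writing Wi = Si*2^8 + li, with li the
# erased byte: Wi + Wj = (Si + Sj + C)*2^8 + ((li + lj) mod 256), hence
# S3 = (S1 + S2 + C) mod 2^40 with C in {0, 1}.

def carry_bit(S1, S2, S3):
    if S3 == (S1 + S2) % 2 ** 40:
        return 0                  # erased bytes summed below 256
    if S3 == (S1 + S2 + 1) % 2 ** 40:
        return 1                  # erased bytes summed past 255
    raise ValueError("observations inconsistent with the model")
```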

Example 2: Let us suppose we have verified that:

i) W0 + W1 yields C = 0
ii) W0 + W2 yields C = 1
iii) W1 + W2 yields C = 1

Denoting by wi1, wi2, …, wi8 the erased bits of the i-th factor (where wi8 is the least significant bit), the results above certainly imply that (operations are ordinary, not in the binary field):

i) (w08 + w18) + 2(w07 + w17) + 4(w06 + w16) + 8(w05 + w15) + 16(w04 + w14) + 32(w03 + w13) + 64(w02 + w12) + 128(w01 + w11) < 256 → w01 and w11 cannot be both equal to 1;

ii) 256 ≤ (w08 + w28) + 2(w07 + w27) + 4(w06 + w26) + 8(w05 + w25) + 16(w04 + w24) + 32(w03 + w23) + 64(w02 + w22) + 128(w01 + w21) < 511 → w01 and w21 cannot be both equal to 0;

iii) 256 ≤ (w18 + w28) + 2(w17 + w27) + 4(w16 + w26) + 8(w15 + w25) + 16(w14 + w24) + 32(w13 + w23) + 64(w12 + w22) + 128(w11 + w21) < 511 → w11 and w21 cannot be both equal to 0.

These partial conclusions can be reported in the following logic table, where "imp" stands for an impossible combination:

w01 w11 w21
 0   0   0   imp
 0   0   1
 0   1   0   imp
 0   1   1
 1   0   0   imp
 1   0   1
 1   1   0   imp
 1   1   1   imp


So, from the table we see that, because of the impossible combinations, necessarily w21 = 1; therefore, the value of w21 is revealed, and will be known throughout the rest of the cryptanalysis. ∎
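The table-based reasoning of Example 2 is mechanical and easy to automate; the sketch below enumerates the eight assignments, discards the impossible ones, and reports the bits forced to a single value (here, w21 = 1). The constraint encoding is our own, chosen for illustration.

```python
# Enumeration of the logic table of Example 2: keep the assignments
# compatible with conditions i)-iii) and report univocally determined bits.

from itertools import product

def forced_bits(names, constraints):
    survivors = []
    for values in product((0, 1), repeat=len(names)):
        a = dict(zip(names, values))
        if all(c(a) for c in constraints):
            survivors.append(a)
    return {n: survivors[0][n] for n in names
            if len({s[n] for s in survivors}) == 1}

constraints = [
    lambda a: not (a["w01"] and a["w11"]),   # i)   C = 0: not both 1
    lambda a: a["w01"] or a["w21"],          # ii)  C = 1: not both 0
    lambda a: a["w11"] or a["w21"],          # iii) C = 1: not both 0
]
print(forced_bits(["w01", "w11", "w21"], constraints))   # {'w21': 1}
```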

The case proposed in Example 2 defines a "significant" combination, in the sense that the triplet of conditions permits determining the value of one bit, plus some further constraints on the other bits (for example: having both w01 = 1 and w11 = 1 is impossible). On the other hand, depending on the structure of the Hard Knapsack factors, which are the unknowns to determine, not all combinations are significant.

Example 3: Let us suppose we have verified that:

iv) W3 + W4 yields C = 0
v) W3 + W5 yields C = 0
vi) W4 + W5 yields C = 0

Consequently:

iv) w31 and w41 cannot be both equal to 1;
v) w31 and w51 cannot be both equal to 1;
vi) w41 and w51 cannot be both equal to 1.

The logic table, in this case, is as follows:

w31 w41 w51
 0   0   0
 0   0   1
 0   1   0
 0   1   1   imp
 1   0   0
 1   0   1   imp
 1   1   0   imp
 1   1   1   imp

which continues to provide useful information on the values of w31, w41 and w51, but does not permit determining any of them univocally. ∎

As unknown bits are determined from significant conditions, they can be used to compute the value of other bits.

Example 2 (cont.): Let us suppose we have verified that:

vii) W2 + W3 yields C = 0

This implies that w21 and w31 cannot be both equal to 1; since we have already determined that w21 = 1, it must be w31 = 0. ∎

Once a substantial set of ws1's has been computed (obviously, the ideal case is to find all of them through significant conditions), one can proceed to determine the ws2's. The sum of two Hard Knapsack factors Wi and Wj provides a useful condition for this purpose only when wi1 = 0 and wj1 = 1 (or vice versa), in which case it is possible to say that:

- if C = 0, then wi2 and wj2 cannot be simultaneously = 1;
- if C = 1, then wi2 and wj2 cannot be simultaneously = 0.

In general, however, it is necessary to consider sums of three factors. As before, not all sums of three factors are significant. In particular, it is possible to verify that, when summing factors Wi, Wj and Wk, significant conditions are found in the following cases:

- wi1 = wj1 = wk1 = 0 and C = 1, which implies that at least two among wi2, wj2 and wk2 are = 1;
- wi1 = wj1 = wk1 = 1 and C = 1, which implies that at least two among wi2, wj2 and wk2 are = 0;
- wi1 = 0, wj1 = wk1 = 1 and C = 10 (binary notation), which implies that at least two among wi2, wj2 and wk2 are = 1;
- wi1 = 1, wj1 = wk1 = 0 and C = 0, which implies that at least two among wi2, wj2 and wk2 are = 0.

Suitable combinations of these conditions permit determining values of the wi2's.

Example 4: Let us suppose that the previous analysis has given:

w41 = 1, w71 = 0, w91 = 1, w11,1 = 1, w12,1 = 1

and that:

viii) W4 + W11 + W12 yields C = 1
ix) W7 + W9 + W11 yields C = 10
x) W7 + W9 + W12 yields C = 10

Based on the significant conditions above, and combining these results, the following table can easily be derived:

w72 w92 w11,2 w12,2
 0   0   0    0    imp
 0   0   0    1    imp
 0   0   1    0    imp
 0   0   1    1    imp
 0   1   0    0    imp
 0   1   0    1    imp
 0   1   1    0    imp
 0   1   1    1    imp
 1   0   0    0    imp
 1   0   0    1    imp
 1   0   1    0    imp
 1   0   1    1    imp
 1   1   0    0
 1   1   0    1
 1   1   1    0
 1   1   1    1    imp

From the table, we can conclude that w72 = w92 = 1. We also see that having w11,2 = w12,2 = 1 is impossible, while the other three combinations remain possible. ∎

Sums of three factors can also be used to determine the values of (or, at least, conditions on) the wsm's, with 3 ≤ m ≤ 8, once the wsn's with n < m have been determined. As an example, the following table specifies significant conditions for calculating the ws3's, on the basis of the values of the ws1's and ws2's:

wi1-wj1-wk1  wi2-wj2-wk2  C
000          110          1
000          111          0
001          000          1
001          001          0
110          110          10
110          111          1
111          000          10
111          001          1

In each column, the order is unimportant; explicitly, this means that, for example, wi1-wj1-wk1 = 001 stands for "two bits, among the three considered, are equal to 0 and one bit is equal to 1". Consequently, the fourth row of the table includes the case wi1 = 0, wi2 = 0, wj1 = 0, wj2 = 0, wk1 = 1, wk2 = 1, as well as the case wi1 = 0, wi2 = 1, wj1 = 1, wj2 = 0, wk1 = 0, wk2 = 0, and so on. Similarly, the following table specifies significant conditions for calculating the ws4's, on the basis of the values of the ws1's, ws2's and ws3's:

wi1-wj1-wk1  wi2-wj2-wk2  wi3-wj3-wk3  C   Significance
000          110          110          1   &
000          110          111          0   %
000          111          000          1   &
000          111          001          0   %
001          000          110          1   &
001          000          111          0   %
001          001          000          1   &
001          001          001          0   %
110          110          110          10  &
110          110          111          1   %
110          111          000          10  &
110          111          001          1   %
111          000          110          10  &
111          000          111          1   %
111          001          000          10  &
111          001          001          1   %

The last column of the table specifies the "significance level" of each condition, according to the following legend:

&: at least two of the unknown bits (the ws4's in this case, with s = i, j, k) are = 1;
%: at least two of the unknown bits are = 0.

Extending the procedure to the other unknown bits, it is easy to see that combinations of three factors can also lead to these additional significance levels:

$: all three unknown bits are = 1;
£: at least one of the unknown bits is = 0;
#: at least one of the unknown bits is = 1.

To compute the ws8's, on the basis of the values of the ws1's … ws7's, one can go back to considering sums of two factors, which in fact yield conditions with significance levels £ and $ (specialized to pairs, instead of triplets, of bits).

Many other rules and practical remarks can be found that contribute to the solution of the carry problem; they are omitted here for the sake of brevity. Globally, the opponent (whose point of view we have adopted for clarity) has to solve a "big" system of inequalities combining 480 unknowns. This could seem a very arduous task but, in reality, the considered inequalities are very simple and can be efficiently managed through a software program. The algorithm we have discussed above step by step, for the sake of clarity, lends itself to implementation on a PC, even one with limited computing power (in our simulations we used a Pentium IV 2.8 GHz CPU) and relatively little free space on the hard disk. A number of tricks can be used to speed up the elaboration considerably. One such trick relies on the possibility of determining the order of magnitude of the factors (limited to the unknown part) in such a way as to order them, as far as possible. Ordering is often a by-product of the sums that one makes when searching for significant conditions. In an ordered list of factors, the disclosure of one bit can permit finding many other bits immediately. As an example, if one finds wi4 = 1 for some factor Wi, then all factors Wj with larger weight but wjm = wim, m = 1, 2, 3, will also have wj4 = 1; dually, if wi4 = 0, then all factors Wj with smaller weight but wjm = wim, m = 1, 2, 3, will also have wj4 = 0.

Another obvious observation is that the inequalities involving a factor Wi can be revisited any time a new bit of Wi is determined, since they may permit solving other unknowns. As an example, if one finds wi3 = 0, and Wi is included in a not yet solved condition with significance level & (a condition is solved when it has permitted finding the values of the unknown bits it involves), it is automatically ensured that the bits wj3 and wk3 of the other two factors, Wj and Wk, in the same inequality are both equal to 1.
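As a small illustration of this revisiting rule, the fragment below propagates a level-& condition ("at least two of the three bits are = 1") once one of its bits becomes known; the three-valued encoding (0, 1, None for unknown) is our own illustrative choice.

```python
# Propagation of a significance-level '&' condition ("at least two of the
# three unknown bits are = 1") when new bits become known. Bits are 0, 1,
# or None (still unknown).

def propagate_and_condition(bits):
    """bits: dict mapping the three bit names to 0/1/None. If one bit is
    known to be 0, the other two are forced to 1 (as in the text)."""
    zeros = [n for n, v in bits.items() if v == 0]
    if len(zeros) > 1:
        raise ValueError("condition violated: '&' allows at most one 0")
    if len(zeros) == 1:
        for n, v in bits.items():
            if v is None:
                bits[n] = 1       # the remaining bits must both be 1
    return bits

# Example: wi3 just found to be 0 forces wj3 = wk3 = 1.
print(propagate_and_condition({"wi3": 0, "wj3": None, "wk3": None}))
```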

Once it has been established that solving the system is a realistic idea, it remains to determine the percentage of cases in which the solution permits finding all the unknown bits, thus realizing a complete total break attack. Based on the previous analysis, it is easy to conclude that to find an unknown bit of Wi we need two significant combinations of type & (or %) involving Wi (for example: Wi - Wj - Wr and Wi - Wy - Ws) and one significant combination of type % (or &) involving one factor from the first combination and one factor from the second combination (for example: Wj - Wy - Wq). To solve the problem completely, thus determining all the unknown bits, it is necessary that all factors are involved in significant combinations. A rough estimation of the probability that this happens, based on the ratio between the number of favourable cases and the number of possible cases, gives a reliable value on the order of 0.01. This means that 1 key out of every 100, arbitrarily chosen, can be completely discovered, in spite of its large length, in a short time and with very limited computing power. A probability p ≈ 0.01 that the system suffers a total break attack is obviously unacceptable for any kind of secure system, above all in a telecommand or telemetry environment. To prevent this risk, any chosen key should be tested according to the procedure described here. Moreover, even a key that is able to prevent a total break attack remains vulnerable to a solution of the system that permits finding a large number of the bits deleted by the Erasing Block. For all these reasons, the conclusion about the intrinsic weakness of the ESA TC authentication scheme is confirmed by our analysis, and the suggestion of the CCSDS Security WG to conceive a much more robust system, to be mandatorily included in future space missions, is now justified on a quantitative basis.

VI. Conclusion

Our demonstration of the vulnerability of the ESA telecommand authentication standard relies on the possibility of making an adaptive chosen-text attack. Probably, this is not the simplest attack to realize, but it is known that modern cryptography, following a conservative approach, requires that an algorithm be robust also against these "complex" attacks. Our quantitative analysis confirms the qualitative conclusions of the CCSDS WG. Surprisingly, the weakest part of the system is the Hard Knapsack; the Hash function, which is linear, is rather simple to violate, whilst the major difficulties for an opponent are caused by the action of the Erasing Block. The very long secret key does not provide any specific protection, since most of the key can be discovered quickly, while the remaining part requires additional work that is nevertheless manageable by a simple software program. The percentage of cases in which the cryptanalysis permits a total break of the system is significant.

As the ESA system can no longer be seen as a benchmark, the discussion is open for new proposals. The current trend is toward the adoption of the RSA (Rivest-Shamir-Adleman) authentication system, which is the state of the art in the present scenario of authentication methods. In any case, technological advances are so rapid as to require continuous monitoring and updating. This is particularly true for space missions, where the robustness of the algorithm should take into account the mission lifetime.

References

1. CCSDS, "The Application of CCSDS Protocols to Secure Systems," Report Concerning Space Data System Standards, CCSDS 350.0-G-2, Green Book, Washington, D.C., Aug. 2005.
2. F. Chiaraluce, G. Finaurini, E. Gambi, S. Spinsante, "Analysis and improvement of the ESA telecommand authentication procedure," ESA 3rd Int. Workshop on TTC Systems for Space Applications, Darmstadt, Germany, 7-9 Sept. 2004, pp. 705-711.
3. F. Chiaraluce, E. Gambi, S. Spinsante, "Efficiency tests results and new perspectives for secure telecommand authentication in space missions: case-study of the European Space Agency," ETRI Journal, Vol. 27, No. 4, pp. 394-404, Aug. 2005.
4. S. Spinsante, M. Baldi, F. Chiaraluce, E. Gambi, G. Righi, "Evaluation of authentication and encryption algorithms for telecommand and telemetry in space missions," Proc. 23rd AIAA International Communications Satellite Systems Conference (ICSSC-2005), Paper I000095, Rome, Italy, 25-28 Sept. 2005.
5. ESA, "Telecommand Decoder Specification," ESA PSS-04-151, Issue 1, Paris, France, Sept. 1993.
6. National Institute of Standards and Technology, "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications," NIST Special Publication 800-22, 15 May 2001.
7. X. Wang, D. Feng, X. Lai, H. Yu, "Collisions for Hash Functions MD4, MD5, HAVAL-128 and RIPEMD," Cryptology ePrint Archive, Report 2004/199, http://eprint.iacr.org/2004/199, presented at the Crypto 2004 rump session, Aug. 17, 2004.

