
Security in IT Networks:

Multilateral Security in Distributed and by Distributed Systems

Script for the lectures

"Security and Cryptography" I+II

Andreas Pfitzmann

TU Dresden, Fakultät Informatik, D-01062 Dresden

Phone: +49 351 463 8277

E-Mail: [email protected]

http://dud.inf.tu-dresden.de/

April 10, 2006

Contents

1 Introduction
  1.1 What are computer networks (distributed open systems)?
  1.2 What does security mean?
    1.2.1 What is to be protected
    1.2.2 To protect against whom
    1.2.3 How and through what security can be achieved
    1.2.4 Preview of precautions
    1.2.5 Attacker model
  1.3 What does security in computer networks mean?
  1.4 Why multilateral security?

2 Security in Single Computers
  2.1 Physical security assumptions
    2.1.1 What can we expect at best?
    2.1.2 Design of security measures
    2.1.3 A negative example: chip cards
    2.1.4 Useful physical security measures
  2.2 Protection of isolated computers from unauthorized access and computer viruses
    2.2.1 Identification
    2.2.2 Admission control
    2.2.3 Access control
    2.2.4 Limitation of the threat of "computer viruses" to that of "transitive Trojan horses"
    2.2.5 Remaining problems

3 Cryptology Basics
  3.1 Systematics
    3.1.1 Cryptographic systems and their key distribution
      3.1.1.1 Concelation systems, overview
      3.1.1.2 Authentication systems, overview
    3.1.2 Further annotations on key distribution
      3.1.2.1 Whom are keys assigned to?
      3.1.2.2 How many keys need to be exchanged?
      3.1.2.3 Security of the key distribution limits the cryptographically reachable security
    3.1.3 Basics of security
      3.1.3.1 Attack aims and successes
      3.1.3.2 Types of attacks
      3.1.3.3 Connection between observing/changing and passive/active attackers
      3.1.3.4 Basics of "cryptographically strong"
      3.1.3.5 Overview of the cryptographic systems presented in the following
    3.1.4 Hybrid cryptographic systems
  3.2 One-time pad
  3.3 Authentication codes
  3.4 The s² mod n pseudo-random bit-sequence generator
    3.4.1 Basics for systems with the factoring assumption
      3.4.1.1 Calculating modulo n
      3.4.1.2 Number of elements of Z*_n and the connection with exponentiation
      3.4.1.3 Connections between Z_n and Z_p, Z_q
      3.4.1.4 Squares and roots in general
      3.4.1.5 Squares and roots mod p
      3.4.1.6 Squares and roots mod p for p ≡ 3 (mod 4)
      3.4.1.7 Squares and roots mod n with knowledge of p, q
      3.4.1.8 Squares and roots mod n with knowledge of p, q ≡ 3 (mod 4)
      3.4.1.9 Squares and roots mod n without knowledge of p, q
    3.4.2 Requirements concerning the pseudo-random bit generator
    3.4.3 The s² mod n generator
    3.4.4 The s² mod n generator used as an asymmetric system of secrecy
  3.5 GMR: A strong cryptographic signature system
    3.5.1 Basic function: collision-resistant permutation pairs
    3.5.2 Mini-GMR for a message containing only one bit
    3.5.3 Basic signatures: big tuples of collision-resistant permutations
    3.5.4 Complete GMR: authentication of many references
      3.5.4.1 Further efficiency improvements
  3.6 RSA: The best-known system for asymmetric concelation and digital signatures
    3.6.1 The asymmetric cryptographic system RSA
    3.6.2 Naive and insecure usage of RSA
      3.6.2.1 RSA as asymmetric concelation system
      3.6.2.2 RSA as digital signature system
    3.6.3 Attacks, especially the multiplicative attacks of Davida and Moore
      3.6.3.1 RSA as digital signature system
      3.6.3.2 RSA as asymmetric concelation system
      3.6.3.3 Thwarting the attack
      3.6.3.4 RSA as asymmetric concelation system
      3.6.3.5 RSA as digital signature system
    3.6.4 Efficient implementations of RSA
      3.6.4.1 Public exponent with almost only zeros
      3.6.4.2 The owner of the secret key calculates modulo the factors
      3.6.4.3 Encryption performance
    3.6.5 Security and usage of RSA
  3.7 DES: The best-known system for symmetric concelation and authentication
    3.7.1 Overview of DES
    3.7.2 An iteration round
    3.7.3 The decryption principle of DES
    3.7.4 The encryption function
    3.7.5 Partial key generation
    3.7.6 The complementary property of DES
    3.7.7 Generalization of DES: G-DES
    3.7.8 Decryption performance
  3.8 Cryptographic systems in operation
    3.8.1 Block ciphers, stream ciphers
    3.8.2 Operation modes of block ciphers
      3.8.2.1 Electronic code book (ECB)
      3.8.2.2 Cipher block chaining (CBC)
      3.8.2.3 Cipher feedback (CFB)
      3.8.2.4 Output feedback (OFB)
      3.8.2.5 Plain cipher block chaining (PCBC)
      3.8.2.6 Output cipher feedback (OCFB)
      3.8.2.7 Annotations on expanding block ciphers and memory initialization
      3.8.2.8 Summary of the properties of the operation modes
    3.8.3 Construction of a collision-resistant hash function from block ciphers
  3.9 Outline of further systems
    3.9.1 Diffie-Hellman key exchange
    3.9.2 Signatures unconditionally secure for the signer
    3.9.3 Unconditionally secure pseudo-signatures
    3.9.4 Undeniable signatures
    3.9.5 Blind signatures
    3.9.6 Threshold scheme

4 Basics in Steganography
  4.1 Systematics
    4.1.1 Steganographic systems
      4.1.1.1 Steganographic secrecy systems, overview
      4.1.1.2 Steganographic authentication systems, overview
      4.1.1.3 Comparison
    4.1.2 An annotation on key distribution: steganography with public keys
    4.1.3 Basics of security
      4.1.3.1 Attack aims and successes
      4.1.3.2 Types of attacks
      4.1.3.3 Stegoanalysis leads to improved steganography
        4.1.3.3.1 Construction of S and S⁻¹
        4.1.3.3.2 Construction of E and E⁻¹
  4.2 Information-theoretically secure steganography
    4.2.1 Information-theoretically secure steganographic secrecy
    4.2.2 Information-theoretically secure steganographic authentication
  4.3 Two opposed stego paradigms
    4.3.1 Changing as little as possible
    4.3.2 Simulating the normal process
  4.4 Parity encoding to improve concealment
  4.5 Steganography in digital telephone calls
  4.6 Steganography in pictures
  4.7 Steganography in video conferences

5 Security in Communication Networks
  5.1 The problem
    5.1.1 The societal problem
    5.1.2 Information-theoretical problem and solution drafts
  5.2 Usage and limits of encryption in communication networks
    5.2.1 Usage of encryption in communication networks
      5.2.1.1 Link-by-link encryption
      5.2.1.2 Point-to-point encryption
    5.2.2 Limits of encryption in communication networks
  5.3 Basic measures settled outside the communication network
    5.3.1 Public nodes
    5.3.2 Time-independent processing
    5.3.3 Local choice
  5.4 Basic measures settled inside the communication network
    5.4.1 Broadcasting: the protection of the receiver
      5.4.1.1 Altering attacks on broadcasting
      5.4.1.2 Implicit addressing
        5.4.1.2.1 Implementation of covered implicit addressing
        5.4.1.2.2 Equivalence of covered implicit addressing and concelation
        5.4.1.2.3 Implementation of open implicit addressing
        5.4.1.2.4 Address management
      5.4.1.3 Tolerating errors and altering attacks on broadcasting
    5.4.2 Requesting and overlaying: protecting the receiver
    5.4.3 Meaningless messages: weak protection of the sender and the receiver
    5.4.4 Unobservability of bordering connections and stations as well as digital signal regeneration: protection of the sender
      5.4.4.1 Circular cabling (RING network)
        5.4.4.1.1 An efficient 2-anonymous ring access system
        5.4.4.1.2 Fault tolerance in RING networks
      5.4.4.2 Collision-avoiding tree network (TREE network)
    5.4.5 Superimposing sending (DC network): protection of the sender
      5.4.5.1 A first outline
      5.4.5.2 Definition and proof of sender anonymity
      5.4.5.3 Receiver anonymity achieved by the snap-snatch distribution
        5.4.5.3.1 Comparing the global superimposition results
        5.4.5.3.2 Deterministic snap-snatch key generation
        5.4.5.3.3 Probabilistic snap-snatch key generation
      5.4.5.4 Superimposing receiving
        5.4.5.4.1 Criteria for the preservation of anonymity and unlinkability
        5.4.5.4.2 Algorithm for resolution of collisions with averaging and superimposing receiving
        5.4.5.4.3 Booking scheme (Roberts' scheme)
      5.4.5.5 Optimality, complexity and implementations
      5.4.5.6 Fault tolerance of the DC network
    5.4.6 MIX networks: protection of the relations of communication
      5.4.6.1 Fundamental deliberations about possibilities and limits of changing the encoding
      5.4.6.2 Sender anonymity
      5.4.6.3 Receiver anonymity
      5.4.6.4 Mutual anonymity
      5.4.6.5 Changing the encoding without changing the length of messages
      5.4.6.6 Efficient avoidance of changing the encoding repeatedly
      5.4.6.7 Short preview
      5.4.6.8 Necessary properties of the asymmetric system of secrecy and breaking the direct RSA implementation
      5.4.6.9 Faster transmission by using MIX channels
      5.4.6.10 Fault tolerance in MIX nets
        5.4.6.10.1 Different MIX sequences
        5.4.6.10.2 Substitution of MIXes
          5.4.6.10.2.1 The coordination problem
          5.4.6.10.2.2 MIXes with reserve MIXes
          5.4.6.10.2.3 Leaving out a MIX
    5.4.7 Tolerating altering attacks on RING, DC and MIX networks
    5.4.8 Active chaining attacks over limited resources
    5.4.9 Classification within a layer model
    5.4.10 Comparison of strength and effort of RING, DC and MIX networks
  5.5 Upgrading towards an integrated broadband network
    5.5.1 Telephone MIXes
      5.5.1.1 Basics of time slice channels
      5.5.1.2 Structure of the time slice channels
      5.5.1.3 Establishing an unobservable connection
      5.5.1.4 Shutting down an unobservable connection
      5.5.1.5 Accounting the network usage
      5.5.1.6 Connections between LSCs with MIX cascades and conventional LSCs
      5.5.1.7 Reliability
    5.5.2 Effort
      5.5.2.1 Parameters for describing the cryptographic systems needed
        5.5.2.1.1 Symmetrical concelation systems
        5.5.2.1.2 Asymmetrical concelation systems
        5.5.2.1.3 Hybrid encryption
      5.5.2.2 Minimal duration of a time slice
        5.5.2.2.1 Length of a MIX message
        5.5.2.2.2 Estimation
      5.5.2.3 Maximum number of interfaces to be served by one MIX cascade
      5.5.2.4 Computation complexity for the interfaces and MIXes
      5.5.2.5 Connection startup duration
  5.6 Network management
    5.6.1 Operating a network: responsibility for service quality vs. threats of Trojan horses
      5.6.1.1 End-to-end encryption and distribution
      5.6.1.2 RING and TREE networks
      5.6.1.3 DC network
      5.6.1.4 Requesting and overlaying as well as MIX networks
      5.6.1.5 Combination as well as heterogeneous networks
      5.6.1.6 Connected, hierarchical networks
    5.6.2 Accounting
  5.7 Public mobile radio

6 Value Exchange and Payment Systems
  6.1 Anonymity of participants
  6.2 The predictability of legal decisions for business processes protecting anonymity
    6.2.1 Declarations of intention
      6.2.1.1 Anonymous declaring or receiving of declarations
      6.2.1.2 Authentication of declarations
        6.2.1.2.1 Digital signatures
        6.2.1.2.2 Types of authentication
    6.2.2 Other actions
    6.2.3 Securing of evidence
    6.2.4 Legal investigations
    6.2.5 Settlement of damages
  6.3 Fault-secure value exchange
    6.3.1 A third party guarantees that somebody can be made public
    6.3.2 Trustees guarantee anonymous partners fraud protection
  6.4 Anonymous digital payment systems
    6.4.1 Basic scheme of a secure and anonymous digital payment system
    6.4.2 Restriction of anonymity through predefined accounts
    6.4.3 Suggestions from the literature
    6.4.4 Marginal conditions for anonymous payment systems
      6.4.4.1 Transfer between payment systems
      6.4.4.2 Interest on the balance and accommodation of loans
    6.4.5 Secure devices as witnesses
    6.4.6 Payment systems with security based on the possibility to make someone public

7 Credentials

8 Multiparty Computation Protocols

9 Controllability of Security Technologies
  9.1 User requirements on signature and concelation keys
  9.2 Digital signatures allow the exchange of concelation keys
  9.3 Key escrow systems can be used for an undetectable key exchange
  9.4 Symmetric authentication allows concelation
  9.5 Steganography: finding confidential messages is impossible
  9.6 Measures to take after key loss
  9.7 Conclusions

10 Multilateral Secure IT Systems

11 Glossary

A Exercises
  1 Exercises for "Introduction"
  2 Exercises for "Security in single computers and its limits"
  3 Exercises for "Basics of Cryptology"
  4 Exercises for "Basics of Steganography"
  5 Exercises for "Security in communication networks"
  6 Exercises for "Value exchange and payment systems"
  9 Exercises for "Regulation of security technologies"

B Answers
  1 Answers for "Introduction"
  2 Answers for "Security of single computers and its limits"
  3 Answers for "Basics of Cryptology"
  4 Answers for "Basics of Steganography"
  5 Answers for "Security in communication networks"
  6 Answers for "Value exchange and payment systems"
  9 Answers for "Regulation of security technologies"

C Index

List of Figures

1.1 A sector of a network
1.2 Development of the wire-based communication networks of the Deutsche Bundespost Telekom (now: Deutsche Telekom AG), as planned in 1984 and largely realized
1.3 People use software programs to develop other software programs and computers
1.4 Transitive distribution of errors and attacks
1.5 Universal Trojan horse
1.6 Which security measures protect against which attackers?
1.7 Observing vs. modifying attacker
2.1 The five basic functions arranged in circular layers (upper left figure) and three possibilities of iterated layers
2.2 Identification of humans by IT systems
2.3 Identification of IT systems by humans
2.4 Identification of IT systems by IT systems
2.5 Access control by an access monitor
2.6 Computer virus, transitive Trojan horse and restriction of security threats caused by computer viruses to threats caused by transitive Trojan horses
3.1 Basic scheme of cryptographic systems shown by the example of a symmetrical concelation system
3.2 Symmetrical concelation system (black box with a lock; two identical keys)
3.3 Key distribution for symmetrical concelation systems
3.4 Asymmetrical concelation system
3.5 Key distribution for asymmetrical concelation systems by a public key register
3.6 Symmetrical authentication system (glass box with a lock; there are two identical keys for putting things in)
3.7 Digital signature system (glass box with a lock; there is only one key for putting things in)
3.8 Key distribution for digital signature systems by a public key register
3.9 Goals and key distribution of asymmetrical concelation systems, symmetrical cryptographic systems and digital signature systems
3.10 Generation of a random number z used for key generation: XOR of random numbers generated by a device, received from a manufacturer, diced by the user or calculated from time intervals

3.11 Combinations of the attacker properties observing/changing and passive/active
3.12 Overview of the following cryptographic systems
3.13 A ciphertext can correspond to all possible cleartexts
3.14 Every ciphertext should correspond to all possible cleartexts
3.15 A simple authentication code
3.16 Structure of a Turing reduction from factorization F to extracting roots W
3.17 Pseudo-random number generator
3.18 Pseudo-one-time pad
3.19 s² mod n generator used as an asymmetric system of secrecy
3.20 Chosen ciphertext-cleartext attack on the s² mod n generator used as an asymmetric system of secrecy
3.21 Collision of a permutation pair
3.22 Mini-GMR for a one-bit message; s(0) and s(1) are the possible signatures
3.23 Principle of GMR's basic signature
3.24 Basic idea of how to use GMR for signing multiple messages
3.25 Tree of references
3.26 Signature for m relative to reference R2
3.27 "Top-down" and "bottom-up" calculations
3.28 The digital signature system GMR
3.29 Naive and insecure usage of RSA as an asymmetric concelation system
3.30 Naive and insecure usage of RSA as a digital signature system
3.31 Secure usage of RSA as an asymmetric concelation system: redundancy check by a collision-resistant hash function h and indeterministic encryption
3.32 Secure usage of RSA as a digital signature system with a collision-resistant hash function h
3.33 Sequence of steps in DES
3.34 One iteration round of DES
3.35 Decryption principle of DES
3.36 Encryption function f of DES
3.37 Generation of partial keys with DES
3.38 Complementarity in every iteration round of DES
3.39 Complementarity in the encryption function f of DES
3.40 Operation mode "electronic code book"
3.41 Block patterns with ECB
3.42 Construction of a symmetrical resp. asymmetrical self-synchronizing stream cipher from a symmetrical resp. asymmetrical block cipher: block cipher with block chaining
3.43 Block cipher with block chaining used for authentication: constructed using a deterministic block cipher
3.44 Pathological block cipher (only secure with plaintext blocks which contain a 0 on the right-hand side)
3.45 Block cipher expanding by one bit
3.46 Construction of a symmetrical self-synchronizing stream cipher from a deterministic block cipher: ciphertext feedback

3.47 Cipher feedback for authentication: construction using a deterministic block cipher
3.48 Construction of a symmetrical synchronous stream cipher from a deterministic block cipher: output feedback
3.49 Construction of a symmetrical resp. asymmetrical synchronous stream cipher from a symmetrical resp. asymmetrical block cipher: block cipher with ciphertext and plaintext block chaining
3.50 aa
3.51 Properties of the operation modes
3.52 Construction of a collision-resistant hash function from a deterministic block cipher
3.53 Diffie-Hellman key exchange
3.54 Usual digital signature system: there is one and only one signature s(x) for one message x and one validation key t (and possibly one signature environment, see GMR)
3.55 Signature system unconditionally secure for the signer: for one message x and one validation key t there are multiple signatures (gray area), which indeed are sparse within the signature space
3.56 Proof of forgery in the form of a collision in a signature system unconditionally secure for the signer
3.57 Fail-stop signature system (glass box with a lock; there is only one key for putting things in. The glass can be broken, but it cannot be replaced unrecognizably.)
3.58 Signature system with undeniable signatures (black box with a spyhole; there is only one key for putting things in. The key owner controls access to the spyhole.)
3.59 Signature system for blind signatures
4.1 Cryptography vs. steganography
4.2 Steganographic system of secrecy
4.3 Steganographic authentication system
4.4 Steganographic system of secrecy according to the model of embedding
4.5 Steganographic system of secrecy according to the model of synthesis
5.1 Observability of the user in a star-shaped integrated broadband switching network which switches all services
5.2 Link encryption between network connection and switching center
5.3 Point-to-point encryption between user stations
5.4 Point-to-point encryption between user stations and link encryption between network connections and switching centers, and between switching centers
5.5 Evaluation of efficiency and unobservability of the addressing type/address management combination
5.6 Broadcast vs. requesting

5.7 Example of a message service with m = 5 and s = 3; requested is message 3
5.8 Anonymity property of the RING network
5.9 Reconfiguration of a braided ring
5.10 Superimposing sending
5.11 Illustration of the induction step
5.12 Pairwise superimposing receiving of the user stations T1 and T2
5.13 Message format of the algorithm for resolution of collisions with averaging and superimposing receiving
5.14 Detailed example of the algorithm for resolution of collisions with averaging and superimposing receiving
5.15 Booking scheme with generalized superimposing sending for the stations Ti
5.16 Practical encoding of information units, keys, and local and global superposition results during transmission: unique binary encoding
5.17 The DC network's three topologies
5.18 Superimposition topology with minimum delay for gates with two input channels and a driver power of two, for the example of binary superimposing sending
5.19 Sender-partitioned DC network with 10 stations, configured so that any desired faulty behavior of any desired, but one and only one, station can be tolerated without fault diagnosis
5.20 Farthest propagation of an error (or a modifying attack) of station 3
5.21 Error detection, localization and correction in a DC network
5.22 Basic functions of a MIX
5.23 Maximum security: passing through the MIXes simultaneously and, because of that, in the same sequence
5.24 MIXes hide the connections between incoming and outgoing messages
5.25 Transmission and encryption structure of messages in a MIX network, using a direct recoding scheme for sender anonymity
5.26 Decreasing message length while changing the encoding
5.27 Indirect length-preserving recoding scheme
5.28 Indirect recoding scheme for sender anonymity
5.29 Indirect recoding scheme for receiver anonymity
5.30 Indirect recoding scheme for sender and receiver anonymity
5.31 Indirect length-preserving recoding scheme for special symmetrical concelation systems
5.32 Breaking the direct RSA implementation of MIXes
5.33 Processing of the channel request message sent by R
5.34 Processing of channel establishing messages
5.35 Connecting channels
5.36 Two additional alternative ways over disjoint MIX sequences
5.37 MIXi can be substituted alternatively by MIXi′ or MIXi* (i = 1, 2, 3, 4, 5)
5.38 One MIX each can be left out; the coordination protocols are transacted in the picture within groups of minimal circumference
5.39 Allocation of end-to-end and link encryption in the ISO OSI reference model

5.40 Allocation of basic techniques for securing traffic data and interest data in the ISO OSI reference model
5.41 Integration of different security measures into one network
5.42 Interfaces of caller and callee with respective local switching centers (divided in two functional parts) and attached MIX cascade
5.43 Data needed by MIXes MRi for TS channel establishment
5.44 Data needed by MIXes MRi for TR channel establishment
5.45 Connection establishment in the simplest case; the sending of start messages, i.e. the original start of the connection, is not shown
5.46 Connection request message of Q for S
5.47 Parameters of the cryptographic systems used
5.48 Functions of the interfaces
5.49 Connection of wireless users to a MIX network
6.1 Classification of pseudonyms according to personal relation
6.2 Transfer of a signed message from X to Y: functional view (left), graphical view (right)
6.3 Authenticated anonymous declarations between business partners who could be made public
6.4 Fraud protection for completely anonymous business partners through an active trustee who is able to check the merchandise
6.5 Fraud protection for completely anonymous business partners through an active trustee who is not able to check the merchandise
6.6 Basic scheme of a secure anonymous digital payment system
8.1 Centralized computation protocol between participants Ti: every Ti sends an input Ii to a central computer everybody trusts, which calculates the functions fj and sends every Ti its output Oi
8.2 Distributed computation protocol between participants Ti: every Ti hands over its input Ii and receives its output Oi
9.1 Secure digital signatures allow autonomous and secure concelation
9.2 Key escrow concelation without permanent monitoring allows exchanging keys and therefore a) "normal" concelation, b) asymmetric concelation without key escrow and c) symmetric concelation without key escrow
9.3 Key escrow concelation without retroactive decryption allows direct concelation without key escrow
9.4 Concelation through symmetric authentication, by Rivest
9.5 Concelation through symmetric authentication without participant activity
9.6 Where key backup is useful, unnecessary or an additional risk

1 Introduction

In order to outline the area of application, a short overview of computer networks and their services is given first.

Afterwards it is discussed in more detail what is understood by security, in particular which general questions have to be asked: What is to be protected? Against whom is it to be protected? How and whereby can security be achieved?

At the end of the introduction, it is made concrete what security in computer networks should therefore mean, and it is explained why multilateral security is practical and necessary.

1.1 What are computer networks (distributed open systems)?

First the area of application of this script is to be sketched briefly:

• What are computer networks and what are they supposed to do?

• What does one understand by distributed open systems?

• Which types exist?

• Which services are implemented or planned, and which are conceivable?

After humans had created computers and developed them to a certain perfection, they saw that computers would be more useful if they could communicate with each other. That way, they would be able to compensate not only the limited computing and memorizing talents of humans, but also their limited communication range and possibilities of movement. Humans started to connect computers over communication networks to computer networks (of the first type). Step by step, parts of the communication networks, e.g. electromechanical switching equipment, were replaced by process computers. For that reason, practically all communication networks depend on functioning computers today. These are the computer networks of the second type.

At the next cultural level, humans discovered that they had created many different computers which did not give electrical signal levels, bits, characters, and data structures the same meaning (the meaning wasn't God-given). A spatially distributed information-technical system (IT system), which uses the possibility of physical communication, had been created. But it was not open in the sense that two arbitrary connected computers could exchange messages with their content maintained (files, documents, pictures) and work on them usefully. Since then, masses of technicians have been concerned with standardization: first of a reference model for open communication, afterwards of individual communication protocols which fit into this reference model, and finally of their implementation on the most different computer types. Meanwhile, ingenious technicians discovered that computers which can communicate do not have to be far away from each other. Enhancement of availability (fault tolerance is supported by limiting the physical effects of errors) or of throughput (parallel work) through decoupling speaks increasingly for implementing IT systems that are distributed with respect to their control and implementation structure. As a consequence, one generally speaks of a distributed system if there is no instance which has a global view of the internal state of the system. Therefore, a distributed system cannot be controlled centrally by another instance.

On the history of computer networks:

1833 First electromagnetic telegraph [Meye 87]
1858 First cable connection between Europe and North America [Meye 87]
1876 Telephone over an 8.5 km long test section [Meye 87]
1881 First telephone local area network [Meye 87]
1900 Beginning of wireless telegraphy [Meye 87]
1906 Introduction of direct distance dialing in Germany, implemented by two-motion switches, i.e. the first fully automatic switching by electromechanics [Meye 87]
1928 Telephone service Germany–USA introduced (over radio) [Meye 87]
1949 First functioning von Neumann computer [BaGo 84]
1956 First transatlantic cable for telephony [Meye 87]
1960 First satellite for remote communication [Meye 87]
1967 Start of operation of the Datex network by the German Federal Post Office, i.e. the first communication network implemented particularly for computer communication (computer network of the first type). The transfer takes place digitally, the switching via computers (computer network of the second type). [Meye 81]
1977 Introduction of the electronic switching system (EMS) for telephony by the German Federal Post Office, i.e. for the first time switching by computers (computer network of the second type) in the telephone network, but still analogue transfer [Meye 87]
1981 First personal computer (PC) of the computer family (IBM PC) which finds wide distribution also within the private sector
1982 Investments in the transmission system of the telephone network increasingly go into digital technology [ScSc 84, ScSc 86]
1985 Investments in the switching systems of the telephone network increasingly go into computer-controlled technology, which now no longer switches analogue signals, but digital ones [ScSc 84, ScSc 86]
1988 Beginning of operation of the ISDN (Integrated Services Digital Network)
1989 First vest-pocket-sized PC: Atari Portfolio; thus, strictly speaking, computers become personal and mobile
1993 The cellular radio networks with GSM standard (in Germany: D1, D2) become a mass service with several hundred thousand participants. Only digital signals are transferred, and they are switched by computers.


1994 The World Wide Web (WWW), a distributed hypertext system into which diagrams and pictures can be merged too, opens so many application possibilities of the Internet also for untrained users that commercialization is seriously discussed (and accordingly the necessity of jurisdiction and, unfortunately, censorship).
1998 All switching centers of the German Telekom are digitized [Tele 00], i.e. the process begun in 1985 of converting all switching systems to computer-controlled technology is completed.
2000 Via so-called WAP-enabled cellular radio telephones, mobile access to suitably arranged contents of the WWW is possible. In spring, WAP-enabled mobile phones (140 g) with a prepaid card (25 DM starting credit) and without contract commitment are sold for 199 DM in order to promote the development of a mass market.

The following services had been realized by means of the following communication networks in 1991:

• Sound broadcasting, television, videotext, etc. are services (more exactly: their information) that are distributed over the broadcasting station network, but more and more over the developing broadband cable network. The purpose of the broadband cable distributed network is the improvement of service quality by an increase of the available bandwidth: more receivable television programs is then called cable television; a larger information offer with videotext is then called cable text.

• Telephone, interactive videotext, electronic P.O. box (TELEBOX) for electronic letter and voice mail, telex (TELETEX), remote copying (FAX) and remote control (TEMEX) as well as telebanking (electronic cash) are services that are switched over the telephone network and/or the digital text and data network, cp. figure 1.1. The telephone network, which in the participant exchange area is still nearly everywhere analogue, but in the interconnection area already to a large extent digital since 1991, gets digitized step by step: since 1988 in some local area networks, and starting from approximately 1993 in the whole Federal Republic also in the participant exchange area. Afterwards it will furnish all these services with higher quality. The analogue telephone network and the digital telephone network using the participant's access line are narrow-band networks (≤ 1 Mbit/s), i.e. in contrast to wide-band networks (> 1 Mbit/s) they are not able to transfer video in high quality (e.g. television).

At present we use two fundamentally different types of communication networks:

• Distributed networks, in which all connections of the net receive the same data and each participant selects locally whether, and if so, what it actually wants to receive, and

• Switching networks, in which each connection individually receives from the net only what its participant requested or what another participant sent to it.
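The difference can be made concrete with a small toy model (a sketch in Python; all class, function and variable names are made up for illustration and do not stem from this script): in a distributed network the net hands every message to every connection and selection happens locally at the receiver, whereas in a switching network the net itself selects, and thereby learns, who receives what.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    address: str          # recipient's address, chosen by the sender
    payload: str

@dataclass
class Station:
    name: str
    inbox: list = field(default_factory=list)

    def receive_broadcast(self, msg: Message) -> None:
        # Local selection: only the station itself decides what to keep,
        # so the net cannot tell which messages were actually wanted.
        if msg.address == self.name:
            self.inbox.append(msg.payload)

def distributed_send(stations: list[Station], msg: Message) -> None:
    """Distributed network: every connection receives the same data."""
    for station in stations:
        station.receive_broadcast(msg)

def switched_send(stations: list[Station], msg: Message) -> None:
    """Switching network: the net routes to exactly one connection,
    so the switching center sees the sender-receiver relation."""
    for station in stations:
        if station.name == msg.address:
            station.inbox.append(msg.payload)

stations = [Station("participant 1"), Station("participant 2")]
distributed_send(stations, Message("participant 2", "hello"))
print(stations[1].inbox)   # ['hello']
```

Both functions deliver the same payloads; the privacy-relevant difference, taken up again in chapter 5, is only where the selection takes place.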

[Figure 1.1: A sector of a network — participants 1 and 2 are attached via participant's access lines and network connections to switching centers 1 and 2; the pictured services include telephone, visual telephone, videotext, interactive videotext (with server and bank), television, and radio.]

In nearly all realized and planned public distributed networks, communication takes place only in one direction, from the net to the participant [Krat 84, DuD 86]; in switching networks, communication generally takes place in both directions.

Since in the long term one net for all services, a so-called service-integrating net, is, at least in the participant exchange area, less expensive than several different nets, and since of course all distributed services can also be switched, the aim is to switch all services in one net [Scho 84, ScSc 84, ScS1 84, Rose 85, Thom 87].

So that wide-band services, too, can be transferred to the participant in a switched manner, new participant access cables (fiber optics) must be laid. Thereby a broadband cable distributed network gradually becomes redundant. Over a wide-band service-integrating switching network, not only can all services be offered which a broadband cable distributed network and a narrow-band switching network can offer together, but additionally services which require wide-band communication between participants, e.g. picture telephony.

Since in modern technical systems digital values can not only be transferred more easily, but also be processed and switched more easily, the service-integrating net will be a digital net from connection to connection. I call such a communication network a "service-integrating digital network", whereas I use the abbreviation ISDN (Integrated Services Digital Network) only for the service-integrating digital network operated by Deutsche Telekom AG, which satisfies further restrictions. Already in 1986 the digitization saved 40% of the costs of the remote exchange technology, and already at that time it was not more expensive for the local exchange technology, with falling tendency [Scho 86]. Both achieve 65% savings of space, which lowers the above-ground construction costs enormously, and faster connection establishment and larger net flexibility save 15 to 20% of the necessary transmission capacity.

[Figure 1.2: Development of the wire-based communication networks of the Deutsche Bundespost Telekom (now: Deutsche Telekom AG), as planned in 1984 and largely realized — the telephone network, the integrated text and data network (telex, teletex, DATEX-L, DATEX-P, TELEBOX, TELEFAX, TEMEX), the video conference network (BIGFON) and the broadband cable distributed network converge, via ISDN and wide-band ISDN, towards integrated networks (stages: 1986, since 1988, since 1990, since 1992).]

When broad-band ISDN (abbreviated B-ISDN [Steg 85]) replaces the broadband cable distributed network, whereby on the one hand the maintenance of the broadband cable distributed network can be saved and on the other hand high-definition TV (HDTV) can be offered to a larger extent, it is called an integrated wide-band telecommunication network (IBFN) [ScS1 84, Thom 87]. Pilot attempts for testing the IBFN are known under the abbreviation BIGFON (wideband integrated fiber-optic telecommunication network) [Brau 82, Brau 83, Brau 84, Brau 87]. In the meantime the name IBFN has come out of fashion, and after the abolition of the network monopoly the goal of such an extensive monolithic infrastructure is also less publicly effective. Although in the meantime there are several network carriers, not each of which promotes the goal IBFN, the technical development continues in this direction. Meanwhile fiber optics, for example, lie in most houses in the new Länder of the Federal Republic, although mostly unused so far. The latter can be changed comparatively fast.

Since 1994, the vision of a net integrating all services has been discussed less from the view of transmission technology, as in the decade before, and more from the view of computer communication: the Internet takes over the hopes, goals and illusions of the IBFN; it is to become the medium also for mass communication. Instead of gigantic bandwidth as with the IBFN, high-quality data compression techniques and adaptive service organization stand in the center of interest.

The described development is represented in more detail in figure 1.2. Not represented, and in the following explicitly treated only in §5.7, are radio networks, which have a permanent significance as connection networks for mobile participants, as replacement networks in emergencies, and as guarantors of transnational freedom of information.

Beside these digital public long-distance traffic networks there are local computer networks (LANs) – without these, this script would not have found its way from the computer to the printer so comfortably. Already today, for example, modern hospitals and administrations depend absolutely on their LANs. In the future, not only airplanes will be controlled by computer networks (keyword: fly by wire; the first civilian airplane is the Airbus A320, first flight 1988), but also cars. The latter are called CANs (controller area networks¹), which connect computers to instruments, engine and brakes.

Already today, many LANs are connected via public long-distance traffic networks. And with CANs, too, this can be meaningful.

In short: often already today, but in a few years almost comprehensively, interactions with technical devices will be input and output to and from computer networks. The fact that we interact with a computer network will be noticeable to us less and less, since the computers recede into the background in their appearance – but not regarding their security (problems).

1.2 What does security mean?

This second part of the introduction deals with which dangers generally threaten in and from information-technical systems (IT systems). One may think of errors by humans, failures of integrated circuits, and in addition consciously malicious behavior of humans, who naturally can also use the assistance of IT systems for this (computers, communication systems, etc.).

After a short enumeration of what is to be protected, it is discussed against whom it is to be protected and on which levels the protective mechanisms must be set.

¹Sometimes this name is not used for a class of computer networks, but for a special one: Bosch developed the "Controller Area Network (CAN)", which is in the meantime internationally standardized (ISO 11898).


1.2.1 What is to be protected

A usual organization of the threats and corresponding protection goals for systems of information technology (IT systems) is the following threefold division [VoKe 83, ZSI 89]:

1. unauthorized gain of information, i.e. loss of confidentiality,

2. unauthorized modification of information, i.e. loss of integrity, and

3. unauthorized impairment of functionality, i.e. loss of availability.

Appropriate examples may illustrate this:

1. If case histories (investigations, diagnoses, therapy attempts) are no longer stored on record sheets, but in computers, then it is surely unwanted that the computer manufacturer receives read access to these data during maintenance or repair measures.

2. It can become lethal for patients if someone changes data unauthorized and unnoticed, e.g. the instruction for the dosage of a medicine which is to be given.

3. Likewise it can become lethal if a patient's history is stored only in the computer, but the computer recognizably fails just when it must be queried for a therapy measure.

Unfortunately, no classification of the threats is known. In particular, the above threefold division is not a classification, even with the attributes I set in italics: if a currently not executed program in memory is modified without authorization, this is an unauthorized modification of information. If the unauthorizedly modified program is executed, this is an unauthorized impairment of functionality.

Because of these demarcation difficulties, caused by the interpretability of information, some scientists do not want to differentiate between the second and the third threat.

But I myself consider this distinction meaningful under pragmatic criteria, because for the preventive measures treated later one is usually able to differentiate clearly which of the three protection goals they serve.

Additionally, this distinction has a correspondence in the notion of correctness, if one adds the attributes written in italics in the examples, unnoticed resp. recognizably, to the definitions of integrity resp. availability:

Integrity (= no unauthorized and unnoticed change of information) corresponds to the partial correctness of an algorithm: if it supplies a result, it is correct.

Integrity and availability together correspond to the total correctness² of an algorithm: a correct result is supplied (sufficient resources for the execution of the algorithm presupposed, see below).

For nearly all applications with availability requirements it is characteristic that correct results are needed not sometime in the future, but at specific times (the next necessary therapy measure in the above example). Put pointedly: measures ensuring integrity see to it that the correct things happen; measures ensuring availability see to it that things happen in time [Hopk 89]. The proof of total correctness alone is not sufficient for a general proof of availability. Additionally, the necessary resources must at least be analyzed and their sufficient availability must be proven. Thus:

Integrity and availability = total correctness and sufficient resources available

A few notes on the above division of threats and protection goals:

The term unauthorized in [ZSI 89, ITSEC2d 91] must be understood very broadly, otherwise the above three threats are incomplete: For example, the failure of a transistor can very well modify information or impair functionality, and even cause a loss of confidentiality. But it is unclear whether, in general linguistic usage, it represents an unauthorized event with unauthorized consequences. For powers will in general only be conferred on instances bearing responsibility, that is, humans. Instances without responsibility cannot act unauthorized, and so cannot cause anything unauthorized either.

The impairment of functionality must be measured with respect to the entitled users. Otherwise, if an IT system is used intensively by unauthorized users, its functionality would not count as impaired, because it functions. But depending on the extent of the unauthorized users' utilization of the IT system, its functionality is withdrawn from the entitled users [Stel 90].

In [Stel 90] it is criticized that [ZSI 89] does not specify the protection goal of (legal) bindingness. A similar protection goal, called accountability, was later introduced into the Canadian security evaluation criteria [CTCPEC 92] as an additional fourth goal. It allows some protection goals to be described more concisely, cp. §1.3. Since such threats and protection goals can also be subsumed under integrity, and one should get along with as few threat types as possible, I stick with the usual three-division.

Positively, and as briefly as possible, the three protection goals are then:

confidentiality = information is known only to entitled users.

integrity = information is correct, complete, and up-to-date, or this is recognizably not the case.

availability = information is accessible where and when it is needed by entitled users.

Here "information" is meant in a broad and comprehensive sense in each case, so that it covers data and programs, and in addition hardware structures3.

So that talking about "entitled users" makes sense, it must be clarified, at least outside of the considered IT system, who is entitled to what in which situation.

3 Since procedures, that is: functionality, can be described not only by programs but also by hardware structures, this is necessary.


Finally it should be noted that "correct, complete and up-to-date" refers in each case only to the inside of the considered system, since the correctness of the system's view of its environment cannot be ascertained within the considered system.

1.2.2 To protect against whom

On the one hand, the laws of nature take effect in and on every technical system, and, if not protected against, so do the forces of nature. The former means that components age and eventually will not work as specified. The latter means that precautions against overvoltage (lightning strikes, EMP), voltage drops, flooding (heavy rainfall, broken water pipes), sudden temperature changes, etc. need to be taken. Both topics are covered by the special field of fault tolerance [Echt 90].

On the other hand, people may exert influence in an undesired manner through inability, negligence, or unauthorized acting. According to their role within the IT system, it is useful to separate them into (cp. figure 1.3):

externals
users of the system
operators of the system
servicemen
producers of the system
designers of the system
producers of design and production tools
designers of design and production tools
producers of design and production tools for the producers of design and production tools
...

Note that all of the producers and designers mentioned here have users, operators, and maintenance services of their own. If necessary, you have to protect against them, too.


[Figure 1.3: People use software programs to develop other software programs and computers. Legend: "A uses B for designing C"; "Computer X runs program Y"; symbols distinguish programs from computers.]

While it is common to distrust externals and users of the IT system, and therefore to protect against their unconscious mistakes and conscious attacks, all the other roles (especially the influence of further IT systems) are usually left out. This is a serious security gap, because not only unconscious mistakes but also conscious attacks can propagate along the design and production line. In figure 1.4, the human at the top is not necessarily to blame if the computer-aided-produced computer is faulty. It may just as well be the person at the very bottom, at the bottom left, or at any other lower level who has to be taken into account.


[Figure 1.4: Transitive distribution of errors and attacks. The design/production graph of figure 1.3, with arrows indicating the propagation of errors and attacks downward along the "A uses B for designing C" and "Computer X runs program Y" relations.]

The especially high complexity of today's IT systems prevents sufficient inspection of the systems themselves. That is why their producers, i.e. the people involved as well as the (ultimately also man-made) machines used, have to be considered as potential attackers. For example, they could hide Trojan horses within the parts of the system they designed or produced:

A system part is a Trojan horse if, using the data given to it and the permissions granted to it, it does more than expected4, or does the expected incorrectly or not at all5, cp. fig. 1.5. For instance, a text editor could not only save the input to the user's file but also send a copy to the author of the program. For this purpose the Trojan horse needs a covert channel6 [Cove 90, Denn 82, Lamp 73][Pfit 89, pages 7, 18]. If the editor in our example could not use a second file as a covert channel (on many computers it can), it could modulate operating system resources (memory consumption, CPU time), which can be observed by other programs.

Unfortunately there is no practical method to reveal all Trojan horses in an examined system, or to rule out their possible existence. All in all, you have to rely on inspection with the naked eye (given the complexity of today's computers, you might as well inspect with your eyes closed).

The bandwidth even of known covert channels cannot be reduced arbitrarily without enormous additional costs. The highest assurance levels in IT security criteria allow, without limiting the number of channels, a bandwidth of 1 bit/s per covert channel [ZSI 89]. Suppose a Trojan horse transmits to the attacker at "only" 1 bit/s: within one year the attacker would receive about 4 MBytes of data.
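
The 4 MByte figure can be checked with one line of arithmetic; the following minimal sketch (plain Python; the 1 bit/s rate is the one from the text) does the computation:

    # Volume leaked by a covert channel of a given bandwidth over a given time.
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60      # about 3.15e7 seconds

    def leaked_megabytes(bits_per_second, years=1):
        """Return the leaked data volume in megabytes."""
        bits = bits_per_second * SECONDS_PER_YEAR * years
        return bits / 8 / 1_000_000

    print(leaked_megabytes(1))                 # about 3.9 MByte per year at 1 bit/s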

Fortunately, by using diversity, i.e. by having the design carried out by several independent designers with independent tools, you can reduce the bandwidth to 0 bit/s [Cles 88]. But the expenditure for that would be considerable.

4 In the literature, a Trojan horse is often defined as a "program executing undocumented additional functions along with its alleged own function". With its unnecessary limitation to programs and to additional functions, this definition seems somewhat unfitting; besides, for every function there exists at least one documentation containing every detail: the program code itself [Dier3 91].

5 At first glance this definition may look too broad, because no intention or malice of the author is demanded, whereas according to Homer the Greeks deployed the wooden horse in front of Troy with full intent. For the extended notion of Trojan horses introduced here, two things should be mentioned:

1. Whether there is intention or malice behind functions of system parts cannot be decided; a mathematical definition with respect to those attributes would be quite hard to handle. (Moral or legal objections, which would be relevant for prosecution, remain untouched here.)

2. For the effects of unexpected functionality it does not matter whether it was created with intention or malice. Only whether it is exploitable and known to potential offenders matters.

6In the literature both hidden channel and covert channel are used with the same meaning.


Figure 1.5: Universal Trojan Horse

This very broad definition of Trojan horses covers many different kinds, some with martial names like logic bomb, etc. However, for a principled inspection only the attributes "universal" and "transitive" are important.

Usually the damage functions of a Trojan horse are fixed during its creation. Such a Trojan horse is a rather inflexible and untargeted attacker, and if it is discovered, its author's intention to do damage is verifiable. But it is also possible to create, at creation time, only an acting frame which is filled with life during operation: The input for the Trojan horse is hidden in the input of the harmless-looking system part containing it (fig. 1.5). The coding of this input is chosen so redundant that the probability that an innocent person giving input to the infected system part accidentally gives input to the Trojan horse is negligible. In the most general case, the Trojan horse can be freely programmed during operation. Dorothy Denning named such Trojan horses universal [Denn 85]. The targeted use of a universal Trojan horse is of course only possible for people having access during operation; especially in open systems, this is nearly everybody. With the help of universal Trojan horses, very flexible and pointed attacks are possible: The user of a Trojan horse does not have to wait for years to receive the data he is interested in, but can make it send them immediately.
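
As a toy illustration, a universal Trojan horse inside an editor might look as sketched below. All names are hypothetical and the damage function is only simulated by a print statement; the point is the highly redundant trigger, which an innocent user is extremely unlikely to type by accident:

    # Hypothetical sketch of a universal Trojan horse hidden in a "save" routine.
    TRIGGER = "@@TH-7f3a9c@@"   # redundant marker, negligible chance in innocent input

    def save_file(text, filename):
        if TRIGGER in text:                         # input addressed to the Trojan horse
            command = text.split(TRIGGER, 1)[1]     # it is "programmed" during operation
            print("Trojan horse would now execute:", command)
        with open(filename, "w") as f:              # the expected, documented function
            f.write(text)

    save_file("dear diary ...", "note.txt")                              # harmless use
    save_file("x " + TRIGGER + "copy note.txt to attacker", "note.txt")  # targeted attack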

Usually one thinks that a Trojan horse has to be placed by the offender directly where it shall take its subversive effect. But that thought is wrong: If, during the design of the system, other tools are used (e.g. compilers), both the designer and the design tool can infiltrate a Trojan horse into the design. Since the design and production tools were themselves made with other design and production tools, the Trojan horse can spread recursively within the transitive closure of everything that directly or indirectly participated in the design or production of the host system [Thom 84, Pfit 89]. That is why I speak of transitive Trojan horses.


The circle of people who must be considered as potential attackers grows continuously because of the transitivity of Trojan horses, which corresponds to the recursive characterization of the set of designers and producers given earlier. This growth is bad for two groups of reasons:

• Firstly, the probability that none of them is an attacker decreases simply because of the growing number of people. Additionally, with the size and heterogeneity of the group, the sense of partnership and of being responsible for each other decreases.

• Secondly, controlling these people becomes more and more costly, unacceptable (catchword: Big Brother), and limited by the required cooperation between nations (which country does not refuse the surveillance of its people by others; on the other hand: which country refuses the import of software?).

1.2.3 How and through what security can be achieved

First of all, we have to agree on what "unauthorized" means. That could be achieved by international conventions, constitutions, laws, cooperation agreements, or by the ethical codes of the professional bodies of computer scientists. "Unauthorized" acting is threatened with penalties.

Organizational structures should be chosen appropriately so as to impede, or better, to prevent unauthorized acting.

Are decrees, prohibitions, and organizational measures sufficient?

A prohibition is only useful if its observance can be verified and is backed by legal prosecution, and if the original state can be recovered. Neither holds for IT systems:

Common "data theft", especially wiretapping and copying data out of computers, is almost impossible to detect, because the original data do not change. As mentioned above, it is just as impossible to detect Trojan horses reliably. And nothing need be said about the further processing of data that was received legally or illegally.

The recovery of the original state would consist of deleting the created data copies. But you can never rely on the non-existence of further copies. Besides, data can be held in human memory, where deletion should not be attempted.

So more effective measures are needed; preventive technical precautions are the only ones known.

1.2.4 Preview of precautions

Fig. 1.6 gives a preview of measures which will be explained in the next chapters. The column "preventing undesirables" (read: undesired actions of the system are prevented) corresponds essentially to the goal of confidentiality; the column "rendering desirables" (read: desired actions of the system shall take place) corresponds to the goals of integrity and availability.7

[Figure 1.6: Which security measures protect against which attackers? A table: the rows list the potential attackers (designers and producers of design and production tools, system designers, system producers, maintenance service, system operators, system users, external people); the columns "rendering desirables" and "preventing undesirables" list the corresponding measures: understandable languages and (provisional) results analyzed using independent tools; design by several independent designers using independent tools; product analysis using independent tools; restriction of physical access by tamperproof cases; restriction and logging of logical access; control as with a new product; physical separation of maintenance from the system and cryptographic separation from the data; physical distribution and redundancy; enforcing unobservability, anonymity, and unlinkability.]

7 Unobservability, anonymity, and unlinkability may contribute in a very subtle way: If the attacker is not willing to affect all (or at least many) participants, because he does not want to attract attention, or, in the case of a politically motivated attack, does not want to hurt the weak members of society, then unobservability, anonymity, and unlinkability prevent a selective attack on the availability of certain groups of participants or of individual participants. In such situations one need not reckon with attacks on availability by offenders motivated in this way.


1.2.5 Attacker model

Protection from an almighty attacker is of course impossible: an almighty attacker could

1. intercept all data he is interested in at the point of their creation (or, if necessary, collect data at different points and combine them into the data he is interested in),

2. alter data unnoticeably, for if the user knew the data exactly, he could do without the system, and

3. affect the whole IT system’s functionality by physical demolition.

That is why all the following measures are only approximations to a perfect protection of the participants against all possible attackers. The approximation is determined by specifying the maximal strength of attacker considered, formed into a so-called attacker model.

To characterize individual protection mechanisms, one should describe quite exactly the attacker of maximal strength against whom they protect. Besides the possible roles of the attacker (external, user, operator, serviceman, producer, designer, ...), the attacker model comprises the following attributes. To describe how widely the attacker is spread, one should quantify his possible roles: Which physical and cryptographic barriers can he surpass? Which subsystems can he use, and which can he take control of, in which manner? Which subsystems could he use as operator or serviceman, and in which way? As a producer, where could he infiltrate Trojan horses? And so on.

If all subsystems are equal with respect to the attacker's role, it is sufficient to state a non-negative integer indicating how many subsystems the attacker can control.

If one distinguishes between lines and stations (e.g. switching centers, access points, end-user devices) as the smallest subsystems of a computer network, one has to specify the number of stations and lines which can be observed or even actively manipulated.
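
The attributes just listed can be collected into a small data structure. The following sketch (the field names are mine, not from the text) shows what a machine-readable attacker model might look like:

    from dataclasses import dataclass, field

    @dataclass
    class AttackerModel:
        """Maximal attacker strength a protection mechanism is designed to resist."""
        roles: set = field(default_factory=set)  # e.g. {"external", "user", "operator"}
        controlled_stations: int = 0             # stations the attacker may control
        observed_lines: int = 0                  # lines he may passively tap
        modifying: bool = False                  # observing only, or also modifying?
        unlimited_computation: bool = False      # information-theoretical attacker?

    # Example: a modifying outsider tapping at most 5 lines, computationally limited.
    model = AttackerModel(roles={"external"}, observed_lines=5, modifying=True)
    print(model)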

Does the attacker behave toward the outside (more precisely: within the whole IT system, but outside his own system parts8; cp. fig. 1.7) as he is permitted to, so that he only performs an observing attack? For example, he could listen to wire or radio connections, evaluate data in a form other than the permitted one, or store data longer than allowed. Or does the attacker risk more by doing things prohibited to him outside his system parts, i.e. perform a modifying9 attack?

In general, observing attacks cannot be recognized, but their success can, as we will see later, in principle be completely thwarted (with respect to the collection and processing of data the attacker would not obtain regularly). Modifying attacks can in principle be detected, but their success cannot be completely prevented. Outside the considered IT system, observing attacks can possibly be detected, and the outcome of a modifying attack can be limited.

8 System parts of the attacker are understood as being controlled exclusively by him, such that others cannot easily look inside. The zone of control, and thereby the zone of influence, of the attacker is usually larger; it often comprises big parts of the IT system itself.

9 I am not fully happy with the terms observing vs. modifying attacker, but I know no better ones. Using protocol-abiding vs. protocol-violating, one would have to assume that everything specified by the protocol is allowed. But this does not hold true for today's protocols.

It may be useful, for a finer description, to assign the attributes observing/modifying not to the attacker as a whole but to his different roles within the IT system.

[Figure 1.7: Observing vs. modifying attacker. Two panels each show the considered IT system, the attacker's system parts within it, and the rest of the world. Left: the attacker shows only allowed behaviour toward the outside (observing attacker). Right: he also shows prohibited behaviour (modifying attacker).]

In §1.2.2 we distinguished between stupid or accidental mistakes and intelligent, targeted attacks. For the latter it may be useful to distinguish further: How much computing power does the attacker have?

One is on the safe side if one assumes unlimited computing power; then we speak of a computationally unrestricted (or: information-theoretical) attacker. Proofs of security must then be conducted in the information-theoretical model of Claude Shannon, which is covered more precisely in §3.

One takes a risk by assuming that the attacker has only limited computing power at hand, so that, e.g., he would not be able to factor large numbers within reasonable time, cp. §3. We then speak of a computationally restricted attacker. Here you can be wrong with respect to his hardware resources and his knowledge of algorithms; the main risk, as we will see in §3, lies in underestimating his knowledge of algorithms.
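
A quick experiment shows why the computing-power assumption matters here. Naive trial division needs on the order of sqrt(n) steps, i.e. time exponential in the bit length of n; the sketch below is illustrative only and says nothing about the best known factoring algorithms:

    def smallest_factor(n):
        """Return the smallest prime factor of n by naive trial division."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n   # n itself is prime

    # Feasible for small moduli (about 2.2e7 loop iterations here) ...
    print(smallest_factor(15485863 * 32452843))
    # ... but for a 1024-bit product of two large primes, trial division would
    # need roughly 2**512 steps, far beyond any conceivable attacker's budget.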

To make the computational restriction concrete, one can distinguish attackers by how much money and time they have and how much they are willing to spend. Finally, the money can also be used entirely untechnically, outside the IT system: Bribing people is the most effective means in many situations, e.g. to make designers install Trojan horses, operators manipulate the IT system, or users with the right permissions cooperate.

Establishing a realistic attacker model is an art, comparable to making estimates in mathematical proofs: If the estimate is too coarse, the proof fails. If the estimate is too fine-grained, the proof becomes unnecessarily laborious, fault-prone, and not very convincing. In the attacker model, all possible attack points and kinds are often combined into one attacker, even though one may hope that in reality the instances controlling all these attack points will not cooperate.

A realistic attacker model should cover not only all attackers known today, but all attackers that can be expected during the lifetime of the system. It should be as simple as possible, so that the remaining risks, should an unexpectedly strong attacker appear, are easy to understand. Despite this approximation of the attacker, it should be affordable to build and operate a system secure against him, and a proper proof that all protection goals are met should succeed.

The life span of a system influences not only the measuring and manufacturing technology available to a future attacker, but also his motivation, his willingness to spend money or time, and the risk he will assume. All of these can increase through political developments, through a (politically or economically caused) increase in the value of the data processed in the system, or through a grown dependence of individuals, organizations, or nations on the system. If in doubt, one should not try to make a finer risk analysis, which is impossible anyway due to unpredictable future developments, but rather increase the effort spent on building a system tough enough to resist stronger attackers.

1.3 What does security mean in computer networks?

Without any claim to completeness and precision, let all possible protection goals be mentioned here. They are covered more thoroughly in §5, after the necessary formal and cryptographic terms have been introduced in §3.

Protection goal confidentiality

• The content of messages should be kept confidential from every instance except the communication partners.

• Sender and/or receiver of messages should remain anonymous to each other and unobservable to uninvolved parties (including the operator of the network).

Protection goal integrity

• Forgeries of message content (including the sender's address) should be recognized.

• The receiver should be able to prove to third parties that instance X sent the message to instance Y (and additionally when this happened).

• The sender should be able to prove the sending of the message with the correct content and, if possible, also its receipt (and additionally when this happened).

• Nobody should be able to withhold fees from the network operator for services rendered. Conversely, the operator should only be able to demand fees for correctly rendered services.


Granted, placing the last three protection goals under integrity was forced. Alternatively, one would have to introduce further protection goals, which easily become too many, cp. the end of §1.2.1. The last protection goal could also be placed under availability.

Protection goal availability

• The computer network enables communication between all partners who wish to communicate (unless this is explicitly denied to them).

These protection goals are multilateral; they are not equally strong for all participants. Depending on the subject and circumstances of the communication, individual protection goals will not have the same importance for the same user at all times.

1.4 Why multilateral security

Let us generalize: We wish that everybody is protected against everybody else, for everybody can threaten the protection interests of others (through devices containing Trojan horses, through handling errors, out of self-interest, or out of malice) and must therefore be considered a potential attacker. If everybody is protected as well as possible against everybody else, we speak of multilateral security [RaPM 96].

Multilateral security means taking the protection interests of all participants into account, and resolving the resulting conflicts of interest, for instance when a communication connection is established.

The realization of multilateral security does not inevitably lead to the fulfillment of all participants' interests. But since the protection goals are formulated explicitly, it may reveal incompatible interests of which nobody had been aware. In any case, the realization of multilateral security guarantees that partners interact within a communication relation with a clear balance of power.

Technically, this means giving the user the ability to formulate, negotiate, and enforce his protection goals. Multilateral security thus requires the development of accessible methods that give users the chance to protect themselves, at least within information-technical systems.

Of course, the practical employment of those methods will depend on the economic, social, and legal realities of the situation. Security is an interdisciplinary issue and therefore requires a close dialogue with other scientific disciplines.

Reading recommendations

Computer networks: [Tane 96]
Security: [MuPf 97], [Denn 82], [ZSI 89], [ZSI 90], [ITSEC2 91]


Security in computer networks: [VoKe 83]
Legal basics: [BFD1 95], [BFD5 98], [Tele 98]
Surveillance technology:

http://www.europarl.eu.int/dg4/stoa/de/publi/166499/execsum.htm


2 Security in Single Computers and Its Limits

After discussing physical security, we will draw a picture of the "logical" protection of isolated computer systems from unauthorized access and computer viruses.

2.1 Physical security assumptions

All technical security measures need to be physically embedded in a part of the system to which the considered attacker has no physical access. In general, we deny reading (for confidentiality) and modifying (for integrity and availability). Depending on the protection concept and on the individual security measures, this encapsulated part of the system can vary in size, figuratively from a "data processing center X" down to a "chip card Y". This part of the system should sensibly be a different one for the various attackers. The sizes differ in physical dimension and in complexity. In both cases, humans act not only as possible attackers, but also (in hopefully useful organizational structures) as intended "defenders".

It will prove very advantageous to consider not only the IT system as a whole, but especially every single user, when trying to achieve security. We can embed the security for user B inside the part of the system protected exclusively by B, e.g. a chip card which B carries around all day, even in the shower, and certainly puts under his pillow at night.

What can we expect from physical security measures at best? With which security measures can we achieve this at least approximately? Are chip cards really physically secure? What are useful physical security assumptions? We will discuss these questions one by one.

2.1.1 What can we expect at best?

The availability of a locally concentrated part of the system cannot be guaranteed if we assume that we have to protect it against maximally powerful conceivable attackers, such as terrorists or, in general, owners of sufficient conventional explosives or even atomic or hydrogen bombs. Considering the enormous physical costs of security (thick reinforced concrete, emergency power supply, etc.), it is more efficient to use distributed systems instead, in the hope that the attacker is not capable of attacking physically at many different places with this strength.

Through the distribution of the IT system, however, the problems of confidentiality and integrity certainly increase rapidly. With respect to these protection goals, more can be expected from physical measures, namely protection against all attackers conceivable today: You can certainly try, e.g., to construct machine cases in such a way that an attacker who is restricted to the currently known laws of nature can neither take unauthorized possession of the information stored inside these cases nor modify it unauthorized. If this can be achieved sufficiently well, then nothing impedes a physical distribution.

2.1.2 Design of security measures

Physical security measures consist of a combination of the following five basic functions: shield, detect, evaluate, delay, delete.

Observing attacks, such as the analysis of electromagnetic radiation, must be prevented by shielding. (Techniques and norms are known under the name TEMPEST [Horg 85]. It is often forgotten that even the inputs, such as the energy consumption, must be independent of the secrets we want to protect [Koch 98].)

Modifying encroachments must be detected and evaluated with respect to their permissibility.

Modifying attacks, i.e. detected modifying encroachments evaluated as unauthorized, must be delayed. If necessary, the protected information must be deleted before the end of the delay period.

Depending on dimension and complexity, it is useful to design the physically protected part of the system in concentric layers [Chau 84]. Several basic functions can be included in each layer, as shown in Figure 2.1. Even multiple instances of the same function can be implemented, e.g. the detection of different physical quantities. Alternatively or additionally, these functions can occur again in more interior layers. Figure 2.1 shows three examples in which each basic function occurs at most twice.

Even if it is possible to increase physical security by an appropriate combination of physical security measures, such measures have two principal disadvantages:

- It is very difficult to validate them impartially. There are two reasons:

First, the security of some physical security measures is based on the secrecy of their exact mode of operation, or at least this mode is kept secret. This can be for military reasons or to protect commercially valuable know-how.1

Secondly, one must know the current state of the art of measurement technology, because it is expensive to employ physical measures much stronger than needed (in contrast to cryptographic measures, cf. §3). Since in the military field the quality of measurement technology is kept secret, the current state of the art is hard to ascertain. For these reasons, the subjective credibility of physical security measures is often fairly low.

1 One of the very few exceptions [Wein 87] is described here briefly: The system part to be secured against physical attacks is densely wrapped with thin insulated wire and cast in an opaque hardening compound. The resistance of the wire is measured continuously, and the data to be treated as confidential are actively deleted if a suspicious change is detected. If the security layer is physically removed or drilled through, the wire is interrupted. If someone tries to dismantle the secured item chemically, short circuits occur between the wires, because the insulation of the wires is chosen to be more easily chemically soluble than the casting compound. In both cases, the resistance changes clearly.


[Figure 2.1: The five basic functions arranged in concentric layers (upper left) and three possibilities of iterated layers. Labels: delay (e.g. hard materials); detect (e.g. sensors for shocks, pressure); shield; evaluate; delete.]


- Compared to other solutions, they are costly, and their price does not fall as fast as that of "normal" digital components. In particular, they complicate maintenance.

2.1.3 A negative example: chip cards

Chip cards are physically much more secure than standard magnetic stripe cards, but in my opinion they are a bad example with regard to their actual security:

- It is difficult to shield them, because we want them to be thin and flexible. Furthermore, their energy consumption can be measured, because during operation energy is provided externally. In common chip cards, the amount of energy consumed depends on the processed instructions and even on the processed information and keys [Koch 98]. Furthermore, today's chip cards receive their clock pulse from outside, which has made sensational attacks possible [AnKu 96].

This can violate confidentiality (at least between the card holder and the devices the card is put into).


- Usual chip cards are not able to detect encroachments, because they lack a power supply of their own, e.g. a battery.

This can violate confidentiality and, where applicable, the integrity of data processed inside the chip card.

- Even while they are supplied with energy by the reading device, usual chip cards do not delete their sensitive data, not even after an unauthorized encroachment has already been detected.

This could violate confidentiality of data processed inside the chip card.

2.1.4 Useful physical security measures

For the reasons named in §2.1.2, physical security measures should be employed as sparingly as possible:

A broad correspondence between organizational and information-technological structures should be achieved. The ideal case would be that everybody processes and secures his own data, and others can get access only with the owner's permission.

This means that, as far as possible, we want to avoid devices that have to be secure against their users.

Despite all this technology, it is prudent and probably necessary to assume that data to which nobody has physical access exist only in outer space. Fortunately, it is possible to achieve security (confidentiality, integrity, availability; according to the specific use case, e.g. the legally required protection of personal data from unauthorized access, etc.) even under this defensive assumption.

2.2 Protection of isolated computers from unauthorized access and computer viruses

"Logical" protection presumes that physical and organizational protection, as discussed in the previous section, is given.

First we will describe how humans and IT systems can be identified2. This is the basis for denying unauthorized users the services of the IT system (admission control) and for granting authorized users access only to those resources they are meant to use (access control). At the end of this chapter we will see how to restrict the threat of "computer viruses" to that of "transitive Trojan horses".

2 Some authors distinguish between identification (for them: the assertion of an identity) and authentication (the verification of an identity). We understand identification globally, i.e. as the assertion and verification of an identity, because in our setting both always occur together. We apply the term authentication to messages, e.g. "Does a specific message really come from the claimed originator?"


2.2.1 Identification

An IT system is able to recognize a human by what he is3, what he has, or what he knows, cf. Figure 2.2. The same criteria apply to the recognition of humans by humans, although there hand geometry, retina patterns, magnetic stripe cards, chip cards, and calculators do not play an important role.

Of course, combinations are possible and useful. For example, a passport combines "what you are" (appearance through a picture and partially unconscious actions through a handwritten signature) with "what you have": the passport itself.

[Figure 2.2: Identification of humans by IT systems.
What you are: hand geometry, fingerprint, look, handwritten signature, iris code, voice.
What you own: paper document, key (metal), swipe card, chip card, calculator.
What you know: password, answers to questions, results of calculations on numbers.]

Figure 2.2 does not give a complete list:

On the one hand, the given examples can be subdivided further. For handwritten signatures, we could check the result of the signing process by comparing two static line patterns, or analyze the dynamics of the signing process by comparing pressure and acceleration values with sample signatures.

On the other hand, there are further examples, such as the analysis of typing behavior for keyboard input or of the writing style for pen input. Both are representatives of the class "what you are".

For each measurement you have to decide whether it is performed:

• just at the beginning

3 The technique of determining what you are is called biometrics. An overview of performance and costs of biometrics is given in [DaPr 89], [DuD3 99], [Eber 98], [FMSS 96], [Mill3 94].


• additionally at fixed time intervals or

• permanently.

For instance, for a metal key and for the analysis of typing behavior, permanent checking is suitable (the key remains in the lock, and the analysis of keyboard input can take place in the background), whereas you cannot demand of anybody to keep his hand fixed on the scanner for the permanent registration of the hand geometry.

Humans can recognize an IT system by what it is, what it knows, and where it is located, cf. Figure 2.3. The following example shows that it is not enough for the IT system to be capable of recognizing the user; the user himself must be capable of identifying the IT system. The location alone can be insufficient, as the following incident shows:

During the night from Friday to Saturday, a dummy cash machine was installed in front of a real one, covering it. The dummy asks for the PIN of each inserted EC cash card and stores it. The system replies on the display: "Your faulty EC cash card will be impounded; please ask your credit institute for help on Monday!" In the meantime, money could be withdrawn from a real cash machine using the impounded cash cards and the stored personal identification numbers (PINs).

[Figure 2.3: Identification of IT systems by humans.
What it is: case, seal, hologram, soiling.
What it knows: password, answers to questions, results of calculations on numbers.
Where it is located.]

An IT system can recognize another IT system by what it knows and by where the wire comes from, cf. Figure 2.4. The latter may offer a sufficiently effective identification for components on the same printed circuit board, but it is completely insufficient in open communication networks.


It is important that the communication between IT systems is not restricted to comparatively insecure mechanisms such as passwords, question-answer dialogues, and challenge-response schemes of the kind "a number is sent, a calculation result is expected". Cryptographic protocols of high complexity and security must be applicable. Here is an example:

IT system 1 generates a random number, sends it to IT system 2, and expects back the ciphertext of that number under a symmetric block cipher4 for which only the two systems know the key. If IT system 1 receives the expected ciphertext, it has identified IT system 2. In the same way, IT system 2 can identify IT system 1 (exercise 3-18a gives hints on attacks which cannot be warded off this way).
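
A minimal sketch of this challenge-response identification follows. For self-containedness, a keyed function built from SHA-256 stands in for the symmetric block cipher assumed in the text; a real implementation would use an actual block cipher with the shared secret key:

    import hashlib, os

    def block_encrypt(key, block):
        """Stand-in for a symmetric block cipher: a keyed function via SHA-256."""
        return hashlib.sha256(key + block).digest()

    key = os.urandom(16)        # secret key known only to IT systems 1 and 2

    # IT system 1: generate and send a random challenge.
    challenge = os.urandom(16)

    # IT system 2: answer with the "encrypted" challenge.
    response = block_encrypt(key, challenge)

    # IT system 1: whoever produced the expected ciphertext must know the key.
    assert response == block_encrypt(key, challenge)
    print("IT system 2 identified")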

[Figure 2.4: Identification of IT systems by IT systems.
What it knows: password, answers to questions, results of calculations on numbers, cryptography.
Where the wire comes from.]

2.2.2 Admission control

Admission control describes the behavior of an IT system that requests the identity of its communication partners and continues the communication only with authorized ones. Acting like this, it impedes the unauthorized usage of its resources, and maybe even much worse things.

It is remarkable that for many services this identity need not be bound to natural persons; rather, so-called pseudonyms can be used instead [Chau 87, PWP 90]. How this is possible will be covered in §6.

It should be mentioned that identification, too, is a symmetric problem: It is in the interest of the human as well not to waste his time on fake IT systems, nor to tell them his secrets.

4 Over a given alphabet, a block cipher encrypts and decrypts blocks of fixed length, whereas a stream cipher can handle strings of arbitrary length, cf. §3. Block ciphers normally have keys of fixed length (e.g. 128 bits long), which are used to encrypt and decrypt millions of blocks, where even an adaptive active attacker should not manage to break the system.


2.2.3 Access control

Access control describes the behavior of an IT system that does not grant everything to authorized users: Each subject (human, IT system, process) holds only specific rights to perform operations on objects (processes, data, peripheral devices, etc.). A preferably small and well-delimited part of the IT system checks, before the execution of each operation, whether the invoker has the necessary rights. This part of the system is called the access monitor, cf. Figure 2.5. The access monitor memorizes the rights conferred on it, as well as the implicitly arising ones, and has to be able to detect their revocation.

The allocation of rights itself is usually called authorization. It must certainly be protected at least as well as the actual access to the objects. Because the rights to be granted often cannot be known in advance, or because one does not want to work out a formal specification, rights tend to be granted too generously. Therefore the access monitor is very often also used to log all operations, or an appropriate subset of them. If a data security officer possesses suitable programs to evaluate the log files, one can hope that he is able to detect violations of the rights defined in the world of the IT system, and to induce consequences if applicable.

[Figure 2.5: Access control by an access monitor. User processes reach data and programs only via the access monitor, which checks authorization and logs originator and operations.]
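
A toy access monitor can be sketched in a few lines (rights table and log format invented for illustration). Every operation passes through check() before execution, and every decision is logged for later evaluation by the data security officer:

    # Explicitly granted rights: (subject, operation, object) triples.
    rights = {("alice", "read", "patient_db"), ("alice", "write", "notes")}
    log = []

    def check(subject, operation, obj):
        """Access monitor: allow only explicitly granted operations, log everything."""
        allowed = (subject, operation, obj) in rights
        log.append((subject, operation, obj, "granted" if allowed else "denied"))
        return allowed

    if check("alice", "write", "patient_db"):   # not granted: denied and logged
        pass                                    # the operation would run here
    print(log)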

Two facts have to be emphasized:

Processes, as representatives of acting humans or IT systems, are monitored by the access monitor in their access to programs and data (including output). It is not the human accessing programs, data, or devices who is monitored directly.

Running on a concrete operating system machine, user processes do not always do what is wanted or what they seem to do, because they can contain Trojan horses. Therefore the monitor must gather its information at the right places: It is located between user and user process on the one side, and the data and programs, which may be owned by others, on the other side. This will be explained in more detail by means of "computer viruses" in the following section.


If we demand that access control help not only with confidentiality and integrity but also with availability, it is not enough for the access monitor to check merely the presence of rights; it also has to watch the frequency of their usage and similar restrictions. In general: To achieve availability, admission control is not enough. The usage of resources has to be monitored as well, certainly including the usage of the objects subject to admission control.
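
Watching the frequency of use, as demanded above, can be sketched with a simple token-bucket limiter; the parameters are invented for illustration:

    import time

    class RateLimiter:
        """Grant an operation only while the subject stays below a usage rate."""
        def __init__(self, per_second):
            self.capacity = per_second
            self.tokens = float(per_second)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.capacity)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False   # denied: preserves availability for the other users

    limiter = RateLimiter(per_second=10)
    print(sum(limiter.allow() for _ in range(1000)))   # only about 10 succeed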

2.2.4 Limitation of the threat of "computer viruses" to that of "transitive Trojan horses"

In the public discussion, "computer viruses" are special Trojan horses which are categorized by their spreading mechanism:

If an executed program which contains a computer virus has write access to another program, it can alter that program in such a way that, when executed, a copy of the virus (possibly changed) is executed together with it.

Computer viruses can spread in the transitive closure of (mostly generously granted)write access rights.
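
This spreading can be pictured as reachability in the directed graph of write permissions. The following sketch (program names hypothetical) computes the set of programs that a virus starting in one program can eventually infect, i.e. the transitive closure mentioned above:

    # writes[p] = programs that p may write to (and thus could infect when executed).
    writes = {
        "game":     {"editor"},
        "editor":   {"compiler", "notes"},
        "compiler": {"editor"},
        "notes":    set(),
    }

    def infectable(start):
        """Transitive closure of write access reachable from 'start'."""
        seen, stack = set(), [start]
        while stack:
            p = stack.pop()
            for q in writes.get(p, ()):
                if q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

    print(infectable("game"))   # {'editor', 'compiler', 'notes'}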

General security measures against the propagation of computer viruses were already known before viruses were invented [Denn 82] (page 317f):

Protection can be achieved by limiting the access rights of a program during execution to the necessary minimum (principle of least privilege).

To protect a program from unnoticed manipulation outside the area of physical security, all programs are digitally signed by their creator, cf. §3 and [ClPf 91, Cles 88]. Before a program is executed, this digital signature has to be checked to ensure that the program has not been changed unauthorized.
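
With a modern library, the check described above can be sketched as follows. This is a minimal sketch using Ed25519 signatures from the Python 'cryptography' package; in the scheme of the text, the creator signs the program once and the loader verifies the signature before every execution:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Creator: sign the program once and distribute (program, signature).
    creator_key = Ed25519PrivateKey.generate()
    program = b"print('hello')"
    signature = creator_key.sign(program)

    # Loader: verify against the creator's public key before executing.
    public_key = creator_key.public_key()
    try:
        public_key.verify(signature, program)   # raises on any modification
        exec(program)                           # run only correctly signed programs
    except InvalidSignature:
        print("program was modified: refusing to execute")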

Using these security measures, the threat of computer viruses can be limited to that of transitive Trojan horses; seen this way, computer viruses are not a research topic for computer science. Unfortunately, these general security measures have not been put into practice, on the one hand out of intellectual laziness, and to a smaller part because of the costs they entail. As a result we have a fatal problem: a huge number of computers is largely unprotected against computer viruses.

Annotation with explanation: It is foreseeable that all attempts to cope with computer viruses without enforcing the principle of least privilege will fail:

1. Lemma: It is not decidable whether or not a program contains a computer virus.
Proof (indirect, cf. [Cohe 84, Cohe 87]; see also [Cohe1 89, Cohe2 92]): Assume we have a procedure decide() which returns TRUE for each program that is a computer virus5 and FALSE otherwise. Then we get a false result for the following program:

5 Here (as in many such arguments) we have to pay attention to the exact wording of the definition and of the statement. The above definition of computer viruses says (emphasis on "arbitrary"): If an executed program which contains a computer virus has write access to an arbitrary program, it can alter that program in such a way that, when executed, a copy of the virus (possibly changed) is executed together with it. Hence we do not ask whether a program contains a virus definition, but whether it is able to execute one. Only for this question does the program above provide a counterexample.


[Figure 2.6: Computer virus, transitive Trojan horse, and restriction of the security threats caused by computer viruses to those caused by transitive Trojan horses. Left: a computer virus spreads from program 1 to program 2 via an unnecessary write access permission (e.g. for a game program). Right: a transitive Trojan horse uses a necessary write access permission (e.g. for a compiler or editor). Restricting attack distribution by least possible privileges, i.e. no unnecessary access permissions, leaves no computer viruses, only transitive Trojan horses.]

program showOpposite;
  if decide(showOpposite) then doNotExecuteVirusDefinition
  else executeVirusDefinition;

Hence, no such procedure decide() can exist.

2. Statement: Because computer viruses are special Trojan horses, it follows:


It is not decidable if a program is a computer virus or not.

Therefore, in general it is not possible to detect computer viruses or Trojan horses. There is only one way to protect yourself: Restrict the access rights of a program during execution, and/or do not execute suspicious programs. In short: Instead of attempting exact detection, treat even a seemingly good-natured program as potentially dangerous!

Unfortunately, two further, more modest aims we would like to achieve (the detection of known computer viruses, and the determination of the damage done to data after a virus attack has been identified) are not reachable either:

3. Statement: Even for known computer viruses there exists no effective detection mechanism!
Explanation: Computer viruses can modify themselves using events which occur during runtime and which are therefore unknown to the detection mechanism. The detection mechanism would have to analyze all possible self-modifications of the computer virus. Given intelligent programming of the known virus, this is not feasible. If one settles for less and searches for short patterns using a virus scanner, hardly any program remains that is not suspected of containing a computer virus. First (not yet really good) tools of this kind are circulating among the relevant people, e.g. the Mutation Engine [Adam2 92, DuDR2 92].

4. Statement: Even for known Trojan horses there exists no effective detection mechanism!
Explanation: the same as in 3.

5. Statement: It is not possible to determine with certainty whether, after a computer has been infected by a computer virus, data have been changed and/or passed on without authorization.
Explanation: After executing the so-called damage function, the computer virus can modify the damage function as well as the modification mechanism.

2.2.5 Remaining problems

1. You have to specify what an IT system has to do and what it has to refrain from doing. The latter is often forgotten.

2. Afterwards, the total correctness of the implementation has to be proved. Due to complexity, this is often not feasible.

3. Even if you could achieve both, you still have the problem that you do not know whether you have been able to detect all covert channels.

The first and the third problem cannot be solved with certainty, because it is always possible to overlook something. The second problem is not solved satisfactorily yet, either. Therefore it is recommended:


IT systems should generally be designed and implemented as distributed systems in such a way that attackers who control a limited number of computers (be it as their supplier or as the user of a universal Trojan horse) are not capable of doing serious harm.

Recommended readings

Physical security measures: [Chau 84, Clar 88, Wein 87]

Identification: [DaPr 89, §7] [Mill3 94, DuD3 99]

Admission control: [Chau 87, PWP 90]

Access control: [Denn 82, ZSI 89, ZSI 90]

Actual research literature:

Computers & Security, Elsevier

Conference series: IEEE Symposium on Security and Privacy, ESORICS, IFIP/Sec, VIS; proceedings published by IEEE, in the series LNCS by Springer-Verlag Heidelberg, by Chapman & Hall London, and in the series "Datenschutz Fachberichte" by Vieweg-Verlag Wiesbaden

References to current research literature and comments on recent developments:

http://www.ieee-security.org/cipher.html, also as a newsletter via [email protected] (with subject line "subscribe")

Datenschutz und Datensicherheit DuD, Vieweg-Verlag


3 Cryptology Basics

If an IT system cannot (or shall not1) be completely physically protected, then cryptography is the efficient aid to accomplish confidentiality and integrity nevertheless. But cryptography does not contribute to availability (or at best only indirectly).

If efficiency is not overly important, then steganography offers an even further-reaching, but by far not comparably well researched, aid (see §4).

Notionally, cryptology, i.e. the science (in the Greek meaning of the word: the wisdom) of secret writing, can be divided into cryptography and cryptanalysis: Cryptography describes how cryptographic systems work. Cryptanalysis tries to analyze them, that is, to break them.

3.1 Systematics

We differentiate the most important cryptographic systems here by three criteria:

1. The purpose.

Here we consider mainly

• concelation systems, i.e. systems to ensure the confidentiality of messages (often called encryption systems or simply cryptosystems in the literature). These systems serve the protection goal confidentiality.

• authentication systems, i.e. systems to protect messages from unnoticed alteration, similar to error-detecting codes, but directed against malicious errors. These systems serve the protection goal integrity. A particularly important special case are

– digital signature systems, which ensure that a receiver of a correctly "coded" message can not only be sure that the message came from the alleged sender, but can also prove this to an outsider, e.g. a court.

1 This is especially the case for computer networks: Let us assume that, to accomplish physical protection, we need one guard every 100 m to keep an eye on the line. Let us further assume that we want security round the clock, so we need four-shift operation. Let us finally assume that guards work for the very low gross wage of 25,000 Euro per year. Then the physical protection costs 500,000 Euro per year and kilometer. So working through this chapter and purchasing a few cryptographic devices (or even only software) pays off quickly. You can of course compute the cost reduction gained by replacing the human guards with dogs, or by replacing continuous watching with physical obstacles such as laying the cable underground (and then only hourly visual inspection), but however much you compute, you will never get as economically priced as cryptography!


Alongside these, there are e.g. pseudo-random bit-sequence generators (see §3.4) and collision-resistant hash functions (see §3.6.3.3 and §3.8.3) [Damg 88], and much more.

2. The necessary key-distribution.

This criterion mainly applies to cryptographic systems that have two kinds of parties, sender and receiver (such as concelation and authentication systems). Here one distinguishes between symmetric and asymmetric systems, depending on whether both parties use the same key or different keys (see below).

3. The security-level.

The main differentiation here is between information-theoretical and cryptographic security. The former is absolute (insofar as one considers only the cryptographic system itself, i.e. not, e.g., key theft). The latter is weaker, but can be reasonably subdivided (see below).

Two limitations of concelation systems should be mentioned at the outset:

• They only protect the confidentiality of the content of the message; they do not protect the circumstances of communication, for instance who communicates when, from where, and with whom. If the circumstances of communication also need to be protected, further protection measures are necessary (see §5).

• Also, often the message content is not protected completely; some attributes of the content, for instance its length, are simply ignored2. This becomes especially obvious in the proof of the perfect security of the Vernam cipher (see exercise 3-7b).

What might have caused these widespread failures (or, euphemistically: inaccuracies), unnoticed by most specialists? Maybe the following procedure:

On the highest design level, the requirement definition, the following was determined: Messages (atomic objects on this level) need to remain confidential. Later, the atomic type "message" gets refined on lower levels: by deciding on the representation of messages, attributes such as the length of messages are implicitly introduced. Eventually, on the lower level, the requirement "confidentiality of messages" is attached only to the corresponding main attribute; implicitly introduced auxiliary attributes, e.g. "length", either do not come to the fore or are considered unimportant.

What do we learn from that? All refined, including all implicitly introduced, attributes of an object that needs to remain confidential also need to remain confidential.3 Thus,

2 If some attributes of the content of a message are ignored, one needs to pay attention that those attributes do not give away "too much" about the content of the message. A drastic example would be unary-coded message content together with the unprotected attribute message length: Then the message length gives away everything about the message content, so that encryption, for instance with the Vernam cipher, would be useless. (Unary coding means: There is only one code symbol; for example, 17 would be represented by writing this code symbol 17 times.)

3 This also holds for derived attributes of other objects, as the following two examples show:


one confidentiality requirement becomes several in the course of a refinement. If this seems too costly to the designer (i.e. the refiner), then he needs to ask the originators of the requirement definition whether they accept the non-implementation of the confidentiality requirement for some attributes. However, this sounds much easier than it probably will be, since the designer first needs to explain his (refined) world to the originators of the requirement definition; and normally the originators of the requirement definition did not want to occupy themselves with this "world".

The most important cryptographic systems considered in the following consist of three algorithms (e.g. key generation, encryption, and decryption in figure 3.1), all of which may be publicly known. In the following, the functioning of the different systems is illustrated together with the required key generation and distribution.

3.1.1 Cryptographic systems and their key-distribution

First an overview of symmetric and asymmetric concelation systems is given, subsequently an overview of symmetric and asymmetric authentication systems.

The following basic structure underlies all four figures (see figure 3.1): On the left and on the right resides the trusted zone of the respective participant4. In the lower middle lies the zone of attack, i.e. the region where attacks shall be warded off by means of the cryptographic system, since they are not prevented (or at least hampered) there by physical and/or organizational measures.

1. For the in 3.2 described Vernam-Chiffre, the presentation of the cipher-text, that the attackerfinds out, is not allowed to depend on the composition of the modulo-sums. So for the binaryVernam-Chiffre 0 ⊕ 1 needs to look exactly like 1 ⊕ 0 concerning all properties – analog signal-levels, timing etc. (same applies on 1 ⊕ 1 and 0 ⊕ 0). In practice this is easily achievable, forinstance by temporarily storing the result before outputting it – in a software-implementationthis happens automatically in every computer. But in an hardware-implementation one needs toremember this.

2. Also, timing is crucial for software-implementations of cryptographic systems – e. g. the execution-time of cryptographic operations must not depend on the value of the secret key. This can be achieved – while programming cryptographic software – by making sure that the value (or even parts of the value) of the secret key does not appear in any jump-condition. Alternatively, it can be ensured via time-queries that all cryptographic operations take the same time.

Both examples have been known for a long time – but seemingly they are not consistently heeded in practice. Otherwise, the publication by P. Kocher about timing attacks would not have met with such a broad response. More about these attacks can be found in [EnHa 96].
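To illustrate example 2 concretely, here is a minimal sketch (Python is used for all code sketches in the following; hmac.compare_digest is a standard-library comparison whose running time does not depend on the compared contents):

    import hmac

    def naive_equal(a: bytes, b: bytes) -> bool:
        # returns as soon as a byte differs: the running time depends on the
        # secret data, which is exactly the data-dependent timing warned about
        for x, y in zip(a, b):
            if x != y:
                return False
        return len(a) == len(b)

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        # compares in time independent of the contents of a and b
        return hmac.compare_digest(a, b)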

4This confidence level needs to be assured by measures other than cryptographic ones; in particular, it needs to be delimited (and if necessary defended) by physical means and organizational measures. Within the confidence level, only things authorized by the respective participant should happen. And only information whose leaving is authorized by the respective participant should leave the confidence level. This means that the confidence level needs to be shielded (see 2.1): no analyzable electromagnetic radiation may leave it, and its externally observable behavior – for instance its energy consumption – may not depend on the things that happen within it. If this is not respected, then, for instance, the varying energy consumption of individual transistors can be used to obtain secret keys [Koch 98]. Hence, confidence levels should be decoupled as much as possible from their environment. A confidence level is generally realized by an end device that is (hopefully) trustworthy for the respective participant [PPSW 95, PPSW 97].


Figure 3.1: Basic scheme of cryptographical systems shown by the example of a symmetrical concelation system (key generation from a random number; secret key k; encryption k(x); decryption x = k⁻¹(k(x)); plaintext x and ciphertext; secret zone; trusted zones of the two participants to the left and right; zone of attack in between)

3.1.1.1 Conzelation-systems, overview

The oldest and best known type of cryptographic systems is the symmetric conzelation-system (see figure 3.2).

Figure 3.2: Symmetrical concelation system (black box with a lock; two identical keys)

Annotation: The notation k(x) is just an abbreviation, because the k that is generated by the key-generation is normally not a whole algorithm. It is just a key, i. e. a parameter that is put into fixed en- and decryption-algorithms, e. g. en and de (hence, in reality, en and de would be programmed or implemented in hardware once and for all). Then the cipher-text is S := en(k,x) and the decrypted text is de(k,S) = de(k,en(k,x)) = x.
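As a minimal sketch of this interface (a toy illustration only – the XOR ”cipher” below is not secure; the names gen, en, de merely follow the notation above):

    import secrets

    def gen(l: int = 16) -> bytes:
        # key generation: the key is just a parameter, not an algorithm
        return secrets.token_bytes(l)

    def en(k: bytes, x: bytes) -> bytes:
        # toy "encryption": XOR with the repeated key (insecure, illustration only)
        return bytes(b ^ k[i % len(k)] for i, b in enumerate(x))

    def de(k: bytes, S: bytes) -> bytes:
        # decryption: for XOR, de is identical to en, so de(k, en(k, x)) = x
        return en(k, S)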

The problem with symmetric conzelation-systems is that if a message needs to be encrypted to be sent over an insecure net, the key k first needs to be present at both the sender and the receiver.

If the sender and the receiver have met each other in real life earlier, they could have exchanged k on that occasion.

But if one just takes the net-address of the other (e. g. of a service-provider in an open system) from a directory and both want to communicate confidentially right away, it becomes difficult. In practice, one usually assumes the existence of a trustworthy ”key-distribution-center” C to solve this problem. Every participant A exchanges a key kAC with C upon registration in the open system. If A now wants to communicate with B and does not yet have a common key with B, then A requests a key from C. C generates a key k and sends it to both A and B, namely encrypted with kAC to A and encrypted with kBC to B. From then on, A and B can use the key k to send messages in both directions. Of course, the confidentiality achieved is not very high: apart from A and B, C can also decrypt all messages (and it is even very likely that C can read all messages, as the operator of the key-distribution-center and the network operator are often identical).

One can improve this by utilizing several key-distribution-centers in such a way that they can only read the messages if they all cooperate (see figure 3.3). Again, every participant A initially exchanges a key kAC with every center C. If A wants to communicate with B, then A asks every center to generate a fragment-key. Those centers send the fragment-keys ki confidentially to both A and B. A and B use the sum k of all ki (in an appropriate group) as the actual key. To finally get an e. g. 128 bit long key, every ki needs to have length 128 bit, and k could be the bit-by-bit XOR of all ki (or also the sum modulo 2^128). As long as at least one of the centers keeps its fragment-key ki secret, the other centers have no information about k, as by adding an appropriate ki, every value of k can still be reached (this means that, from the view of the other centers, k is still perfectly encrypted with a Vernam-Chiffre, see 3.2).
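A minimal sketch of this fragment-key combination (the group operation chosen here is the bit-by-bit XOR mentioned above; three centers are assumed for illustration):

    import secrets
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # every center contributes one fragment-key k_i of full length
    fragments = [secrets.token_bytes(16) for _ in range(3)]   # centers X, Y, Z

    # A and B use the bit-by-bit XOR of all k_i as the actual key k
    k = reduce(xor, fragments)

    # as long as one fragment stays secret, the other centers learn nothing:
    # from their view, k is perfectly encrypted with the missing fragment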

Asymmetric conzelation-systems were invented to simplify this key-distribution(see figure 3.4).

Here, different keys c and d are used for en- and decryption, and only d needs to be kept secret.

Annotation 1: Again, the notations c(x) and d(c(x)) are just abbreviations. If en and de are again en- and decryption-algorithms, then c and d are actually input-parameters for these algorithms. The cipher-text is S := en(c,x) and the decrypted text is de(d,S) = de(d,en(c,x)) = x.

Annotation 2: The random number in the lower left of figure 3.4 has the following meaning: let us assume the encryption were deterministic. If an attacker then


Figure 3.3: Key distribution for symmetrical concelation systems (participants A and B obtain fragment-keys k1, k2, k3 from the key distribution centers X, Y, Z as kAX(k1), kBX(k1), kAY(k2), kBY(k2), kAZ(k3), kBZ(k3); the actual participant key is k = k1 + k2 + k3, used as k(messages))

sees a cipher-text S and knows some plain-texts xi (of which probably one is the correct plain-text for S), then he can check this himself by computing all c(xi) and comparing them to S (this is certainly hardly possible for long private letters, but it is possible for standard steps of protocols, e. g. for payment transactions). For this reason, the encryption must be made indeterministic, i. e. S must also depend on a random number that cannot be guessed by the attacker (this is also known as probabilistic encryption).

This random number can be understood as part of the plain-text. Because the decryption-algorithm recovers the complete plain-text, it also obtains the contained random number, but does not pass it to the outside.
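A small sketch of why deterministic encryption fails here (toy_enc and the candidate messages are purely hypothetical placeholders for any deterministic public-key encryption):

    # toy deterministic "encryption": XOR every byte with the public key c (an int)
    def toy_enc(c: int, x: bytes) -> bytes:
        return bytes(b ^ c for b in x)

    def guess_plaintext(enc, c, S, candidates):
        # the attacker runs the public operation himself and compares
        for x in candidates:
            if enc(c, x) == S:
                return x
        return None

    S = toy_enc(5, b"NO")                                   # observed cipher-text
    print(guess_plaintext(toy_enc, 5, S, [b"YES", b"NO"]))  # b'NO'
    # Remedy: probabilistic encryption, i.e. enc(c, x, r) with a fresh random r
    # that the attacker cannot guess; re-encrypting a candidate then yields a
    # different cipher-text, and the comparison above fails.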

To assure that c really does not need to be kept secret, it must not be possible to derive d from c with reasonable effort (this will be specified more exactly later in the security concepts and in 3.6).

Now, every user A can generate his own pair of keys (cA, dA) himself and never needs to communicate dA to anyone else. Furthermore, cA just needs to be distributed in such a way that every other user B who wants to send a confidential message to A has cA or can find it. If A and B previously knew each other, they can exchange cA privately; B can then write cA openly into his address-book. Also, users can pass cA on to other users.

For connections with unknown persons, cA could also be listed in the same directory in which B looks up the net-address of A. If no such directory exists outside of the net, then a key-register inside the net can take the place of the directory (see figure 3.5).

Be aware that, in the case of figure 3.5, the key-register R (or its operator, respectively) cannot decipher the message. However, there is a changing attack-possibility for R (see


Figure 3.4: Asymmetrical concelation system (black box with a spring lock; there is only one key: everyone can put things in, only the key owner can take them out. Here, the analogy between an asymmetrical concelation system and a black box with a spring lock reaches its limits once the lock has clicked shut: after that, nothing can be put into the box without the assistance of the key owner, whereas with an asymmetrical concelation system this is always possible, i. e. after encryption of one message, additional messages can be encrypted. So an analogy with a black box with a lock, for which only one key exists, and where the box has a slot for putting things in, would be better. The slot must be properly designed, i. e. it must be impossible to take things out through the slot, and one must not be able to see the box contents through the slot. But this analogy needs one additional mechanical security feature)

exercise 3-4c).

In order that participant B can be sure that cA really belongs to participant A, the public key-register does not simply authenticate cA with its signature, but authenticates the correlation between A and cA. Otherwise, an attacker could intercept the response message (step 3 in figure 3.5) from the key-register and send another ”authenticated” key to B, for example a key for which the attacker knows the corresponding decryption-key.

3.1.1.2 Authentication-systems, overview

A symmetric authentication-system is illustrated in figure 3.6.

So here the message is not made unrecognizable by the left cryptographic algorithm in figure 3.6; instead, a verification-part is appended to the message. This verification-part is often called MAC. The receiver can also compute the correct MAC (using x) and check whether the MAC that came with the message is the same.

Annotation: Again, the notation k(x) is just an abbreviation. If the cryptographic algorithm is called code, then MAC := code(k,x), and the receiver of a message (x, MAC) tests whether MAC = code(k,x).
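As an illustration, a minimal sketch that instantiates code with HMAC-SHA256 (HMAC is merely an assumed stand-in here; the script does not prescribe a concrete algorithm):

    import hmac, hashlib, secrets

    k = secrets.token_bytes(32)                   # secret key, exchanged beforehand

    def code(k: bytes, x: bytes) -> bytes:
        # MAC := code(k, x)
        return hmac.new(k, x, hashlib.sha256).digest()

    def verify(k: bytes, x: bytes, mac: bytes) -> bool:
        # the receiver recomputes the MAC and compares in constant time
        return hmac.compare_digest(code(k, x), mac)

    x = b"transfer 100 EUR to account 42"
    mac = code(k, x)
    assert verify(k, x, mac)                      # "ok"
    assert not verify(k, x + b"0", mac)           # "false": message was altered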

The key-distribution can be handled in the same way as the key-distribution for symmetric conzelation-systems.


Figure 3.5: Key distribution for asymmetrical concelation systems by a public key-register R (1. A registers his public encryption key cA, optionally anonymously; 2. B requests A's public encryption key from the key-register R; 3. B receives cA, A's public encryption key, certified by R's signature; B then sends cA(message to A))

Also, the analogous problems exist: e. g. a single key-distribution-center could now foist faked messages.

Figure 3.6: Symmetrical authentication system (glass box with a lock; there are two identical keys for putting things in. The sender generates MAC := k(x) and sends text and MAC; the receiver verifies MAC ?= k(x), with result ”ok” or ”false”)

Asymmetric authentication-systems, also called digital signature-systems, simplify the key-distribution analogously to asymmetric conzelation-systems (see figures 3.7 and 3.8).

But their main advantage is a different one: namely, that the receiver B of a signed message from A can now prove to anyone else (who also knows A's key tA) that this


message came from A. This is not possible with a symmetric authentication-system, even if the key-distribution-center confirmed to a court, say, which keys A and B had: in a symmetric authentication-system, B could just as well have created the MAC himself. But in digital signature-systems, A is the only one who can create the signature.

Therefore, digital signature-systems are indispensable when legally relevant matters need to be handled digitally in a secure manner, e. g. in digital payment systems. There, they take over the role of the handwritten signature in today's legal transactions (hence, of course, the name). In the following, the notions ”autograph” and ”signature” are used largely synonymously.

The just-described main advantage of digital signature-systems concerning integrity is always opposed by a disadvantage concerning confidentiality: the receiver of a digital signature can show the message and the signature around – and if the key to test the signature is public, everyone can test the validity of the signature. Depending on the content of the message, this can be much more unpleasant for the signer than if the message could merely be shown around behind his back – anybody can make up arbitrary messages and spread them as non-verifiable rumors. Such verification of the integrity of a message behind the sender's back is not possible with symmetric authentication-systems: there, the receiver of the message could have created the MAC as well, so that third persons cannot verify the authenticity of the message. Please note that there exist authentication-systems that combine both advantages: with undeniable signatures, the assistance of the signer is necessary to verify the signature, so that verification behind his back is impossible. On the other hand, it then needs to be stated under which circumstances the (professed) signer is obliged to actively assist in the verification of ”his” signature. For if he is not, the receiver of the signed message could (in case of a conflict with the sender) not take any advantage of the fact that the message is signed, because he could not convince anyone of this fact (more information about this type of authentication-systems can be found in 3.9.4).

Annotation 1: In detailed notation, the signing- and testing-algorithms could be called sign and test. The signature is then Sig := sign(s,x), and the receiver of a message (x, Sig) computes test(t,x,Sig) ∈ {true, false}.
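A minimal sketch of the sign/test interface, instantiated – purely as an assumption for illustration – with Ed25519 from the third-party cryptography package:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    s = Ed25519PrivateKey.generate()      # secret signing key s
    t = s.public_key()                    # public testing key t

    x = b"I, A, owe B 10 EUR"
    Sig = s.sign(x)                       # Sig := sign(s, x)

    def test(t, x, Sig) -> bool:          # test(t, x, Sig) in {true, false}
        try:
            t.verify(Sig, x)
            return True
        except InvalidSignature:
            return False

    assert test(t, x, Sig)                # anyone who knows t can verify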

Annotation 2: The random number in the lower right of figure 3.7 is chosen independently of the random number above (why this random number is necessary will not become clear until 3.5).

Please be aware that, in contrast to symmetric authentication, a separate testing-algorithm is needed here: there, the receiver has the key k, with which he can compute the correct value MAC from x; but here, the receiver does not have s.

Further, be aware that exchanging the testing-key privately with a communication-partner only suffices if one trusts this partner, i. e. if one just uses the signature-system as a convenient form of mutual authentication. But if


Figure 3.7: Digital signature system (glass box with a lock; there is only one key for putting things in. Key generation from a random number yields the secret key s for signature creation and the public key t for signature verification; the sender signs x as x, s(x); the receiver verifies and obtains ”ok” or ”false”)

Figure 3.8: Key distribution for digital signature systems by a public key-register R (1. A registers his key for signature verification tA, optionally anonymously; 2. B requests A's key for signature verification from the key-register R; 3. B receives tA, A's key for signature verification, certified by R's signature; A then sends: message from A, sA(message from A))

one wants to be sure that a signature will (if necessary) be accepted by a court, then one needs to make sure that one has the correct testing-key, i. e. at least ask a key-register to confirm this.

Again, a single key-register would have opportunities for changing attacks (compare exercise 3-4c).

The attestation of the public testing-key refers – as with the public decryption-keys – not to the key alone, but to the correlation between the key and the participant. If we ignore other information, such as the time-stamp (compare exercise 3-4f), then the attestation from the key-register R for tA (step 3 in figure 3.8) looks as follows:


A, tA, sR(A, tA). This attestation of the public key is called certification of the public key; the instance that executes it is called certification authority (CA).

Recapitulating, there is one more requirement for the key-distribution here than for asymmetric conzelation-systems: it must be consistent. This is a common goal of several receivers who do not trust the certification authority concerning the fulfillment of this goal: even if the certification authority cheats, all receivers need to get the same value t. Imagine a receiver would check with a key t, but the court would check with an entirely different key t' later on! Furthermore, the integrity of the distribution must be assured, i. e. if the certification authority does not cheat, everybody receives the certified t.

The consistency of the key-distribution should be supported by the fact that the user who wants a public key to be certified proves knowledge of the associated secret key. Otherwise, an attacker could present foreign public keys to the certification authority and have them certified as his own public keys. Though this would not allow him to execute the secret operation, the certification of the same public key for several different participants could cause confusion and disorientation. An elegant possibility to prevent all this for digital signature-systems is that, in the example, A does not present tA for certification directly, but first certifies in person the correlation between his name A and his public key tA using the secret key sA. So he presents A, tA, sA(A, tA) to the certification authority and receives as attestation A, tA, sA(A, tA), sR(A, tA, sA(A, tA)). This is often called the key-certificate.
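A sketch of this nested key-certificate structure (again assuming Ed25519 keys for sA and sR; the plain byte concatenation is a simplification of a real certificate format):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    sA = Ed25519PrivateKey.generate()             # A's secret signing key
    sR = Ed25519PrivateKey.generate()             # the CA's secret signing key

    name = b"A"
    tA = sA.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    # A first certifies the correlation (A, tA) himself ...
    self_sig = sA.sign(name + tA)                 # sA(A, tA)
    # ... and the CA then certifies A's self-certified application:
    ca_sig = sR.sign(name + tA + self_sig)        # sR(A, tA, sA(A, tA))

    certificate = (name, tA, self_sig, ca_sig)    # the key-certificate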

Recapitulating, the certification authority has the following tasks:

1. Verification of the identity of a participant (How thoroughly this is done is defined by the certification authority in a so-called certification policy. Often this is done by submission of an official identity card).

2. Verification of the application of the participant (How thoroughly this is done is again defined by the certification authority in the certification policy. Often this is done by a written, personally signed contract; if necessary, the autograph of the participant is even notarized).

3. Verification of the uniqueness of the name (that is about to appear in the key-certificate) of the participant.

4. Verification that the participant can apply the associated secret key.

5. Digital signing of the name of the participant and the public key; if necessary, addition of the public (itself again certified) key of the certification authority.

3.1.2 Further annotations to the key-distribution

The important classes of cryptographic systems were presented together with their key-distribution in 3.1.1. Three facets of the key-distribution will be illustrated here, before we discuss the meaning of security in cryptographic systems in 3.1.3.


3.1.2.1 Whom are keys assigned to?

The distinction of three possible answers is interesting here:

1. single participants,

2. pair-relations or

3. groups of participants.

In asymmetric cryptographic systems, pairs of keys are usually assigned to single participants. The secret key (d or s) is then known only to the single participant; the public key (c or t) is potentially known to everybody (see figure 3.9).

In symmetric cryptographic systems, keys are usually assigned to two participants, i. e. to a pair-relation: if we disregard the case (unimportant for the key-distribution) that someone sends messages to himself (see exercise 3-6c), at least two participants always need to know the key in symmetric systems. As symmetric keys need to remain confidential, they should be known to as few persons as possible. From this follows that every symmetric key is known by exactly two participants.

Thus, (except for special circumstances) keys are not assigned to groups, but eitherto single persons or to pair-relations.

Figure 3.9: Goals and key distribution of asymmetrical concelation system, symmetrical cryptographical system and digital signature system

    cryptographic systems               knowledge about the key
    and their goals                     one person          two persons        everyone
    ----------------------------------------------------------------------------------
    asym. concelation system:           decryption key d    -                  encryption key c
    confidentiality                                                            (publicly known)
    sym. cryptographic system:          -                   secret key k       -
    confidentiality and/or integrity                        (both partners)
    digital signature system:           signing key s       -                  verification key t
    integrity                           (generator)                            (publicly known)

3.1.2.2 How many keys need to be exchanged?

If n participants communicate confidentially with each other, they need n pairs of keys if an asymmetric conzelation-system is used. If they additionally want to digitally sign their messages, n further pairs of keys are needed. Hence, altogether 2n pairs of keys are needed (if participants shall be able to stay anonymous and if their public


keys shall not become person-tags (see 6), then every participant does not only need two, but many digital aliases, i. e. public keys).

If n participants use symmetric cryptographic systems and everyone wants protected communication with everyone, they need n·(n−1) keys. It is assumed that keys are assigned to pairs of participants (see 3.1.2.1) and that a separate key is used for every direction of communication (compare the solution of exercise 3-7e). The participants will often exchange two symmetric keys in one step, so that a key-exchange only occurs n·(n−1)/2 times.
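A small sketch of these counts (the numbers for n = 1000 participants follow directly from the formulas above):

    def keys_needed(n: int) -> dict:
        # key material for n participants, everyone with everyone
        return {
            "asymmetric key pairs": 2 * n,                # one concelation pair + one signature pair each
            "symmetric keys": n * (n - 1),                # one key per direction per pair-relation
            "symmetric key-exchanges": n * (n - 1) // 2,  # two keys exchanged per step
        }

    print(keys_needed(1000))
    # {'asymmetric key pairs': 2000, 'symmetric keys': 999000,
    #  'symmetric key-exchanges': 499500}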

In particular for IT-systems with many participants, the following question arises: when shall the (pairs of) keys be generated and exchanged?

In asymmetric cryptographic systems, the generation can happen without problems before the actual operation of the IT-system or – if participants join later – before the registration of the individual participant. The distribution of the public keys can take place on demand.

In symmetric cryptographic systems, the generation, let alone the distribution, cannot happen in advance for every possible communication connection (at least in IT-systems with many participants) because of the substantially higher number of necessary keys. Hence, in such systems only the keys between a participant and the key-distribution-center(s) are created and exchanged when he registers. All other keys can and must then be created and exchanged on demand. This saves considerable effort, as in IT-systems with many participants generally only a tiny fraction of all possible communication-connections is ever used.

3.1.2.3 Security of the key-distribution limits the cryptographically reachable security

Because a secure primary-key-distribution outside the communication-system that is tobe protected is assumed in the following, its importance shall be emphasized here:

The security reachable by usage of a cryptographic system is at most as good as the security of the primary key-distribution.

Therefore it is reasonable and – as shown in 3.1.1.1 and in exercises 3-4c) and d) – possible not to rely on just a single primary key-distribution, but on several. Thereby, the reachable security can be significantly improved.

Annotation: The security of the key-generation also limits the cryptographically reachable security:

A secret key (k, d or s) can be at most as secret as the random number from which it was created. This random number hence must be – if at all possible – created at the future owner of the key and must not be guessable by an attacker. Date or time may in no case be chosen as random number, and one must not use the simple function random of one's programming-environment. Generation of good random numbers is not trivial.

On the one hand, physical phenomena can be used. Special hardware, which not every personal computer possesses, is needed for this. Examples are


noise-diodes, radioactive processes or turbulences in the fan of an ordinary computer[GlVi 88, DaIF 94].

If only a few real random numbers are needed, they can also come from the user. Firstly, he could throw dice. This might be a bit exhausting, but for the creation of a signing-key for really important things, it is possibly the best idea. Secondly, one can just let the user type in characters ”randomly” and then compress the input inside the computer. Thirdly, sometimes the lowest-order bits of the time intervals between keystrokes are used instead of the entered characters, as they are more random. But this is – at least in combination with the too low sampling rate of some operating systems – problematic.

If one does not completely trust any of the existing methods, one should create several random numbers using different methods and then use their XOR (see figure 3.10). If only one of the numbers is random and secret, then it is provable that the XOR is random and secret as well. One should proceed in this way particularly if the government insists on being involved in the creation of random numbers because it fears that some users could enter ”bad” random numbers – the government cannot demand that users trust foreign random numbers.
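A minimal sketch combining two of the methods above by XOR (the entropy sources shown – the operating system, and user-typed input compressed with a hash – are assumptions for illustration):

    import hashlib, os

    def user_entropy() -> bytes:
        # second method from above: let the user type "randomly", then compress
        raw = input("Type a long random-looking string: ").encode()
        return hashlib.sha256(raw).digest()       # 32 bytes of compressed input

    def combined_random(n: int = 32) -> bytes:    # n <= 32 assumed here
        a = os.urandom(n)                         # random number from the OS
        b = user_entropy()[:n]                    # random number from the user
        # XOR: the result is random and secret if at least one input is
        return bytes(x ^ y for x, y in zip(a, b))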

Figure 3.10: Generation of a random number z used for key generation: XOR of random numbers z1, . . . , zn generated by a device, received from a manufacturer, diced by the user or calculated from time intervals

3.1.3 Basics of security

3.1.3.1 Attack-aims and -successes

During the establishment of a cryptographic system, four phases can be differentiated:

1. Definition of the cryptographic system


2. Choice of the security-parameter

3. Choice of the key(-pair)

4. Processing of messages

While breaking can already be investigated after the first phase in terms of general complexity-parameters, the investigation of the concrete effort (that an attacker can afford) is only worthwhile after the second phase.

For the following attacks, it is assumed that they occur after the third, or even after the fourth, phase. The following attack-successes are distinguished, in decreasing order of strength:

a) Obtaining the key (total break)

b) Obtaining a procedure that is equivalent to the key (universal break)

c) Breaking (i. e. faking or decrypting) only for some single messages5 (message-dependent breaking),

c1) for a self-chosen message6 (selective break)7 or

c2) for a random message (existential break)8.

It is obvious that breaking in terms of a), b) and c1) is not acceptable. If c2) is also ruled out, one is certainly secure enough. But there are weaker cryptographic systems where one just hopes that the reasonable (and, for the attacker, interesting) messages are so rare within the total set of messages that the attacker will never obtain any of them with a success in terms of c2).

In conzelation-systems, one can further distinguish within b) and c):

1. Complete decoding, i. e. the attacker obtains the whole plain-text.

2. Partial information: The attacker only obtains some characteristics of the plain-text, e. g. single bits or the digit sum of the plain-text.

5For ”faking”, a single message usually is a plain-text; for ”decrypting”, it is a cipher-text.

6In conzelation-systems, this is typically an intercepted, encrypted message whose plain-text is interesting to the attacker (so the attacker selects the message). If the attacker created the cipher-text himself, he would learn nothing about others from the corresponding plain-text. In authentication-systems, however, he can freely choose a plain-text for authentication. This authentication can be useful to him when he uses it against others.

7A clear difference to universal breaking only exists if the selective break includes an active attack, see 3.1.3.2.

8In conzelation-systems, only selective message-dependent breaking is reasonable – it is futile to decrypt random cipher-texts that were sent by nobody. Exactly this would be done by existential breaking.


3.1.3.2 Types of attacks

Ordered by ascending strength:

a) Passive. In conzelation-systems, this means that the attacker just observes all the time. In authentication-systems, it means that the attacker just observes until he tries to fake a message.

In conzelation-systems, one further distinguishes:

a1) cipher-text-only attack: The attacker only sees cipher-texts all the time.

a2) known-plain-text attack: The attacker sees the corresponding plain-texts for some cipher-texts, e. g. because some messages are made public after some time. In a weaker form, this occurs when the attacker can guess parts of the plain-text with high probability (e. g. on the basis of the reaction of the receiver, or of events that just happened near the sender, or because those parts are typical letter-openings); many classical conzelation-systems were broken this way.

b) Active. Before the actual attack, the attacker already interferes with the system, e. g. he first makes a signer sign some harmless-looking messages.

One can further distinguish which communication-partner is attacked. In asymmetric systems, only an attack on the partner holding the secret is profitable, as the attacker can encrypt or test signatures himself.

In symmetric conzelation-systems, one distinguishes:

b1) plain-text → cipher-text (chosen-plain-text attack): The attacker can choose plain-texts that are then encrypted and sent by the sender. Example: he visits a bank branch as a client and executes certain transactions, which are then encrypted and sent to the main office.

b2) cipher-text → plain-text (chosen-cipher-text attack): The attacker sends cipher-texts to the receiver9 and sees the decryptions. Example: the attacker is an employee in the bank's main office, but is not allowed to know the key. Now he sends weird cipher-texts (with a feigned sender-address) and observes what the decryption-computer does with them.

According to the annotation above, only b2) is relevant in asymmetric conzelation-systems.

In symmetric authentication-systems, one distinguishes:

b1) plain-text → MAC (chosen plain-text attack10) and

9In order that the receiver uses the corresponding key for decryption, the attacker usually needs to (be able to) state a fitting (faked) sender-address. If the receiver (in symmetric conzelation-systems) used the key he exchanged with the attacker for decryption, the attacker would not obtain new information – he knows the key and could execute the operation himself.

10Until now, this term has only been used for conzelation-systems in the literature.


b2) MAC → (a possible11) plain-text (chosen MAC attack12).

Only b1) plain-text → signature (chosen plain-text attack) is relevant in digitalsignature-systems.

Adaptivity:

• non-adaptive (i. e. the attacker has to choose all his messages in the beginning)

• adaptive (i. e. the attacker can first choose some messages, and later he can choosesome further messages depending on the result, etcetera)

An example to illustrate the sense of this distinction: an attacker takes a crate of beer to a guard who is not averse to the odd drink and gives him beer until the guard needs to use the bathroom. As a rule, the guard will – contrary to his orders – not take the cryptographic device with him to the bathroom. Now the attacker has the opportunity to process several texts with the cryptographic device – but he will certainly not have the calm and the time to think up new clever inputs depending on the outputs of the cryptographic device. So our beer-crate-bathroom attack is a non-adaptive active attack. But if we replace the beer with sleeping pills, we get an adaptive active attack – at least for attackers with good nerves who are able to concentrate in uncomfortable environments.

It is desirable that a cryptographic system allows no, or only minimal, attack-successes even against very strong attackers. So the best option is a cryptographic system that cannot even be existentially broken by adaptive active attacks.

3.1.3.3 Connection between observing/changing and passive/active attacker

While the permission for actions is crucial in the informal definition of the attacker-attributes observing/changing in 1.2.5, passive/active distinguishes actions independently of whether they are allowed or forbidden to the attacker. Hence, observing and passive attacker – as well as changing and active attacker – do not mean exactly the same.

For example, an observing attacker can be a passive attacker if he is interested in the contents of communications on a communication-connection protected by a symmetric conzelation-system, in which he is not allowed to interfere. Or he can be an active attacker if he – as in the example above – visits a bank as a client and executes allowed, but self-chosen transactions, which are then encrypted and sent to the main office.

Conversely, a changing attacker can be a passive attacker if he illicitly obtains copies of the encrypted password-file by bypassing and/or breaking the access-control in another

11For short MACs but potentially long plain-texts, a single MAC must on average correspond to several plain-texts.

12Admittedly, a chosen-MAC attack is a bit theoretical: a stolen authentication-device would expect the plain-text as input and generate the corresponding MAC – not the other way around. But what one does for the sake of consistent systematics. . .


IT-system (changing attack) and tries to obtain the plain-text (passive attack)13. But if the changing attacker is also a legal user of the IT-system, then he is additionally to be considered active, as he may change his own password arbitrarily (and can thereby choose plain-text that ends up encrypted in the password-file).

The examples are illustrated in figure 3.11.

Figure 3.11: Combinations of attacker properties observing/changing and passive/active

    attacker     passive                                active
    ----------------------------------------------------------------------------------
    observing    interested in the contents of          allowed selection of cleartexts, which
                 foreign communication relationships    are then encrypted with a key unknown
                 without interfering with that          to the attacker and transmitted; these
                 communication                          ciphertexts are used by the attacker
                                                        to break the cryptographic system
    changing     breaks access control and tries        breaks access control and tries to
                 to decrypt foreign encrypted files     decrypt encrypted files which enclose
                                                        cleartext partially chosen by the
                                                        attacker

3.1.3.4 Basics of ”cryptographically strong”

Concerning the security of the cryptographic system, one should use information-theoretically secure systems if possible. These are the symmetric conzelation-system ”Vernam-Chiffre” (one-time pad, described in 3.2) and the authentication-codes (described in 3.3). The attacker can compute as much as he wants – it will not help him14.

Three reasons could force a change to systems that are only complexity-theoretically secure:

• The necessary security-service is not achievable with information-theoretically secure systems, for example a digital signature in the narrow sense (see 3.1.1.2).

• The risk of unauthorized disclosure of the exchanged key is higher than the additional security gained by an information-theoretically secure cryptographic system compared to an alternative asymmetric (and therefore only complexity-theoretically secure, see below) system.

13In a certain sense, we are cheating here, as ”changing” and ”passive” refer to different things: ”changing” refers to the retrieval of the password-file, and ”passive” refers to the decryption of the encrypted password-file. If one wants both attributes to refer to the same action, then passive changing attacks would be those attacks where something that should happen (changing) does not happen (passive). For example, a Trojan horse inside a component could refuse the component's service to its environment (denial-of-service attack).

14So the security is (as far as the algorithmic parts of the system are concerned) absolute.


• The effort of exchanging the necessary keys for information-theoretically secure systems is too big. This effort grows without bound with the total length of the messages that need to be protected.

The next most secure systems after information-theoretically secure ones are called ”cryptographically strong”. Independently of the type of system, several things can be said here about security in the sense of cryptography:

1. Many cryptographic systems are in principle breakable: If keys of fixed length l are used, and enough information is available to make the key unique, then an attacker-algorithm could in principle always try all 2^l keys and thereby completely break the system. (This affects most asymmetric systems, concerning conzelation as well as authentication, as there is generally only one possible d or s for every known c or t. It also affects most symmetric conzelation-systems (except the Vernam-Chiffre) under chosen-plain-text attacks with enough ”material”. Moreover, the analogous fact holds e. g. for the faking of a signature for a specific message: the attacker knows that his faked signature Sig for the message x is good enough if it passes the test test(t,x,Sig). As there generally is an upper limit to the length of signatures, he again only needs to try finitely many values of Sig.)

This exhaustive trying needs exponentially many operations, though, so it is not realistic for e. g. l > 100.
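A rough illustration of this growth (the attacker speed of 10^12 keys per second is an arbitrary assumption):

    # number of keys 2^l versus what a very generous attacker trying
    # 10^12 keys per second could search
    TRIES_PER_YEAR = 10**12 * 60 * 60 * 24 * 365

    for l in (40, 56, 80, 128):
        years = 2**l / TRIES_PER_YEAR
        print(f"l = {l:3d}: 2^l = {2**l:.2e} keys, ~{years:.2e} years to search")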

But this means that the best the developer of a cryptographic system may hope for in terms of complexity-theory is a complexity of the attacker-algorithm which is exponential in l.

2. Not only asymptotic ”worst case”-complexity:

Complexity-theory provides mostly asymptotic results.

Complexity-theory deals mostly with ”worst-case”-complexity.

It is nice to know how the security behaves when the security-parameter l (a generalization of the key-length; useful in applications) is sufficiently big. But in the practical application of cryptographic systems, the values of all parameters must be fixed. So the interesting question is: how secure is the system for specific, fixed values of the parameters? Or, asked differently: where does ”sufficiently big” start?

The ”worst-case”-complexity is insufficient for security, and likewise the ”average-case”-complexity (for there should not only be a few keys that cannot be broken; rather, most keys should be unbreakable).

One wishes: the problem must be difficult almost everywhere, i. e. it may be easy only in a negligible fraction of all cases (i. e. if an honest user chooses his key randomly, the probability that the key cannot be broken must be overwhelmingly big).

If the security-parameter is l, then one demands:

If l→∞, then likelihood of breaking → 0.


One hopes: the probability of breaking decreases rapidly, so that l does not need to be overly big.

3. How to define easy and difficult? Essentially, two complexity-classes are needed for cryptographic systems: the things that the normal users must do must be easy, whereas breaking must be difficult. Formally, this is mostly represented by the difference between polynomial and non-polynomial; thus:

key-generation, en- and decryption: easy = polynomial in l
breaking: difficult = non-polynomial in l ≈ exponential in l

Why this special separation? (Items b and c also apply to complexity-theory in general.)

a) More difficult than exponential is not possible, see 1).

b) Closure: Inserting polynomials into polynomials gives again polynomials, i. e.reasonable combinations of polynomial algorithms are polynomial too (with-out needing to prove it).

c) Reasonable computation-models (Turing-machine, RAM-machine) are polynomially equivalent, so one does not need to fix a specific machine-model.

A polynomial of high order would suffice as a lower bound for the attacker-algorithm in practical use (and the simple algorithms are for the most part only of order l^3 or less).

Annotation on ”non-polynomial in l ≈ exponential in l”: no equals sign stands here, as there exist further functions between polynomial (l^k with fixed k) and exponential (2^l), e. g. 2^√l.

4. Why are complexity-theoretical assumptions necessary? (e. g. ”factorization is difficult”, see 3.4.1)

Until now, complexity-theory cannot prove any practically useful lower bounds.

For experts: More exactly, the breaking of many such systems obviously lies in NP (in principle, NP means here: if a solution has been guessed, one can rapidly test whether it is correct; this was the case e. g. in 1) with the signature that was to be faked). Therefore it is obvious for such cases that a proof that breaking such a system is exponential would imply P ≠ NP. But as many scientists have been trying to prove or refute P ≠ NP for many years, we cannot expect such a proof in the near future.

Inside NP, no appropriate lower bounds are known either; hence it would not help if one were content with e. g. l^100.

5. Which assumptions is one willing to make?


The more compact an assumption is, and the longer (and by the more people) it has been examined, the more trustworthy it is. In a certain sense, cryptographic systems that are not information-theoretically secure thus either get broken or become more trustworthy as they age. Nevertheless, one needs to gradually increase the parameter l simply because of advancing computer-performance. (Pay attention if things need to remain secret for a long time, or if signatures need to remain valid, i. e. fake-proof, for a long time!)

Of course, an NP-complete problem would be especially nice, but no such problem– that is also hard in the sense of 2) – has been found yet.

6. What, if the assumption turns out to be wrong?

a) Make other assumptions.

b) More detailed analysis, i. e. fix a specific computation-model (RAM- or Turing-machine? How many tapes does the Turing-machine have? etcetera) and then analyze whether the order of the polynomial is high enough.

7. Proof-intention: If the attacker-algorithm can break the cryptographic system,then it can also solve the assumedly difficult problem.

3.1.3.5 Overview of the cryptographic systems presented in the following

About the crossed-out fields in figure 3.12: such systems do not exist with the specifications from figures 3.4 and 3.7, as justified in item 1) above. (There does exist a system by now that almost has the characteristics of digital signatures and that is information-theoretically secure. But in terms of efficiency, it presently seems not to be workable. Compare 3.9.3 and [ChRo 91].)

Since 1998, field 3 is not entirely empty any more. Until then, no system was known that was practical against active attacks, and no system at all was known against adaptive active attacks [BlFM 88, NaYu 90, Damg 92]. But the system CS – published in 1998 by Ronald Cramer and Victor Shoup [CrSh 98] – has not been proven equivalent to a true standard-problem like the factorization of large numbers (see 3.4.1) or the computation of discrete logarithms (see 3.9.1). Its security is only as strong as the Diffie-Hellman decision-problem is difficult. Though this assumption is not unusual either, it is a stronger assumption than the Diffie-Hellman assumption, which itself is a stronger assumption than the discrete-logarithm assumption. For these reasons, and because systems based on the discrete logarithm are not a main topic of this lecture, CS will not be explained in further detail in the following; instead, a system based on the factorization assumption will be described by introducing the s²-mod-n-generator.

The fields 10 and 11 are empty because no such systems are known. But in contrastto fields 1 and 2, an invention is not impossible.

About the subset-signs:

Horizontally: every asymmetric system can also be used as a symmetric system if the creator of the secret key communicates it to his partner.


Vertically: every information-theoretically secure system is also cryptographically strong, and the latter are automatically well-researched. Also, every system that is secure against active attackers is a fortiori secure against passive attackers. (But pay attention: this does not mean that every single cryptographically strong system must be better than every single well-researched one; i. e. it is conceivable that the s²-mod-n-generator can be broken while DES cannot, because DES is based on a completely different assumption.)

That explains the remaining empty fields: there one can use the system from the field to the right, or the system from the field above, e. g. GMR for 4, 6, 7, 9 and 11, or authentication-codes for 4, 6 and 9. Indeed, there are sometimes more efficient systems for weaker requirements. In fact, this is the only reason why e. g. a pseudo-one-time-pad instead of a real one-time pad, RSA as a signature system instead of GMR, or DES instead of an authentication-code might pay off.

Cryptographic systems which are not even well-researched were not included here. The term is a bit controversial, of course. In particular, those systems for which several similar systems have already been broken were not called well-researched here. (Nevertheless, some people use nonlinear shift registers for symmetric conzelation, as these are even faster than DES.)

Systems which were not even published, but which were only analyzed by their creator and his ”intimates”, deserve even less trust.

3.1.4 Hybrid cryptographic systems

Because symmetric cryptographic systems can be implemented two to three orders of magnitude more efficiently than asymmetric systems, in hardware as well as in software, asymmetric conzelation-systems (and, if necessary, digital signature-systems) are mostly used only to transmit a confidential (and, if necessary, digitally signed) key for symmetric cryptographic systems. With this transmitted key, the actual data can then be encrypted and/or authenticated much more efficiently, i. e. much faster. Such a cryptographic system – consisting of an asymmetric conzelation-system, if necessary a digital signature-system, and a symmetric cryptographic system – is called a hybrid cryptographic system. It combines the comfortable key-management of the former with the efficiency of the latter. A hybrid cryptographic system is of course only as strong as the weakest system used to build it.
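A minimal sketch of a hybrid system (the concrete primitives – RSA-OAEP for the asymmetric part, Fernet/AES for the symmetric part, both from the third-party cryptography package – are assumptions for illustration; the script fixes no particular algorithms):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    d = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    c = d.public_key()

    k = Fernet.generate_key()                       # fresh symmetric session key
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_k = c.encrypt(k, oaep)                  # asymmetric: only the short key

    data = b"the actual, possibly very long, message"
    ciphertext = Fernet(k).encrypt(data)            # symmetric: the bulk data

    # receiver side:
    k2 = d.decrypt(wrapped_k, oaep)
    assert Fernet(k2).decrypt(ciphertext) == data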

A hybrid cryptographic system may be used not only because of its higher efficiency, but also because it uses the transfer-channel more efficiently: long blocks could create too much overhead, and the error-propagation of the symmetric cryptographic system can be better adjusted to the application and to the channel (see 3.8.2). (The latter is only reasonable if authenticity is not important for the application.)


Figure 3.12: Overview of the following cryptographical systems [table of system types (symmetric and asymmetric concelation systems; symmetric authentication systems and digital signature systems) against security levels (information-theoretical; cryptographically strong against active or passive attacks; well-researched; mathematical chaos), with fields numbered 1–11; entries include the Vernam-Chiffre (one-time pad), authentication codes, CS, the pseudo-one-time-pad with s²-mod-n-generator, systems with s²-mod-n-generator, GMR, RSA and DES; crossed-out fields mark impossible systems, subset-signs mark fields dominated by the known system]

3.2 one-time-pad

This section deals with conzelation-systems that are information-theoretically secure and symmetric. They thus consist of the components represented in figure 3.2. It is shown that the simplest example, the one-time pad, is so perfect that one needs nothing further.

Broadly, security means here that an attacker who obtains a cipher-text S (of arbitrary length) learns nothing about the plain-text. This is achieved by requiring that for each possible plain-text x at least one key k exists such that k(x) = S. (That means: x actually is a possible content of S, namely exactly in the case where the secret key is k.) So that the different plain-texts do not become differently probable, it is required that – in the normal case, where all keys have the same probability – for each x equally many k exist, usually exactly one.

The example of the one-time pad shows that this definition is very natural.


Example: Assume we want to encrypt 2 bits and have 2 key bits. The encryption function is addition mod 2 (= bit-by-bit XOR). Now the attacker sees e. g. the cipher-text S = 01. In this case it could have been either the plain-text x = 00 with the key k = 01, or x = 01 and k = 00, or x = 10 and k = 11, or x = 11 and k = 10, cp. figure 3.13. Thus all plain-texts are equally possible.

Figure 3.13: One ciphertext can correspond to all possible cleartexts

    ciphertext S = 01
    key k:        01  00  11  10
    cleartext x:  00  01  10  11

This was the binary one-time pad for 2 bits. One can extend15 this to arbitrarily many bits, and to other additions instead of modulo 2 (for example mod 26 for letterwise encryption by hand).

The left of figure 3.14 represents the situation of figure 3.13 for all possible cipher-texts, whereby for lack of space the values of the keys were omitted.

The right of figure 3.14 represents an insecure cipher. This is recognizable from the fact that not every cipher-text can correspond to every plain-text. An attacker who observes e. g. the cipher-text 10 thereby learns that e. g. the plain-text 11 is not being transferred at the moment. (Of course, even the insecure cipher in figure 3.14 is better than nothing: if a 1-bit key-section is used only once for en- and/or

15Recall a warning from 3.1: perfect concealment is supposed to work rather independently of the probability distribution of the plain-text space and of the (source-)coding selected for the plain-text (and in a certain sense the Vernam-Chiffre is the optimal concealment). The binary Vernam-Chiffre now shows particularly clearly, and in a terrifying way, that this is not the case: if the plain-text messages are coded unary, the binary Vernam-Chiffre is totally ineffective. In addition, in the description of the Vernam-Chiffre, the message length (more exactly: the cipher-text length) should be specified a priori, so that it cannot betray anything about the content of the message. Then only the problem remains that, if this length is selected too short, long messages are divided into several short ones, and then the message frequency could betray information about the content of the message. With a sufficiently large cipher-text length, this should be a truly negligible problem. . .


decryption of a non-redundantly coded16 2-bit text-section, an attacker will never learn exactly which plain-text is transferred. He can compute as long as he wants. In this limited sense this cipher, too, is information-theoretically secure: the attacker is able to gain information about the plain-text, but not enough to determine it exactly. In his seminal work about information-theoretically secure concealment [Sha1 49], Claude Shannon calls this limited information-theoretical security ideal secrecy – a very unfortunate choice of terms. The information-theoretical security that is taken as a basis and defined in that work, Claude Shannon calls perfect secrecy. In newer literature, this perfect secrecy is designated as unconditional secrecy.)

Figure 3.14: Every ciphertext should correspond to all possible cleartexts (left: the Vernam-Chiffre, where each of the ciphertexts 00, 01, 10, 11 can result from every cleartext under a suitable key; right: an insecure cipher – addition of one key-bit mod 4 to two text-bits – where each ciphertext corresponds to only two cleartexts)

The Vernam-Chiffre in general:

If someone wants to use the Vernam-Chiffre in a group G to encrypt a character string of length n (of characters of G), then one has to randomly choose, and confidentially exchange17 in advance, a character string of length n as key, say k = (k1, . . . , kn).

16If the 2-bit text-section contains sufficient redundancy, then the insecure cipher can be completely broken. If it is known that the plain-texts 00 and 10 do not appear, then behind the cipher-text 00 there always hides the plain-text 11, behind 01 always 01, behind 10 always 01, and behind 11 always 11.

17To answer the question ”What good is the Vernam-Chiffre at all, if, in order to exchange a message confidentially, I first have to exchange a key of the same length confidentially? I could just as well exchange the message itself confidentially!”: concerning the length, the Vernam-Chiffre indeed gains nothing, but the key can be exchanged at any time that is convenient for both communication partners, before the message has to be exchanged confidentially. When the situation for confidential


The i-th plain-text character xi is encrypted as

Si := xi + ki.

It can be decrypted by

xi := Si − ki.

For the security proof, see exercise 3-7 b). (It is clear that one can regard each plain-text character individually, since they are all encrypted independently.) It is also clear that this cipher is secure against active attacks (see exercise 3-7 e)): even if the attacker receives some plain-text–cipher-text pairs of his choice, he only learns something about the key characters used there, which (if he later really wants to decrypt something) will never occur again18.
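A minimal sketch of the Vernam-Chiffre over the group Z26, i. e. letterwise mod-26 addition as mentioned above (plain-texts are assumed to consist of the uppercase letters A–Z):

    import secrets

    def gen_key(n: int) -> list[int]:
        # k = (k_1, ..., k_n), chosen uniformly at random
        return [secrets.randbelow(26) for _ in range(n)]

    def encrypt(plain: str, key: list[int]) -> str:
        # S_i := x_i + k_i (mod 26)
        return "".join(chr((ord(x) - 65 + k) % 26 + 65) for x, k in zip(plain, key))

    def decrypt(cipher: str, key: list[int]) -> str:
        # x_i := S_i - k_i (mod 26)
        return "".join(chr((ord(S) - 65 - k) % 26 + 65) for S, k in zip(cipher, key))

    k = gen_key(5)
    assert decrypt(encrypt("HELLO", k), k) == "HELLO"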

Concerning computation, the Vernam-Chiffre is simple and fast, and it causes no message expansion. Its only disadvantage is the length of the key: it is just as long as the message. Therefore the Vernam-Chiffre is considered impractical for many purposes. (With the improvement of storage media it becomes better applicable, though, at least in cases where the participants can exchange e. g. disks in advance.)

One can easily see that for information-theoretical security the keys must be as long as described in the following. Let K be the set of keys, X the set of plain-texts and S the set of cipher-texts that occur at least once. First, |S| ≥ |X| must hold, because for a fixed key all plain-texts must be encrypted differently, so that one can decrypt them uniquely. Further, we required for security that for all S and x a k exists with k(x) = S. For fixed x, each cipher-text S requires a different key. Thus |K| ≥ |S| holds, and altogether |K| ≥ |X|.

This implies that at least as many keys as plain-texts must exist. If the plain-texts are efficiently coded19, then the keys must be just as long as they are.

Annotation 1: With the designations just introduced, the definition of information-theoretical security, for the case that all keys are selected with the same probability, reads briefly:

∀S ∈ S ∃const ∈ N ∀x ∈ X : |{k ∈ K | k(x) = S}| = const. (3.1)

Since this is a central definition, it will now be described in more detail:

communication is favorable, the message will usually not exist yet. Thus the favorable situation is used for confidential communication in advance, ”on stock”.

18”Again” refers to the concrete, physically individual key bits, not their values. Of course, every bit value can, indeed must, occur again and again in a randomly selected, infinite bit sequence if only its value is regarded as the characteristic. The same applies not only to individual bits, but also to every finite bit string.

19To prevent a misunderstanding: here ”coding” has nothing to do with concealment or authentication; it only means that with bit strings of length l altogether 2^l values can be distinguished, i. e. 2^l plain-texts can be efficiently coded – or, expressed differently: represented. The same applies to the values of the keys, from which the appropriate length of the (coding of the) keys results.


Because N = {1, 2, 3, 4, . . .}, it holds that const ≥ 1. The constant const may depend on the concrete cipher-text S in the above definition. In many information-theoretically secure conzelation-systems, e. g. the one-time pad, it does not; there const = 1, as mentioned before. One may therefore be tempted to change the order of the quantifiers in the definition: ∃const ∈ N ∀S ∈ S ∀x ∈ X : |{k ∈ K | k(x) = S}| = const. Thereby the definition becomes more restrictive, however, without making sufficient conzelation-systems any more secure. So one stays with the weaker definition. For the same reason, one does not adopt the following even more simplified definition: ∀S ∈ S ∀x ∈ X : |{k ∈ K | k(x) = S}| = 1.

Now some – admittedly rather exotic – conzelation-systems are specified which satisfy the general definition (3.1), but not the narrower ones. Example for const = 2: we extend the key space of the one-time pad by one bit which is not used at all during en- and decryption. Then each plain-text can be mapped to each cipher-text by exactly 2 keys, which differ exactly in the unused bit. Example of const depending on S: we double the key length of the one-time pad, but do not use every second key bit for en- and decryption at all. Then each cipher-text of length l can be mapped to each plain-text by 2^l keys, which differ exactly in the unused bits. Thus const depends on the cipher-text, more exactly: on its length; it holds that const = 2^l.

Annotation 2: What is information-theoretical, thus absolute, about this security is that we did not worry in any way about whether something might be hard to compute: we accepted the worst case that the attacker has an exact table of which k, x and S belong together. The later systems of secrecy with short keys will not be secure in this sense; e.g. there are many asymmetric systems of secrecy where for each c exactly one possible d exists (see Figure 3.4), and therefore, since the attacker knows c, for each cryptotext S exactly one possible plain text x. There one has to rely entirely on the fact that x is very hard to compute.

Annotation 3: The usual definition of information-theoretically secure systems of secrecy [Sha1 49] goes a bit further; ours can then be proven equivalent to it. (In concrete proofs different authors usually also use the simpler formulation (3.1) above.) It says: No matter what a-priori information the attacker has about the plain text, he gains no information by seeing the cryptotext. The a-priori information is represented by a probability distribution which the plain texts have from his view (e.g. for letter openings W(Love) = 0.4, W(very) = 0.4, W(xrq,q) = 0.0001). The keys also have probabilities and are selected independently of the plain text. Therefore the cryptotexts have probabilities, too. The a-posteriori probability of a plain text x, if the attacker saw the cryptotext S, is the conditional probability of x given S, W(x|S). The security demand is then

∀S ∈ S ∀x ∈ X : W (x|S) = W (x). (3.2)



This means that for someone who does not know the key, plain text x and cryptotext S are stochastically independent. On the equivalence of the definitions (3.1) and (3.2): By Bayes' rule,

W(x|S) = W(x) · W(S|x) / W(S).

Hence (3.2) is equivalent to

∀S ∈ S ∀x ∈ X : W(S|x) = W(S). (3.3)

(We regard only such cryptotexts S as can occur at all, so that W(S) > 0.) We show that this is equivalent to

∀S ∈ S ∃const′ ∈ R ∀x ∈ X : W(S|x) = const′. (3.4)

(3.3) ⇒ (3.4) is obvious with const′ := W(S). Conversely we show const′ = W(S):

W(S) = Σ_x W(x) · W(S|x) = Σ_x W(x) · const′ = const′ · Σ_x W(x) = const′.

(3.4) already looks very similar to (3.1): In general W(S|x) = W({k | k(x) = S}), and when all keys are equally probable, W(S|x) = |{k | k(x) = S}| / |K|. Then (3.4) is equivalent to (3.1) with const = const′ · |K|.

3.3 Authentication codes

Authentication codes are systems of authentication that are information-theoretically secure and symmetric. They consist of the components shown in Figure 3.6. Before we give an example (Figure 3.15), we regard informally what information-theoretical, thus absolute, security means in this case.

Security means that, if an attacker tries to forge a message, the receiver notices on the basis of the (non-matching) MAC that the message is wrong.

This cannot hold with probability 1: there must be some correct MAC, and the attacker can always be lucky and guess just that one. More exactly, by completely random guessing one obtains the correct MAC with probability 1/W if the range of values for the MACs has W elements (in particular, with σ-bit MACs, with probability at least 2^{-σ}).

The best achievable security is that there is no better possibility for an attacker than purely random guessing.

Annotation: Note however that the attacker cannot check locally here whether he guessed correctly, i.e. the case from 3.1.3.4 does not apply. We will provably achieve that a falsification attempt is recognized almost surely. In contrast, an attacker with arbitrarily much computing power could, with a digital signature system in the sense of Fig. 3.7, produce many signatures that look perfectly correct, because he knows the test which they must pass (see however 3.9.2 and 3.9.3).

Authentication codes thus have a security advantage in principle over digital signatures; but for that they are more impractical, see §3.1.1.2.

Example 1:

Before we formalize the security statement, we regard Fig. 3.15, which represents a small authentication code (from the view of the attacker). Here 1 message bit ∈ {H, T} (from Head, Tail) is to be authenticated; the key is 2 bits long, and the MAC 1 bit. As in all following systems, all keys are selected with the same probability.

The possible results (text with MAC) are in the header row of the table; in the body one sees for which plain text this result arises under the respective key. (One can represent this code much more briefly, see task 3-8 b).) As with Vernam's cipher, each key section (here the two bits in each case) is used only once.

             text with MAC
    key    H,0   H,1   T,0   T,1
    00      H     -     T     -
    01      H     -     -     T
    10      -     H     T     -
    11      -     H     -     T

Figure 3.15: A simple authentication code

On the basis of the MAC length we know that recognition of falsifications is possible with probability at most 1/2 in this example. Now we want to show that actually each falsification is recognized with probability 1/2.

We regard only the case that the attacker would like to simulate the message T. If the correct sender did not send anything at all, all 4 keys are equally probable; under two of them T has MAC 0, under the other two MAC 1. Thus the attacker selects the wrong value with probability 1/2.

The interesting thing about an authentication code is, however, that security still holds if the correct sender sends the message H with the correct MAC, and the attacker intercepts this and tries to replace it by T.

1st case: The attacker intercepts H,0. Then the keys k = 00 and k = 01 are still possible. When the key is k = 00 the attacker has to select 0 as MAC for T, but when it is k = 01 he has to choose 1. Again the attacker guesses wrongly with probability 1/2.

2nd case: The attacker intercepts H,1. Analogous (for more see task 3-8 b)).
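The case analysis can be replayed mechanically. A small sketch (function and variable names ours), using the fact that in the code of Fig. 3.15 the MAC of H is the first key bit and the MAC of T the second:

    from fractions import Fraction

    def mac(key, msg):                     # code(k, x) from Fig. 3.15
        return key[0] if msg == 'H' else key[1]

    keys = [(0, 0), (0, 1), (1, 0), (1, 1)]

    # the attacker intercepts (H, 0) and tries to forge (T, guess)
    possible = [k for k in keys if mac(k, 'H') == 0]   # keys 00 and 01 remain
    for guess in (0, 1):
        p = Fraction(sum(mac(k, 'T') == guess for k in possible), len(possible))
        print('forged MAC', guess, 'accepted with probability', p)   # 1/2 each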

A more exact definition of security: A more precise definition of when an authentication code is secure with error probability ε describes exactly what we did in the example above: "Even if an attacker sees a correct message (x, MAC), eliminates it and tries to send the plain text y instead, no matter how skillfully he chooses MAC′: The falsification will be recognized with probability ≥ 1 − ε, i.e. the probability for MAC′ = code(k, y) is at most ε (where k is the correct secret key)."

We just have to state precisely over what the probability is actually formed. It is over exactly the set of keys that are still possible from the attacker's view (as in the 1st case of the example the set {00, 01}). The attacker's view implies that only such keys k come into question for which MAC = code(k, x) holds.

Completely formally: An authentication code code with key set K (and probability distribution W on K) is called secure with error probability ε if the following holds: ∀x ∀MAC ∈ code(K, x) ∀y ≠ x ∀MAC′:

W(MAC′ = code(k, y) | MAC = code(k, x)) ≤ ε.

Example 2 (improvement of the code of Fig. 3.15):
If one wants to reduce the error probability of the code from Fig. 3.15 from 1/2 to 1/2^σ, one can append σ MACs to one message bit in the above sense, i.e.

x, MAC_1, …, MAC_σ.

For this, 2σ key bits must have been exchanged before. The attacker guesses each single MAC correctly only with probability 1/2, and the guesses are all independent; therefore he guesses correctly overall only with probability (1/2)^σ.

If one wants to use the code on messages of length l bits, then one needs l·2σ key bits, and each bit is authenticated individually in the above sense. (The total error probability changes somewhat!) A substantially more efficient code is regarded in task 3-8 f).

Outlook: limits of authentication codes.
Authentication codes, just like information-theoretically secure systems of secrecy, need keys of a certain minimum length.

We almost showed that for an error probability ε one needs a key set containing at least ⌈1/ε⌉ elements: it was clear that ⌈1/ε⌉ MACs must be possible for the message y that could be counterfeited, and to each possible MAC a possible key must belong.

It is also evident that there must be even more keys, because on the basis of the correct message (x, MAC) the attacker can generally exclude many keys. One can show (not that difficult) that there must be ⌈1/ε²⌉ keys [GiMS 74].



If one chooses ε = 2^{-σ}, then one needs keys of length at least 2σ. This is not as bad as the lower bound for systems of secrecy, where the key had to be as long as the message. In practice a value of approximately 2^{-100} would suffice for the error probability. (Beyond that, the probability that the computer used for checking the MACs fails unnoticed and lets falsified messages pass as valid, or any of the other risks of practical life, dominates anyway.)

Therefore the code from Fig. 3.15 is optimal for 1-bit messages, and so is the first extension from example 2. The key length, though, need not grow with the length of the message as in the 2nd extension of example 2: with the substantially more efficient code from task 3-8 f) one can, e.g., authenticate messages of length 100 with 200 key bits and error probability 2^{-100}. This code has been extended in such a way that one can authenticate a message of length l with an error probability of 2^{-σ+1} with a key of length about 4σ·ld(l). That is just short of optimal, but practicable.

If one authenticates n messages consecutively with one key and wants an error probability of ≤ 2^{-σ}, then one needs at least σ key bits per message; but one no longer needs 2σ per message: there is a procedure that gets along with (n + 2)σ bits for n short messages [WeCa 81, §4].

The idea is the following: For each of the short messages a MAC is computed, always with the same 2σ key bits, and this MAC has to be kept secret. The MACs are never output directly (otherwise an attacker could falsify arbitrarily after having observed 2 MACs), but are each encrypted with a one-time pad. For the encryption of the n MACs with the one-time pad, n·σ key bits are needed, thus (n + 2)σ bits altogether.
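A sketch of this idea in the style of [WeCa 81] (the concrete hash family, the prime and all names are our illustrative choices, not the exact construction of the paper): a secret function h_{a,b}(m) = (a·m + b) mod P plays the role of the reused "2σ key bits", and each of its outputs is masked with a fresh one-time-pad piece before being output:

    import secrets

    P = (1 << 61) - 1                  # a fixed prime; MACs live in Z_P

    def keygen(n_messages):
        a, b = secrets.randbelow(P), secrets.randbelow(P)         # reused part
        pads = [secrets.randbelow(P) for _ in range(n_messages)]  # n OTP pieces
        return a, b, pads

    def tag(key, i, message):
        a, b, pads = key
        return ((a * message + b) + pads[i]) % P   # hash, then one-time-pad mask

    key = keygen(3)
    t0 = tag(key, 0, 42)               # MAC for the first message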

3.4 The s²-mod-n pseudo-random bit-sequence generator: cryptographically strong secrecy

Now we know: If one wants information-theoretically secure secrecy, one can firstly only take a symmetric method, which means one has problems with the key distribution, and secondly the keys must be at least as long as the message. Therefore in practice one almost always contents oneself with cryptographic security. (The red telephones between Moscow and Washington allegedly use Vernam's cipher.)

In this section we regard the second-best security level, i.e. cryptographically strong symmetric and asymmetric secrecy. The asymmetric system has, however, the disadvantage that it is definitely insecure against active attacks. In many practically relevant cases one would therefore prefer RSA, although RSA is not cryptographically strong (§3.6).

The symmetric and the asymmetric system of secrecy are based on the same cryptographically strong PRBG. Such a cryptographically strong PRBG is of interest in its own right, too, because the generation of genuine random numbers is a considerable problem in practice.

Furthermore, the mathematical and cryptographic bases for this system are the same as those needed for GMR and RSA, since all these systems are based on the factoring assumption.

3.4.1 Basics for systems with factorizing assumptions

In all asymmetric cryptographic systems someone must select a secret (a decoding or signature key), about which he can publish some information (the encoding or test key). The latter must on the one hand make it possible to examine what he does with his secret, but on the other hand must not permit deducing the secret with reasonable expenditure. In 1976, when [DiHe 76] was published, the idea that such things are possible at all was taken to be rather sensational.

Nearly all asymmetric systems that are seriously discussed at present are based on only two different ideas for such secrets. We regard those based on the factoring assumption. (The other idea is the discrete logarithm, see §3.9.1, and is based on similar elementary number theory.)

Idea for secrets:
The secret essentially consists of two large, independently and randomly selected prime numbers p and q. (Large means at present circa 300 to 1000 bits long.) The public information is essentially the product

n = p · q.

For this to be meaningful, one has to be able to:

1. select large prime numbers in reasonable time (multiplying them is easy in any case),

2. while it must not be possible to factorize such products with reasonable expenditure.

3. Moreover, having a secret is of course not enough: there must be message-dependent functions, e.g. for decoding or signing, which can only be computed efficiently with the help of p and q, while for the reverse operations n suffices.

For some of the systems a few special characteristics of p and q are required, e.g. in this §3.4 that

p ≡ q ≡ 3 (mod 4).

Concerning the factoring assumption:
According to §3.1.3.4 one cannot prove at present that factoring is difficult; one must make an assumption. It says that for each fast (factoring) algorithm F the probability that F is actually able to factorize a number of the special shape p·q becomes rapidly smaller the larger the length l of the factors gets. This l is then the security parameter. One sees why worst-case or average-case complexity would not suffice: even if F could, e.g., quickly factorize every 1000th number of each length, F could determine the secrets of 1/1000 of the participants. More exactly one requires that the remaining probability sinks faster than 1 divided by any polynomial.

Assumption: For each (probabilistic) polynomial algorithm F and each polynomial Q the following holds: There exists an L such that for all l ≥ L: if p, q are selected as independent random prime numbers of length l and n := p·q, then

W(F(n) = (p, q)) ≤ 1/Q(l).

This is the assumption with which one works in proofs; for concrete systems used in practice, what one actually assumes is rather: there is no algorithm F that, for the concrete length l in use (e.g. 1024), factorizes more than a certain proportion (e.g. 2^{-100}) of the numbers in less than a century on the largest available computer. While no "fast" algorithms are known that factorize all numbers n, fast ones are known for the case that p+1, p−1, q+1 or q−1 has only small prime factors [DaPr 89]. In order to avoid this, one demands:

p + 1 must have (at least) one large prime factor.

p − 1 must have (at least) one large prime factor.

q + 1 must have (at least) one large prime factor.

q − 1 must have (at least) one large prime factor.

Here the large prime factors should each be at least 100 bits long [BrDL 91].

For very large p and q (and therefore in particular asymptotically) these demands are fulfilled "by themselves" with very large probability. For practical use this is sufficiently probable for prime numbers that are substantially longer than 500 bits [BrDL 91]. In order to give concrete hints for the dimensioning of cryptographic systems based on the factoring assumption, the state of factoring is reported briefly. By means of

• a single "supercomputer" (array processor),

• one hundred workstations connected by a local area network, and

• world-wide distributed UNIX systems connected by electronic mail, whose idle time was used,

numbers with 100 to 130 decimal places could be factorized in 1989 within a month [LeMa1 89, LeMa1 90, Silv 91, Schn 96]. In 1999 one arrived at 155 decimal places [Riel 99]. In [Schn 96] one can read that the factoring algorithm quadratic sieve, with which the factoring records specified above were obtained in 1989, is the fastest known one for numbers n with up to 110 decimal places. Its running time amounts to

L_quadratic sieve(n) = exp(√(ln(n) · ln(ln(n)))).



The running time of factoring algorithms is, as usual, specified in the time unit needed for the multiplication of two numbers of half length, for example for the computation of n = p·q.

For numbers n with more than 110 decimal places^21 the factoring algorithm general number field sieve, in use since 1993, is faster and the fastest known one. Its running time amounts to

L_general number field sieve(n) = exp(1.923 · (ln(n) · (ln ln(n))²)^{1/3}).

For numbers with approximately 130 decimal places the quadratic sieve needs double the computation cost for every 4 further decimal places; the general number field sieve needs this only for every 5 further decimal places. In practice l should nowadays be selected within the range of 350 to 2048 bits, see task 3-11 a), so that the number n which is to be factorized has a length between 700 and 4096 bits.

Concerning the production of prime numbers:

• First it is to be noted that there are still enough prime numbers even among very large numbers: According to the prime number theorem [Kran 86, §1.15] the following holds for the number π(x) of primes up to a certain bound x:

π(x)/x ≈ 1/ln(x).

If one regards the numbers of length l bits, then on average every (l·ln(2))-th is a prime number.

• According to the Dirichlet theorem [Kran 86, §1.15] on average half of all prime numbers are ≡ 3 (mod 4) (and a quarter each is ≡ 3 resp. 7 (mod 8)). These two theorems mean on the one hand that there is at least no danger that someone factorizes all numbers n of the special form p·q with a fixed length l by compiling a list of all applicable prime numbers. On the other hand they are needed for the prime number production.

• The principle of the fast production of random prime numbers p is equally simple with additional characteristics, e.g. p ≡ 3 (mod 4):

    while no prime number found yet do
        select a random number p of suitable size^22
        (if necessary e.g. with p ≡ 3 (mod 4));
        test whether p is a prime number
    od.

One will succeed after about l·ln(2) runs.

21 If one equates the specified running times of quadratic sieve and general number field sieve and solves for ln(n), one gets ln(n) ≈ 286. This corresponds to 124 decimal places. The difference to the 110 decimal places specified by Bruce Schneier may be explained by the fact that also for the quadratic sieve a factor stands in front of the root which, since it lies quite near 1, was set to 1 in the run-time formula.




• The test whether p is prime cannot be done by simply factorizing p and checking that p has no proper divisors, because we just assumed that factoring is difficult.

Fortunately there are faster tests, even if these have an exponentially small error probability. The best known method is the Rabin–Miller test. It is particularly simple for numbers p ≡ 3 (mod 4). The following holds:

p prime ⇒ ∀a ∈ Z*_p : a^{(p−1)/2} ≡ ±1 (mod p)

(derivation in §3.4.1.5), while for composite p it holds for at most 1/4 of all appropriate a. (The notation Z*_p, introduced in more detail in the following, means {1, 2, …, p−1} if p is a prime number.) One thus verifies this formula for σ random values a. If it fails at least once, p is certainly not prime. Otherwise one can say, in a certain sense, that p is prime with probability at least 1 − 4^{-σ}.

The error probability is not disturbing in view of the factoring assumption, since there is an error probability of similar size anyway, i.e. the probability that F can factorize. In practice, however, one checks whether p contains quite small prime factors before applying this test: one compiles (once and for all) a table of the first prime numbers, e.g. all of word length ≤ the word length of the computer used, and divides by all of them on trial. Only then is the exponentiation, see e.g. [Kran 86] (§1, §2), worthwhile.

Referring to computing with and without knowledge of p and q:
The calculations with p and q resp. with n are nearly all calculations in residue class rings. Therefore the most important facts about calculating in residue classes are presented in the following. In particular it is distinguished how easily the operations can be executed. (If required, see e.g. [Lips 81, Lune 87, ScSc 73, Knut 97] or any introduction to so-called elementary number theory for more information.)

3.4.1.1 Calculating modulo n

Here n is completely arbitrary. These operations can be executed by participants with a secret as well as by participants without one.

• Z_n means the residue class ring (mod n). It is generally represented by the numbers {0, …, n−1}, and counting in it is probably intuitively clear to most readers. Nevertheless the fundamental definitions are repeated briefly here:

83

3 Cryptology Basics

The residue class ring (mod n) is defined more formally via a congruence relation ≡: For a, b ∈ Z the following holds:

a ≡ b (mod n) :⇔ n | (a − b).

(The sign '|' stands for 'is a divisor of', i.e. x | y :⇔ ∃z ∈ Z : y = x·z.) Thus a ≡ b (mod n) holds iff a and b leave the same remainder on division by n.

Now one assigns to each a ∈ Z a residue class ā := {b ∈ Z | a ≡ b (mod n)}. One can calculate with these classes, because

a1 ≡ a2 (mod n) ∧ b1 ≡ b2 (mod n) ⇒ a1 ± b1 ≡ a2 ± b2 (mod n) ∧ a1·b1 ≡ a2·b2 (mod n)

holds, as one can easily recalculate. (I.e. in order to add two classes ā, b̄, one can take two arbitrary elements from them and will always receive the same result.) In general one concretely takes the representatives from {0, …, n−1} and reduces the result modulo n; this is also the most usual representation in the computer. For "computing by hand" one takes the representation symmetrical around zero, thus for odd n {−(n−1)/2, …, 0, …, (n−1)/2} and for even n, somewhat less symmetrically, {−n/2 + 1, …, 0, …, n/2}. Most of the usual arithmetic rules are valid; more exactly: Z_n is a commutative ring with 1.

• Addition, subtraction and multiplication in Z_n are "easy". One can calculate with the long numbers consisting of several computer words in principle as with multi-digit decimal numbers in school. (If l is the length of n, then the expenditure of plus and minus is O(l), the expenditure of multiplication O(l²) – no matter whether one calculates in the integers or (mod n) [Lips 81]. There are also faster algorithms, but for numbers of the orders of magnitude usual in cryptography the differences are not yet that large.)

The exponentiation a^b (mod n) with an exponent b of the same order of magnitude is "easy"^23, too. It is often used (e.g. in the prime number test above). However, simply multiplying b times would take far too long here. One uses so-called "square and multiply" algorithms, which process the exponent in binary; let b = (b_{l−1} … b_0)_2 be the binary notation. Here the computation from the left is outlined (one can also begin from the right):

One calculates in sequence a^{(b_{l−1})_2}, a^{(b_{l−1}b_{l−2})_2}, and so on. Thereby one gets from a^{(b_{l−1}…b_{l−i})_2} the next value, depending on the next exponent bit, as

a^{(b_{l−1}…b_{l−i}0)_2} = a^{2·(b_{l−1}…b_{l−i})_2} = (a^{(b_{l−1}…b_{l−i})_2})²

23 As a clue: In software on PCs available in 1997, such an exponentiation takes on the order of 0.1 s if all three parameters are 512 bits long. It depends of course on the exact performance of the computer and the quality of the implementation.



resp.

a^{(b_{l−1}…b_{l−i}1)_2} = a^{2·(b_{l−1}…b_{l−i})_2 + 1} = (a^{(b_{l−1}…b_{l−i})_2})² · a.

Thus one squares per binary digit, and at a 1 one additionally multiplies – in each case (mod n).

Therefore the expenditure for an exponentiation a^b (mod n) with an exponent b of the same order of magnitude is at most O(l³), because squaring (and if necessary multiplying) is passed through l times. If the exponent b is shorter – leading zeros struck away – one squares (and if necessary multiplies) once less for each bit by which it is shorter. If |b| is the length of the exponent b, the expenditure for the calculation of a^b (mod n) is therefore at most O(l²·|b|).

Example: 7^22 (mod 11). 22 is (10110)_2. Thus we calculate:

7^{(1)_2} := 7;
7^{(10)_2} := 7² ≡ 5;
7^{(101)_2} := 5² · 7 ≡ 3 · 7 ≡ −1;
7^{(1011)_2} := (−1)² · 7 ≡ 7;
7^{(10110)_2} := 7² ≡ 5.

• Z*_n denotes the multiplicative group of Z_n, i.e. the set of all elements that have an inverse. (One can easily show that this is really a group.) The following holds:

Z*_n = {a ∈ Z_n | gcd(a, n) = 1}.

The calculation of the inverse is likewise "easy". One uses the so-called extended Euclidean algorithm. In general it calculates for a, b ∈ Z the gcd and two integers u, v with

u·a + v·b = gcd(a, b).

This is illustrated by just one example: a = 11, b = 47. First we use the normal Euclidean algorithm, in which the previous divisor is iteratively divided by the previous remainder:

47 = 4·11 + 3;
11 = 3·3 + 2;
3 = 1·2 + 1.

Now one starts with the last line to represent the gcd as a linear combination of the divisor and the dividend:

1 = 3 − 1·2              (now substitute 2 by 3 and 11)
  = 3 − 1·(11 − 3·3)
  = −11 + 4·3            (now substitute 3 by 11 and 47)
  = −11 + 4·(47 − 4·11)
  = 4·47 − 17·11         (check: 4·47 = 188, 17·11 = 187)

(One can optimize this, especially concerning storage, see the literature.)

In particular, for inverting a we calculate with the extended Euclidean algorithm integers u, v with

u·a + v·n = 1.

Then

u·a ≡ 1 (mod n)

obviously holds, and this just means that u is the inverse of a. In our example we have thus obtained

11^{−1} ≡ −17 ≡ 30 (mod 47)

(and 47^{−1} ≡ 4 (mod 11), as well as 11^{−1} ≡ −17 ≡ 3 (mod 4) and 47^{−1} ≡ 4 (mod 17), too.)^24

3.4.1.2 Number of elements of Z*_n and the connection with exponentiation

Here, first, characteristics of Z_n are shown where the knowledge of the secret (p, q) helps:

• Euler's φ function is defined as

φ(n) := |{a ∈ {0, …, n−1} | gcd(a, n) = 1}|,

whereby for arbitrary integers n ≠ 0: gcd(0, n) = |n|, see e.g. [Lune 87]. It follows immediately from these two definitions that

|Z*_n| = φ(n).

Specifically for n = p·q with p, q prime numbers and p ≠ q one can calculate φ(n) easily:

φ(p·q) = (p − 1)(q − 1).^25

24 Therewith one sees that the equation Z*_n = {a ∈ Z_n | gcd(a, n) = 1} shown above is correct: We just determined an inverse for each a with gcd(a, n) = 1. Conversely: If there is an inverse, thus a·a^{−1} ≡ 1 (mod n), then n | (a·a^{−1} − 1), thus there is a v with a·a^{−1} − v·n = 1. If a and n had a common divisor d, then d would also divide 1.

25 Those numbers that have a common factor with n (i.e. with gcd ≠ 1) are namely 0 once, then p, 2p, …, (q−1)p, and last q, 2q, …, (p−1)q, and these 1 + (q−1) + (p−1) = p + q − 1 numbers are all different for p ≠ q.



When n (= p·q) is given, calculating φ(n) is as difficult as finding p and q [Hors 85]: Expanding φ(p·q) = (p−1)(q−1) = p·q − (p+q) + 1 = n − (p+q) + 1, the following holds:

p + q = n − φ(p·q) + 1.   (∗∗)

Furthermore

(p − q)² = p² − 2pq + q² = (p + q)² − 4pq = (p + q)² − 4n.

Extracting the root gives

p − q = √((p + q)² − 4n).

Inserting (∗∗) yields p − q = √((n − φ(p·q) + 1)² − 4n). Adding (∗∗) yields 2p = n − φ(p·q) + 1 + √((n − φ(p·q) + 1)² − 4n).

• For some cryptographic systems (e.g. RSA) one needs the following connection between exponentiation and the number of elements. Fermat's little theorem (in Euler's generalization) says: For all a ∈ Z*_n:

a^{φ(n)} ≡ 1 (mod n).^26

3.4.1.3 Connections between Zn and Zp, Zq

(For n = p·q with p, q prime numbers and p ≠ q.) This will later serve the purpose that the owner of the secret (p, q) can compute secret things modulo p and q, while all other participants compute modulo n.

The basis for this connection is the following formula (a special case of the "Chinese remainder theorem"):

a ≡ b (mod n) ⇔ a ≡ b (mod p) ∧ a ≡ b (mod q)   (*)

One sees this easily: According to the definition of '≡' this is equivalent to

n | (a − b) ⇔ p | (a − b) ∧ q | (a − b),

and this is quite clear, because p·q is the prime factor decomposition of n.^27

26 This is a general theorem from group theory [Lips 81, Waer 71]: For every element a in a finite group G: a^{|G|} = 1. The proof is not difficult. The two basics are:

1. for every subgroup H: |H| divides |G|.

2. the powers of an element, thus a, a², …, form a subgroup: there is a first power a^v = 1, and from there on the sequence repeats cyclically. v is called the order of a. This subgroup has exactly v elements, and from 1. it follows that v divides |G|.

27 It is easy to see that the Chinese remainder theorem is valid not only for n = product of two different prime numbers, but also for n = product of arbitrarily many pairwise relatively prime natural numbers.



If one now wants to calculate, e.g., y := f(x) (mod n), but the calculation of f(x) is only easy modulo prime numbers, then the owner of the secret can determine y_p := f(x) (mod p) and y_q := f(x) (mod q), and the following holds:

y ≡ f(x) (mod n) ⇔ y ≡ y_p (mod p) ∧ y ≡ y_q (mod q).

We just have to see whether we can efficiently compose y from y_p and y_q. That is done by the "Chinese remainder algorithm (CRA)" (the actual Chinese remainder theorem just states that such a y exists for arbitrary y_p, y_q):

First one determines with the extended Euclidean algorithm integers u, v with

u·p + v·q = 1.^28

The claim is that

y = CRA(y_p, y_q) := u·p·y_q + v·q·y_p (mod n).

That means we have to show that y ≡ y_p (mod p) ∧ y ≡ y_q (mod q). One sees this best with the following table:

                        (mod p)           (mod q)
    u·p                    0                 1
    v·q                    1                 0
    u·p·y_q + v·q·y_p   0·y_q + 1·y_p     1·y_q + 0·y_p
                          ≡ y_p             ≡ y_q

(The upper two lines result from u·p + v·q = 1 and imply that u·p and v·q in Z_n represent something like two "basis vectors" if one composes Z_n from Z_p and Z_q. The computation of u·p and v·q can happen once and for all when the secret is produced.)

3.4.1.4 Squares and roots in general

From §3.4.1.3 we know how a secret p, q can help to calculate a function f (mod n) when it is computed much more easily modulo prime numbers. Such a function will be modular root extraction. One will need the secret for extracting roots (mod n), while anyone else can calculate squares on the basis of n alone, thus executing the inverse operation. First we regard roots and squares in residue class rings more exactly.

• They are defined exactly as one would imagine, except that one mostly only cares about invertible elements:

QR_n := {x ∈ Z*_n | ∃y ∈ Z*_n : y² ≡ x (mod n)}.

The x's from this set are called quadratic residues (thus what in N the square numbers are), and an appropriate y is called a root of x.

28 Two different prime numbers have gcd 1, of course. It is easy to see that the CRA also works if p and q are not prime but still relatively prime. By repeated application of the CRA the Chinese remainder theorem can also be covered for products consisting of more than two factors.



• y is a root of x ⇒ −y is also a root of x

holds as usual, because (−1)² = 1.

• Other laws known from N or R are not valid in general. For instance a number can well have more than two roots modulo a composite n; for example (mod 8): 1² ≡ 1 ∧ 3² ≡ 1 ∧ 5² ≡ 1 ∧ 7² ≡ 1.

• The quadratic residues form at least a (multiplicative) group: If x1, x2 ∈ QR_n and y1, y2 are roots of them, then roots of x1·x2 and of x1^{−1} are exactly y1·y2 and y1^{−1}:

(y1·y2)² = y1² · y2² = x1·x2  and  (y1^{−1})² = (y1²)^{−1} = x1^{−1}.

3.4.1.5 Squares and roots (mod p)

While modulo an arbitrary n only little can be said about squares and roots (see §3.4.1.4), when calculating modulo prime numbers everything is prettily arranged, because Z_p is a field.^29

• Above all, every number has at most two roots (because in fields a polynomial of degree 2 has at most 2 zeros).^30 We also know that ±y are always roots together. In general y ≢ −y holds too, except for y ≡ 0 or p = 2. (For an even modulus, y ≡ p/2 would be a further exception; for odd moduli, also composite ones, there is none.) Thus for p > 2 all quadratic residues have exactly 2 roots.

• From this it follows that for p > 2 exactly half of all numbers ≢ 0 (mod p) are quadratic residues, i.e.

|QR_p| = (p − 1)/2,

because the squaring function is exactly 2-to-1. One can picture it in the following way:

x:   0   1   2   …   (p−1)/2      −(p−1)/2     …   −2   −1
x²:  0   1   4   …   ((p−1)/2)²   ((p−1)/2)²   …    4    1

(As a reminder: Because (mod p) all numbers have at most 2 roots, see above, the values in each of the two halves of the lower row are pairwise different; otherwise a number arising several times there, i.e. that square, would have more than 2 roots (mod p).)

29 Clear: every element except 0 has a multiplicative inverse, as shown above.

30 If you would rather have it proven explicitly: the proof is indirect. We assume that there is a number with more than 2 roots. Then there are in particular two essentially different roots, i.e. two roots which differ in more than the sign. (Were there no two essentially different roots, there would be at most two roots at all.) We call the two essentially different roots x and y. Then x² ≡ y² (mod p) and x ≢ ±y (mod p). According to the definition of congruence, p | (x² − y²). This is equivalent to p | (x + y)·(x − y). Because p is a prime number, p must divide either x + y or x − y. But then x ≡ ±y (mod p) would hold. Contradiction!



• First, as a pure notation for distinguishing squares and non-squares, we define the Legendre symbol (later, in a generalization, called Jacobi symbol) for all x ∈ Z*_p as

(x/p) := +1 if x ∈ QR_p, and (x/p) := −1 otherwise.

• A fast algorithm for the square test is provided, for p > 2, by the Euler criterion:

(x/p) ≡ x^{(p−1)/2} (mod p).^31

(So far we would have had to try (p−1)/2 possible roots one after the other in order to determine whether x is a square, which would be exponential in the length l of p; the exponentiation, however, runs in O(l³), see §3.4.1.1.)

• Sometimes one needs that the Legendre symbol is multiplicative, i.e.

((x·y)/p) = (x/p) · (y/p). (3.5)

This follows immediately from the Euler criterion.

3.4.1.6 Squares and roots (mod p) for p ≡ 3 (mod 4)

Some things become easier when one supposes p ≡ 3 (mod 4), as most of the following cryptographic systems do.

• First there is a simple algorithm for extracting roots:^32 (so far we would have had to try (p−1)/2 possible roots one after the other.) For each x ∈ QR_p it holds that:

w := x^{(p+1)/4} is a root of x (mod p).

Firstly this formula is well-defined thanks to p ≡ 3 (mod 4): (p+1)/4 lies in N. Secondly we have to prove that w² ≡ x (mod p):

w² ≡ (x^{(p+1)/4})² ≡ x^{(p+1)/2} ≡ x^{(p−1)/2} · x ≡ 1 · x,

where the last equation follows directly from the Euler criterion. Moreover we know that the root w obtained this way is itself a square again: it is a power of the square x. That becomes important when we want to extract roots several times in succession, e.g. in order to extract 4th, 8th, etc. roots.

31 If one believes Fermat's little theorem (see §3.4.1.2), one can prove this easily: For prime numbers φ(p) = p − 1, thus (mod p): x^{p−1} ≡ 1. Thus x^{(p−1)/2} is a root of 1, hence ±1 (it lies then at least in the range of values of the Legendre symbol). a) If x is a square, say x ≡ y², then x^{(p−1)/2} ≡ y^{p−1} ≡ 1. b) The equation x^{(p−1)/2} ≡ 1, as a polynomial equation in the field Z_p, can have at most (p−1)/2 solutions. Because of a) we already know that all quadratic residues are solutions, and we know that there are (p−1)/2 quadratic residues. Thus there cannot be further solutions. For the quadratic non-residues x^{(p−1)/2} ≢ 1 therefore holds; only x^{(p−1)/2} ≡ −1 remains.

32 Modulo other prime numbers, too, extracting a root is easy in the sense of fast; the algorithm is much more complicated [Kran 86].



• It holds that

−1 ∉ QR_p,

because if p = 4r + 3 (with r ∈ N_0), then according to the Euler criterion:

(−1/p) ≡ (−1)^{(p−1)/2} ≡ (−1)^{2r+1} ≡ −1 (mod p).^33

• For the two roots ±w of an x this means the following: If w is the root found with the above method, thus w ∈ QR_p, then

−w ∉ QR_p,

because if both roots were quadratic residues, then (because QR_p is a group) −1 ≡ (−w)·w^{−1} ∈ QR_p would follow; contradiction! Thus every square x has exactly two 4th roots, viz. the two roots ±w′ of its square root w ∈ QR_p, and so on.

3.4.1.7 Squares and roots (mod n) with knowledge of p, q

(For n = p·q with p, q prime numbers and p ≠ q.) Now we put our findings from §3.4.1.3, §3.4.1.5 and §3.4.1.6 together in order to obtain operations which only owners of the secret can execute: they can now test (mod n) whether x is a square and, if necessary, extract a root:

• Square test: It holds that

x ∈ QR_n ⇔ x ∈ QR_p ∧ x ∈ QR_q.

(The owner of p, q can evaluate the two right-hand conditions with the Euler criterion.) Proof:

(B1) If x ∈ QR_n, let w be a root, i.e. w² ≡ x (mod n). Then w² ≡ x (mod p) and w² ≡ x (mod q) hold all the more (because of (*)).

(B2) Conversely, if x ∈ QR_p ∧ x ∈ QR_q, say w_p² ≡ x (mod p) and w_q² ≡ x (mod q), then according to the Chinese remainder theorem there is a w with

w ≡ w_p (mod p) ∧ w ≡ w_q (mod q).

Therewith, again with (*),

w² ≡ w_p² ≡ x (mod p) ∧ w² ≡ w_q² ≡ x (mod q) ⇔ w² ≡ x (mod n).

33 Therefore the choice p ≡ 3 (mod 4) is essential: otherwise (for odd p) p would be ≡ 1 (mod 4), and therewith −1 ∈ QR_p according to the Euler criterion.



• Every x ∈ QR_n has exactly 4 roots: As we know, x lies in QR_p and in QR_q and has 2 roots ±w_p resp. ±w_q there. Each of the 4 combinations can be composed into one root according to (B2): w := CRA(±w_p, ±w_q).

• Now we extend the Legendre symbol to the Jacobi symbol modulo n, first again as pure notation:

(x/n) := (x/p) · (x/q).

It is not the case that the Jacobi symbol of x (mod n) is +1 if and only if x ∈ QR_n, as with prime numbers, but rather iff either (x ∈ QR_p ∧ x ∈ QR_q) or (x ∉ QR_p ∧ x ∉ QR_q). (The first condition alone corresponds to x ∈ QR_n.) Thus all squares have Jacobi symbol +1, but not the other way round.^34

34 Because mod p and mod q each half of all numbers ≢ 0 are quadratic residues, it follows that the squares are in total exactly 1/4 of Z*_n, but that exactly half of Z*_n has Jacobi symbol +1.

The Jacobi symbol is multiplicative, i.e.

((x·y)/n) = (x/n) · (y/n).

This follows immediately from its definition and the multiplicativity of the Legendre symbol.

3.4.1.8 Squares and roots (mod n) for p ≡ q ≡ 3 (mod 4), with knowledge of p, q

(For n = p·q with p, q prime numbers ≡ 3 (mod 4) and p ≠ q.)

• Extracting roots is easy: The roots ±w_p and ±w_q can be determined with the method explained in §3.4.1.6, and from each combination w can be calculated as

w := CRA(±w_p, ±w_q).

• −1 is a non-square with Jacobi symbol +1:

(−1/n) = (−1/p) · (−1/q) = (−1)·(−1) = +1.

3.4.1.9 Squares and roots (mod n) without knowledge of p, q

For root extraction and the square test to be good secret operations, we still have to prove or assume that these operations are difficult without knowledge of p and q. For root extraction one can prove it; the following theorem, useful also in itself, helps:

• If someone knows two essentially different roots w, w′ of the same number x (mod n), i.e. w ≢ ±w′, then he can certainly factorize n:

w² ≡ w′² ≡ x (mod n) ⇒ n | (w² − w′²) = (w + w′)·(w − w′).

On the other hand, w ≢ ±w′ means that n divides neither (w + w′) nor (w − w′). Hence each of the two factors must contain exactly one of the prime factors of n. One obtains one of them, e.g., as

gcd(n, w + w′).

• Now a proof sketch follows that, under the factoring assumption, root extraction (mod n) is difficult too.^35 This can be used as a model for all proofs of this kind.

It must be shown: "factoring is difficult ⇒ root extraction is difficult". One shows the equivalent contraposition "root extraction is easy ⇒ factoring is easy". Thus one assumes that a root extraction algorithm W exists, and from this one derives a good factoring algorithm F. To this end one constructs F explicitly and lets F use W as a subprogram, cf. figure 3.16.^36

program F
    ...
    subprogram W [black box];
    ...
begin F
    ...
    call W
    ...
    call W          (multiple times, but polynomially often)
    ...
end F.

Figure 3.16: Structure of a Turing reduction from factoring F to root extraction W

Our special F looks like this:

begin F
    choose w ∈ Z*_n randomly; set x := w²;
    extract a root: w′ := W(x, n);
    if w ≢ ±w′ then begin
        p := gcd(n, w + w′);
        q := n div p
    end
end.

35 Formally the statement means (similar to the factoring assumption): For each (probabilistic) polynomial algorithm W and each polynomial Q: There exists an L such that for all l ≥ L: If p, q are chosen as random prime numbers of length l, n := p·q, and x is chosen randomly from QR_n, then

W(W(n, x) is a root of x) ≤ 1/Q(l).

36 The difference to the so-called Karp reduction [GaJo 80, Paul 78] used in complexity theory is that here W may be called arbitrarily often in F (of course only polynomially often), while there only one call is permitted.




It is clear that F, using W, is polynomial too.

The other assertion is: If W finds a root for a certain n with probability ε, then F factorizes the same n with probability ε/2.^37 From the point mentioned before it follows that F factorizes certainly if

1. W actually finds a root and

2. w′ ≢ ±w.

The first point holds with probability ε. Thus we have to show that the root w′ that has been found is, with probability 1/2, essentially different from the w known outside. This is intuitive because W does not know which root is known outside. More exactly: The result w′ of W depends only on the inputs to W, i.e. (n, x), and possibly on internal random decisions of W. If the four roots of x are ±w′ and ±w″, then W would output w′ regardless of whether outside w = ±w′ or w = ±w″ was selected, because one cannot see it from x. Because one chooses randomly, all 4 values appear with the same probability as w. With probability 1/2, w ≠ ±w′, which was to be shown.^38

37 We can improve the probability by executing F σ times one after another; then the probability is ε(1 − 2^{-σ}), and for constant σ the algorithm is still polynomial.

38 Formally: The assumption that W is a "good" algorithm for root extraction is just the negation of footnote 35 ("root extraction is difficult"). It means that there is a polynomial Q such that for infinitely many l: If p, q are chosen as random prime numbers of length l, n = p·q, and x randomly from QR_n, then

W(W(n, x) is a root of x) > 1/Q(l). (3.6)

We have to show that F contradicts the factoring assumption, i.e. that there is a polynomial R such that for infinitely many l: If p, q are chosen as random prime numbers of length l and n = p·q, then

W(F(n) = (p, q)) > 1/R(l). (3.7)

This is shown for R := 2Q and the same l's as in (3.6): Let such an l be given. One sees from the description of F that there x is chosen randomly from QR_n (because each x has exactly 4 roots and arises from 4 of the w's with the same probability). When W is called, all preconditions of (3.6) hold, so (3.6) is valid. Now it follows, as in the main text above, that in half of all cases w′ ≢ ±w and therewith F is successful. Altogether this holds with probability 1/(2·Q(l)), as claimed. So we have shown formally: "root extraction is easy ⇒ factoring is easy".



• One assumes that it is also difficult to test whether x ∈ Z*_n is a square, but this could not be proven so far. One therefore makes a quadratic residuosity assumption (QRA), similar to the factoring assumption. Thereby one has to make two restrictions:

1. There is a fast algorithm for the computation of Jacobi symbols even if p and q are not known (using the so-called law of quadratic reciprocity). It is not used in the following, but it means that someone who is presented an x with (x/n) = −1 can determine that x ∉ QR_n. Therefore one limits the assumption to the x with (x/n) = +1.

2. For random x with (x/n) = +1 one can guess correctly with probability 1/2 whether x ∈ QR_n by purely random guessing. Therefore one only requires that no polynomial (guessing) algorithm G can do substantially better.

qr is the predicate of whether something is a quadratic residue, i.e.

qr(n, x) := true if x ∈ QR_n; qr(n, x) := false otherwise.

The assumption runs now as follows

Assumption It is valid for each (probabilistic) algorithm G and each polynomial Q :It exists a L, so that for all l ≥ L is valid: If p, q are chosen as random primenumbers of the length l, n := p ∗ q, and x is randomly under the remainderclasses with Jacobi symbol +1, then it is valid:

W (G(n, x)) = qr(n, x) ≤ 1

2+

1

Q(l).

• Choosing a random square is easy: One simply chooses an arbitrary w ∈ Z*_n and squares it. Since every square x has the same number of roots (for specialists: exactly 4), every square is chosen with the same probability.

3.4.2 Requirements concerning the Pseudo-Random Bit Generator

The following is described in more detail in [BlBS 86].

What is a Pseudo-Random Bit Generator (PRBG)?
A Pseudo-Random Bit Generator (PRBG) does the following: It generates a long pseudo-random bit sequence from a randomly chosen initial value (seed). That means (cp. figure 3.17):

• A key- and seed-generating algorithm belongs to it. It creates a key n and a seed s of length l from the security parameter l. (A key or a seed alone would be enough at this point; but it is of interest for the later use in asymmetric systems, where the key n may be public while the seed s has to remain secret the whole time.)



[Figure lost in extraction; only its labels survive. It shows: from the security parameter l, the generation of key and initial value produces, inside the secret zone, the key n and a genuine random initial value s; from (n, s) the PRBG produces a long sequence of bits b0 b1 b2 …]

Figure 3.17: Pseudo random number generator

• The actual PRBG is an algorithm which generates a bit sequence b0 b1 b2 … (of arbitrary length) from the key n and the short seed s. Though only a polynomially long initial piece of this sequence may be used.

• The PRBG is deterministic, i.e. from the same initial value and key the same bit sequence is generated every time.

• The PRBG is efficient, i.e. computable in polynomial time.

• The PRBG is secure, i.e. its output is not distinguishable from genuine random sequences by any polynomial test. (See below for more detail.)

Usage as pseudo-one-time-pad
An important application is to use the pseudo-random bits instead of a one-time pad (see §3.2) as a symmetric system of secrecy (see figure 3.18): Instead of a genuine random sequence, a pseudo-random sequence is added bitwise (mod 2) to the plain text. This idea goes back to Vigenère (1523–1596), and in his honor the pseudo-one-time pad is sometimes called Vigenère cipher.

Only the short key and initial value of the PRBG must be exchanged instead of the long one-time pad.

Demands on the pseudo-one-time-pad: For every polynomially limited attacker the pseudo-one-time pad is supposed to be as secure as the one-time pad, up to an infinitesimal portion of cases.

The idea is: if no polynomial test (thus no polynomial attacker either) can distinguish pseudo-one-time pads from the genuine one-time pad, then for a polynomial attacker it can make no difference whether one encrypts with a pseudo-one-time pad or a genuine one-time pad.



[Figure lost in extraction; only its labels survive. It shows: key generation creates, inside the secret zone, the secret key (n, s) for both sides; encryption: generate b0 b1 b2 … and add them (⊕) to the cleartext x = x0 x1 x2 …, giving the ciphertext x0⊕b0, x1⊕b1, …; decryption: generate the same bits and add them again.]

Figure 3.18: Pseudo-one-time-pad

Demands on the PRBG

"Strongest" demand: The PRBG passes each probabilistic polynomial-time test T.

"Passes" means here: The test cannot distinguish sequences produced by the PRBG from genuine random sequences with significant probability.

A probabilistic polynomial-time test T is a probabilistic, polynomially time-limited algorithm that assigns to each input from {0, 1}* (thus to each finite sequence it has to test) a real number from [0, 1]. (The value generally depends on the random decisions in T.)

The basic idea is now that not each individual sequence is regarded (because each individual finite sequence can be produced with the same probability by a genuine random generator, i.e. talking about the randomness of an individual sequence is senseless for cryptography). Instead one compares the average value of the test on the PRBG sequences with the average value on genuine random sequences. If on average the same results are achieved on both kinds of sequences, the test cannot help to distinguish the PRBG sequences from the genuine random sequences.

Therefore the following is defined:

• α_m is the average value T assigns to a genuine random m-bit sequence (i.e. the expected value under uniform distribution over all m-bit sequences), and

• β_{l,m} is the average value T assigns to an m-bit sequence generated by the PRBG with security parameter l (where the average is taken over the keys and initial values created with security parameter l).

Then, more formally (i.e. with the formal parameter m instantiated by an arbitrary polynomial Q(l)):



The PRBG passes T if and only if: For all polynomials Q and all t > 0: for all sufficiently large l, β_{l,Q(l)} lies in the interval α_{Q(l)} ± 1/l^t.

The following 3 demands are equivalent to this "strongest" demand (but easier to prove): For each produced finite initial bit sequence from which an arbitrary (the right or the left) bit is missing, every polynomially time-limited algorithm P (predictor) can only guess the missing bit (i.e. it cannot forecast better than with probability 1/2 ± 1/l^t). Ideas of the proofs (indirect proofs):

1. (Simple part) Each predictor can also be used as a test: One takes all bits of the sequence to be tested except one, gives them as input to the predictor and compares the result with the correct bit from the sequence. The test result is 1 if the predictor has guessed correctly, otherwise 0. The value α_m is 1/2, because on genuine random sequences each predictor can guess correctly only with probability 1/2. If the predictor thus guesses substantially differently (better) on PRBG sequences, the PRBG does not pass this test.

2. (More difficult part) To show: From each of the 3 weaker demands the "strongest" follows.

We assume that a test T exists which the PRBG does not pass, and we show that then there is a predictor which forecasts a bit of the PRBG too well. This is shown here for a left bit:

For some t > 0, a polynomial Q and infinitely many l, β_{l,Q(l)} lies outside, say above, the interval α_{Q(l)} ± 1/l^t.

One now submits to T bit sequences consisting of 2 parts of lengths j + k = Q(l): the left part genuinely random, the right part generated by the PRBG. One always compares two adjacent types of sequences:

A_{j+1} = {r_1 … r_j r_{j+1} b_1 … b_k}   yields test results nearer to α_{Q(l)},
A_j     = {r_1 … r_j b_0 b_1 … b_k}       yields test results farther away, e.g. higher

(here r_1 … r_{j+1} genuinely random, b_0 b_1 … b_k generated by the PRBG). This must hold for at least one j, because A_{Q(l)} are the genuine random sequences and A_0 are the PRBG sequences.

From this one constructs a predictor for the bit b_0 preceding b_1 … b_k as follows: One additionally chooses a genuine random sequence r_1 … r_j and applies the test T twice, viz.

T to {r_1 … r_j 0 b_1 … b_k}: the result is called α_0,
T to {r_1 … r_j 1 b_1 … b_k}: the result is called α_1.

Guess b_0 = 0 with probability 1/2 + 1/2·(α_0 − α_1).

(More detailed: [BlBS 86].)



3.4.3 The s2 (mod n)–generator

The s2 (mod n)–generator is a certain PRBG which works with squares (and the difficulty of extracting roots and/or deciding whether something is a square). (In the original publication [BlBS 86] it is called x2 (mod N)–generator. Because x is used here as a variable designator for the plain text, and the capital N is used in many other papers instead of the small n as the designator for the composite modulus consisting of two large prime numbers, this notation is introduced here for consistency.)

Definition of the s2 (mod n)–generator:

As key, n = p·q is selected with |p| ≈ |q| and p, q prime numbers ≡ 3 (mod 4). (The absolute-value bars mean here "length in bits". p and q are randomly and independently selected, whereby the conditions mentioned in §3.4.1 concerning p and q must be kept – otherwise n could be factorized.)

Select randomly the initial value s ∈ Z*_n = {a | 0 < a < n, gcd(a, n) = 1}.

Generation of the sequence: Calculate step by step the internal states

s_0 := s² (mod n), s_{i+1} := s_i² (mod n),

and form in each step b_i := s_i (mod 2) (= last bit of s_i).

Output: b_0 b_1 b_2 …

(As a mnemonic for the designator s_i: internal state = status = condition of the generator.)

Example: n = 3·11 = 33, s = 2:
s_i: 4 16 25 31 4 …
b_i: 0 0 1 1 0 …
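The generator is a few lines of code; a sketch reproducing the example (names ours):

    def s2_mod_n_bits(s, n, count):
        """The s² (mod n) generator: output the last bit of each internal state."""
        bits = []
        si = (s * s) % n               # s0 := s² (mod n)
        for _ in range(count):
            bits.append(si % 2)        # bi := last bit of si
            si = (si * si) % n         # s_{i+1} := si² (mod n)
        return bits

    assert s2_mod_n_bits(2, 33, 5) == [0, 0, 1, 1, 0]   # the example above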

Security considerations
The security of the s2 (mod n)–generator can be proven under the factoring assumption [ACGS 84, ACGS 88]. The slightly simpler proof under the QRA is shown here (cp. §3.4.1.9):

Assertion: Under the QRA the s2 (mod n)–generator is an unpredictable (cryptographically strong) PRBG.

According to the paragraph above concerning the demands on a PRBG, it suffices to prove that there is no predictor which can forecast well, from the remaining bits, the left bit of the sequences the s2 (mod n)–generator produces. More exactly:

a) A predictor P[•, •] is a probabilistic, polynomially time-limited algorithm that outputs 0 or 1 on input n, b_1 … b_k (b_i ∈ {0, 1}).

b) One says P has an ε-advantage for a certain n when predicting k-bit sequences of the s2 (mod n)–generator to the left if and only if:

(Σ_{s ∈ QR_n} W(P[n, b_1(s) … b_k(s)] = b_0(s))) / (φ(n)/4) > 1/2 + ε



with b_i(s) = last bit of s_i when the generator is started with s. (Note that φ(n)/4 is simply the number of quadratic residues: for forecasting to the left, s must be a quadratic residue – otherwise one cannot extract roots. In contrast, s may be arbitrary in Z*_n when the sequence is generated to the right.)

c) The s2 (mod n)–generator is secure if and only if for each predictor P, each constant 0 < δ < 1, each polynomial Q and each natural number t: as soon as l is sufficiently large, P has, for all numbers n of length l except a δ-portion, at most an advantage of 1/l^t when forecasting Q(l)-bit sequences to the left.

Proof (sketch):
Assumption: There is a predictor P for the s2 (mod n)–generator with an ε-advantage for numbers of length l. (To complete the proof we would have to carry along all the quantifiers over ε as stated in c).)

First step: P can be efficiently and uniformly transformed into a procedure P* that, given s_1 from QR_n, guesses the last bit of s_0 with an ε-advantage:

s_1 is given. Generate b_1 b_2 b_3 … with the s2 (mod n)–generator. Now apply P to this sequence. Then P guesses the bit b_0 with ε-advantage. This is exactly the desired result of P*.

Second step: Now the procedure P* is given. From this a procedure R can be constructed efficiently and uniformly which guesses with ε-advantage whether a given s* with Jacobi symbol +1 is a square:

s* is given. Set s_1 := s*². Apply P* to s_1. P* thus calculates the last bit of s_0 with an ε-advantage. Thereby s* and s_0 are roots of s_1, and in fact s_0 is just the one that is itself a square.^39 Then:

s* ∈ QR_n ⇔ s* = s_0.

By means of the last bits, that is the known b* of s* and the guessed b_0 of s_0, it is now to be decided whether s* = s_0. The following is obvious: if s* = s_0 then their last bits must be equal. It will now be shown that conversely from s* ≠ s_0 it follows that the last bits are different: If s* ≠ s_0 then s* ≡ −s_0 (mod n) (because of the equal Jacobi symbols), thus s* = n − s_0 in Z. But n is an odd number, thus s* and s_0 have different last bits.

Thus the procedure R ends this way: If and only if b* = b_0 it is guessed that s* is a square; therewith one guesses as well as P*.

Third Step: The so constructed procedure R stands in contradiction to QRA.

Annotation: One can show that it is possible to take more than one bit per squaring step, namely O(log(l)) bits.

³⁹This was already shown in exercise 3-9 k): That only one of the 4 roots can be a square itself follows from p ≡ q ≡ 3 (mod 4). Then only one of the two roots ±w_p of s_1 (mod p) is a square, as shown in §3.4.1.6, and likewise for ±w_q. According to §3.4.1.7 the 4 roots of s_1 are the combinations of ±w_p and ±w_q. Finally, a combination is a square if and only if both parts are squares (this follows from §3.4.1.7, too).


3.4.4 s² (mod n)-generator used as an asymmetric system of secrecy

One can use the s² (mod n)-generator as an asymmetric system of secrecy (cp. figure 3.19):

• The secret key consists of the factors p and q of n.

• The public key is n.

• A sender and a receiver no longer exchange a secret start value s; instead, for each message a new random start value s is chosen by the sender. (Therewith the encryption becomes indeterministic, i.e. equal plain texts are not necessarily encrypted to equal cryptotexts, cp. §3.1.1.1 annotation 2.) It is exploited that one can extract roots (mod n) with the help of p and q, so that the receiver is able to decrypt nevertheless.

• The sender sends, besides the previous cryptotext

   x_0 ⊕ b_0, x_1 ⊕ b_1, …, x_k ⊕ b_k,

also the value

   s_{k+1},

i.e. just the value whose last bit would have become b_{k+1} had it been needed.

• The receiver decrypts this way: he calculates the sequence of the s_i backwards from s_{k+1}:

   s_{i-1} := the root of s_i which is itself a square
            = CRA(s_i^{(p+1)/4} (mod p), s_i^{(q+1)/4} (mod q))⁴⁰

Then the receiver can determine the last bits b_i of the s_i and therewith receives the plain text bits x_i.

Secure against passive attacks: As above, one can prove that the procedure is secure against passive attackers. (One only has to consider additionally that the attacker sees each s_{k+1}.)

Insecure against active attacks: The aim of an active attack is to decrypt a cryptotext x_0 ⊕ b_0, x_1 ⊕ b_1, …, x_k ⊕ b_k, s_{k+1} which has been observed by an attacker. For this purpose the attacker gets arbitrary cryptotexts decrypted that are different from this one.

A simple chosen ciphertext-plaintext attack can happen this way (see figure 3.20): the attacker squares s_{k+1} and receives s_{k+2}. He sends

   x_0 ⊕ b_0, x_1 ⊕ b_1, …, x_k ⊕ b_k, 1, s_{k+2}


Figure 3.19: s² (mod n)-generator used as an asymmetric system of secrecy

Figure 3.20: Chosen ciphertext-cleartext attack on the s² (mod n)-generator used as an asymmetric system of secrecy

to the victim; that is just the observed cryptotext up to the inner state s_{k+1}, followed by an inserted bit, here 1, and then the "next" inner state s_{k+2}. The first k+1 bits that the attacker receives as answer are the plain text he is interested in.

This simple attack could be detected by a comparison and, in addition, defeated by the victim not answering. But it can be varied in two different ways and is then much more difficult to recognize.


1. The attacker replaces x_0 ⊕ b_0, x_1 ⊕ b_1, …, x_k ⊕ b_k by k+1 random bits (a code sketch follows below). From the answer of the victim he is able to calculate the pseudo-random bit sequence, and therewith he can decrypt the original cryptotext.

2. The attacker squares j times instead of once, thus calculating s_{k+1+j}. Accordingly he has to insert j bits whose values in the answer he is not interested in. Even when the answers only allow (k+1)-bit messages, the attacker merely has to renounce the first 1+j bits of the message.
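A sketch of variant 1 in Python; oracle_decrypt is a hypothetical stand-in for the victim, who decrypts any cryptotext that differs from the observed one:

    import secrets

    def recover_plaintext(observed_bits, s_next, oracle_decrypt):
        """Variant 1: substitute random bits, learn the key stream, decrypt."""
        random_bits = [secrets.randbits(1) for _ in range(len(observed_bits))]
        # the victim answers with random_bits[i] XOR b_i ...
        answer = oracle_decrypt(random_bits, s_next)
        # ... so XORing the random bits back yields the key stream b_i,
        keystream = [a ^ r for a, r in zip(answer, random_bits)]
        # which decrypts the originally observed cryptotext
        return [c ^ b for c, b in zip(observed_bits, keystream)]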

Much more dangerous than the message-oriented attacks explained here is that within an active attack even a complete break is possible, i.e. the factorization of the modulus. This cannot be shown here, but it can be read in [GoMT 82].

3.5 GMR: A strong cryptographic signature system

GMR, named after the initial letters of its inventors' names Shafi Goldwasser, Silvio Micali and Ronald L. Rivest, is, historically seen, the first practical and strong cryptographic signature system. It can be shown that even with an adaptive active attack it is impossible (if the factorization assumption holds) to fake even a single new signature (no matter how senseless the message). (Cp. §3.1.3: it is secure against the weakest success c2 even under the strongest attack b), adaptive.) It was published in [GoMR 88].

The proof of security is relatively complex; that is why only the workings of the system and some essential parts of the proof are pictured⁴¹. You could also derive some properties of GMR from the security against adaptive active attacks; but here a simple bottom-up strategy is chosen:

1. Permutation pairs are introduced for which only the one who knows the cryptographic secret can find collisions. Such pairs are called collision resistant⁴².

2. With the help of a collision-resistant permutation pair it is shown how to sign a one-bit message. For this, parallel to the permutation pair, an authentic reference is needed that is published as part of the public key.

3. Many collision-resistant permutation pairs are constructed out of one.

4. It is explained how to authenticate, in a tree-like fashion, many references out of one, in order to sign many messages with many collision-resistant permutations.

5. Some efficiency improvements are shown.

⁴¹By courtesy of [FoPf 91]
⁴²In the literature you can find an older, inappropriate term: instead of collision resistant (which means it is really hard to find collisions), collision free was used – which of course cannot be meant literally.


3.5.1 Basic function: collision resistant permutation pairs

The basic building block of GMR are the so-called claw-resistant permutation pairs with trap door (collision-resistant permutation pairs with secret)⁴³.

A collision of two permutations f_0, f_1 with the same domain of definition is a pair (x, y) with f_0(x) = f_1(y) (fig. 3.21).

Figure 3.21: Collision of a permutation pair

"Collision resistant" means that under a cryptographic assumption it is impossible to find such collisions if you do not know the secret (even though they do exist: because f_0 and f_1 have the same domain of definition, for every z there is exactly one x with f_0(x) = z and exactly one y with f_1(y) = z).

But the owner of the secret should be able to compute the inverse of the functions.

Construction under the factorization assumption: You can construct collision-resistant permutation pairs under the factorization assumption with the usual secrets (cp. §3.4.1): The secret are two randomly and stochastically independently chosen big primes p and q with the additional condition:

p ≡ 3 mod 8, q ≡ 7 mod 8

As explained in §3.4.1 ("About prime number creation") there are sufficiently many primes that meet these requirements.

In essence the permutations are squaring functions, of which we already know that the secret helps to invert them (that is, to extract roots).

But first of all the squaring function is not a permutation but a mapping of 4 roots onto the same square (cp. §3.4.1.7). That is why we have to limit the domain of definition so that only one root is in it. Therefore:

   D_n = {x ∈ Z*_n | (x/n) = 1, x < n/2}⁴⁴

⁴³In the original publication the term claw-"free" (collision-"free") permutation pairs with trap door was used, which cannot be meant literally. That is why I use the term collision resistant.

⁴⁴That in fact exactly one of the 4 roots has this quality follows from p ≡ q ≡ 3 mod 4:

1. §3.4.1.8 shows that the 4 roots are CRA(±w_p, ±w_q), where ±w_p, ±w_q are the roots mod p and mod q.

2. Because of §3.4.1.6: one of ±w_p has the Jacobi symbol +1, the other −1. The same applies for ±w_q. Let w.l.o.g. w_p, w_q be those with Jacobi symbol +1.

3. If z = CRA(x, y): according to the Jacobi symbol's definition, (z/n) = (z/p)·(z/q). The Jacobi symbol of z modulo p only depends on the residue class mod p, therefore (z/p) = (x/p) and (z/q) = (y/q). That is why (z/n) = (x/p)·(y/q).

4. Therefore the Jacobi symbols of w := CRA(w_p, w_q) and −w := CRA(−w_p, −w_q) are both +1, and those of the other 2 roots are −1. Of w and −w, exactly one is < n/2.


The sign "<" on numbers modulo n refers to the standard representation 0, …, n−1.

Secondly, we have to take care that the result is within the domain of definition. But that is easy (cp. §3.4.1.8): the result y of the squaring is a square; its Jacobi symbol is +1. If y ≥ n/2 you take −y. This is < n/2 and its Jacobi symbol is +1 too, because (−1/n) = +1.

Thirdly, we need a second permutation with properties similar to x². For this you take 4x² = (2x)². This gives the reason for the following choice:

   f_0(x) :=  x² mod n,      if (x² mod n) < n/2
              −x² mod n,     otherwise

   f_1(x) :=  4·x² mod n,    if (4·x² mod n) < n/2
              −4·x² mod n,   otherwise
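A sketch of the domain D_n and the pair (f_0, f_1) in Python; jacobi is a textbook algorithm for the Jacobi symbol, and all names are illustrative:

    def jacobi(a, n):
        """Jacobi symbol (a/n) for odd n > 0 (standard iterative algorithm)."""
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:            # pull out factors of 2
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a                  # quadratic reciprocity
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0   # 0 if gcd(a, n) > 1

    def in_Dn(x, n):
        """D_n = {x in Z*_n | (x/n) = +1, x < n/2}."""
        return 0 < 2 * x < n and jacobi(x, n) == 1

    def f(b, x, n):
        """f_0 (b = 0) squares, f_1 (b = 1) squares and multiplies by 4;
        the result is mapped back below n/2 by negating if necessary."""
        y = (4 if b else 1) * x * x % n
        return y if 2 * y < n else n - y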

Proof (draft) that these pairs are collision-resistant permutation pairs: That f_0 maps D_n onto itself was already shown above, and analogously for f_1.

For the collision resistance it will be shown that from every collision one can factorize. For that you need the (here unproven) lemma that for p ≡ 3 mod 8, q ≡ 7 mod 8:

   (2/p) = −1 and (2/q) = +1

(second supplement to the law of quadratic reciprocity [ScSc 73]). From that follows:

   (2/n) = −1.

Assume somebody has found a collision x, y with f_0(x) = f_1(y). Then

   x² ≡ ±(2y)² mod n

The sign "−" is not possible because −1 is not a square mod n. So x and 2y are roots of the same number. According to §3.4.1.9 we can definitely factorize if they are substantially different:

   x ≠ ±2y

That follows from the fact that they have different Jacobi symbols: because x, y ∈ D_n, x and y have the Jacobi symbol +1, and because of §3.4.1.8:

   (±2y/n) = (±1/n) · (2/n) · (y/n) = 1 · (−1) · 1 ≠ 1 = (x/n)

Inversion with the secret works like this: if x = f_0^{-1}(y) is desired, you first test whether y is a square (cp. §3.4.1.7). If it is not, then according to the definition of f_0:



y = −x² and −y = x². Then you extract the root of y resp. −y with the method from §3.4.1.8. The resulting root w is a square itself (cp. §3.4.1.6), so it has the Jacobi symbol +1. Depending on whether it is < or ≥ n/2, you set x := +w resp. −w.

For f_1^{-1}(y), in addition to the root extraction mod n, you have to divide by 2. Because 2 has the Jacobi symbol −1, but f_1^{-1}(y) is within the domain of definition of the permutations and therefore must have the Jacobi symbol +1, and due to the multiplicativity of the Jacobi symbol, the value from extracting the root must have the Jacobi symbol −1. For this purpose, after the root extraction mod p and mod q, the results are put (with different signs) into the CRA.

In general: If you define collision-resistant pairs of permutations precisely, you need not only one pair (f_0, f_1) but a whole family in dependency of the key, so that everybody who needs a pair can choose one with a secret only known to him. That requires a key generation algorithm gen for choosing a secret and a public key.

In the concrete construction we had this automatically: gen is the algorithm for choosing p and q (which represent the secret) and calculating the public key n. For every n there is a different pair, so to be exact you would have to write f_0^{(n)}, f_1^{(n)}.

Collision resistance is defined by the non-existence of a polynomial algorithm that can find collisions with a significant probability (analogous to the factorization assumption).

Furthermore you need a method to find a random element from the domain of definition efficiently with only the public key.

Choice of a random element from D_n: pick a z ∈ Z*_n randomly and square it; if z² mod n < n/2 you pick x := z², otherwise x := −z² mod n.

3.5.2 Mini-GMR for a message containing only one bit

If you want to sign a one-bit message, you can simply choose a collision-resistant pair of permutations: the secret signing key s is (p, q); the public verification key t consists of n and a reference R which the signer randomly picks from D_n.

The signature beneath the message m = 0 is f_0^{-1}(R), and beneath m = 1 it is f_1^{-1}(R) (fig. 3.22).

Figure 3.22: Mini-GMR for a one-bit message. s(0) and s(1) are the possible signatures

Proof (security draft): A real active attack does not exist here because only one message is signed. But maybe the faker received the signature beneath m = 0 and wants to fake the one beneath m = 1, or vice versa. Looking at figure 3.22 one could guess that this means the attacker finds a collision.

But it is not that easy: he got help for one part of the collision from the signer, who knows the secret and can calculate collisions anyway. We must show that the faker could obtain such a collision even without that help.

W.l.o.g. we only consider the case m = 0. Assume there is a good faking algorithm F with input (n, R, s(0)). Then we show that a collision-finding algorithm K must exist:

K: Input n.
   Pick a "signature" S_0 ∈ D_n randomly. (Here it is needed that this can be done with the public key alone. Because S_0 is not picked with the signing algorithm, it is not denoted s(0); ditto for S_1 and s(1).)
   Calculate R := f_0(S_0).
   Apply F to the input (n, R, S_0). If F is successful, name the result S_1.
   Output: (S_0, S_1).

It is clear that this K always finds a collision whenever the call of F is successful. It only remains to be shown that F is just as often successful as when it is called with a "real" reference and signature: real references are random, and R is random too (because S_0 is random and f_0 is a permutation, every R occurs with exactly one S_0), and for this R, S_0 is the (only) correct signature.

3.5.3 Basic signatures: big tuples of collision-resistant permutations

As second step we will enhance GMR for signing a message (but still only this single one) of any length desired.

The idea is the same as above for one bit: you would like to have a function f_m (like f_0 and f_1 for the messages m = 0 resp. m = 1) for every of these messages. As public key you again choose a reference R⁴⁵ additional to n. The signature beneath the message m shall be

   s(m) := f_m^{-1}(R)

(figure 3.23), whereas you can calculate f_m^{-1} only with the secret.

Of course you cannot specify all of the many functions f_m explicitly. Therefore they are constructed from one pair of permutations:

Basic construction of the functions f_m out of a pair (f_0, f_1): If you have picked a collision-resistant pair of permutations (f_0, f_1), you can construct a function f_m for the message m by a bit-by-bit composition of f_0 and f_1 in dependency of m:

⁴⁵For this simplified case, that the reference is known to the attacker before a message to sign is chosen, the security of GMR is not proven. And we do not see how that could be achieved.


Figure 3.23: Principle of GMR's basic signature

Example: m = 100101, f_100101(x) := f_1(f_0(f_0(f_1(f_0(f_1(x))))))

With the knowledge of the secret you can calculate f_0^{-1}, f_1^{-1} and thus f_m^{-1}. The evaluation direction of m flips:

Example: m = 100101, f_100101^{-1}(x) := f_1^{-1}(f_0^{-1}(f_1^{-1}(f_0^{-1}(f_0^{-1}(f_1^{-1}(x))))))

The advantage of this is that f_m exists for messages m of any length desired. There is no need to split m into blocks, nor to use hash functions.
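A sketch of the bit-by-bit composition in Python, reusing the permutation sketch f(b, x, n) from §3.5.1 above (names again illustrative):

    def f_m(m_bits, x, n):
        """Apply f_m by composing f_0 and f_1 bit by bit.

        The innermost application corresponds to the last bit of m; for
        m = 100101 this computes f_1(f_0(f_0(f_1(f_0(f_1(x)))))).
        """
        for b in reversed(m_bits):       # innermost function first
            x = f(b, x, n)
        return x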

However, the composition of f_0 and f_1 contains a hazard: a "back-counting" receiver, who tests the signature with f_m, obtains with every application of f_0 or f_1 a valid signature for the message shortened by its last bit.

Example: f_1(Sig_100101) = f_1(f_100101^{-1}(R)) = f_10010^{-1}(R) = Sig_10010

That can be prevented by fixing the form of "valid messages" through a prefix-free mapping pref(·): no prefix (i.e. beginning part) of a message mapped that way may at the same time be the prefix-free mapping of another message. The original message obtains a "validity tag" that is voided if the last bit is lost. Now you sign pref(m) with f_pref(m)^{-1}(R).

You can derive from the collision resistance of (f_0, f_1) that the family of the f_pref(m) is collision resistant in the same manner. Therefore it is hard to find two messages m, m* and "signatures" S, S* with f_pref(m)(S) = f_pref(m*)(S*).

Prefix-free mapping: You can create an efficient prefix-free mapping by prepending a length field in front of the message m, that is, a constantly long field (like a computer word) that contains the length of the message.⁴⁶

In case the length of the messages is fixed, an additional prefix-free mapping becomes unnecessary.
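A minimal sketch of such a prefix-free mapping, assuming an (arbitrarily chosen) 32-bit length field:

    def pref(m_bits):
        """Prefix-free mapping: prepend a fixed-width length field.

        Since the first 32 bits always state the total message length, the
        image of one message can never be a prefix of the image of another.
        """
        length_field = [(len(m_bits) >> i) & 1 for i in range(31, -1, -1)]
        return length_field + list(m_bits)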

3.5.4 Complete GMR: authentication of many references

The simplest idea to sign many messages would be to announce a certain number of references in the public key and to "use them up" signature by signature. This procedure

⁴⁶The advantage of GMR, to sign messages of any length desired, will not be lost if you e.g. enable the insertion of further length fields with an "extension bit". Specifically, you can pick the sign bit of the length field for this.


would have the disadvantage that the public reference list would become very long.⁴⁷

Figure 3.24: Basic idea how to use GMR for signing multiple messages

What remains from this idea is that for every signature another randomly chosen reference is used, like R_0, R_1, … for the 1st, 2nd, … signature. (Thus an active attacker gets his signatures relative to different references than the one he wants to attack; that makes active attacks more difficult – one of the basic ideas of GMR.)

But you cannot simply announce the specific R_i with the message – otherwise the attacker could choose a signature Sig randomly for his message of choice and calculate with f_0 and f_1 the appropriate reference R := f_pref(m)(Sig).

So it would be a good idea to authenticate the references with signatures too, by including a single reference r_ε in the public key. Then you sign the list of references R_0, R_1, … relative to r_ε, and use the appropriate R_i for signing the actual message. The disadvantage is that every signature would have to come along with the list of all references.

That is why you use a tree-like authentication of the references, where for every authenticated reference you sign two new ones. Beforehand you fix the number 2^b of references you want to obtain that way.

Only one reference r_ε is published. The 2^b references R_0, R_1, … provided for message signatures (now called N-signatures) are attached to the leaves of a binary tree of depth b; every node of the tree again contains a randomly picked reference r_j. The tree is called reference tree (fig. 3.25).

For every signature a signature head is calculated that authenticates the current reference R_i relative to the public r_ε:

Starting with the public reference r_ε (the root of the reference tree), every auxiliary reference r_j on the path from r_ε to R_i is used to sign both of its successor nodes r_{j0} and r_{j1} (example in fig. 3.26): for this, r_{j0} and r_{j1} are concatenated (with separators), interpreted as a message and mapped prefix-free. Then the signature KSIG := f_pref(r_{j0}|r_{j1})^{-1}(r_j) is calculated. These node signatures are called K-signatures. Finally the attached reference R_i is authenticated with RSIG := f_pref(R_i)^{-1}(r_j); this reference signature is called R-signature.⁴⁸

⁴⁷For this simplified case, that the reference is known to the attacker before a message to sign is picked, the security of GMR is not proven. And we do not see how that could be achieved.

⁴⁸For K- and R-signatures you must use different secrets than for N-signatures; this is needed in the security proof and is shown in figure 3.28. For readability this distinction is not always spelled out: for example we simply write p and q, even though for N-signatures they are different from those for K- and R-signatures. Which one is meant can be seen from the context. Also, the necessity of the R-signature only becomes clear in the proof.


Figure 3.25: Tree of references

Because all references r_j at the nodes of the reference tree that are authenticated in the signature head have almost the same length, you can assume a fixed length if you enlarge the shorter references by prepending zeros. It is natural to pick as this length the security parameter l that defines the length of the modulus n (in bits). Therefore an additional prefix-free mapping becomes unnecessary here.

During testing, not only the N-signature needs to be verified but also the b K-signatures at all nodes r_j of the path through the reference tree and the R-signature of the reference R_i. As complexity estimations have shown, the additional effort becomes minimal for longer messages.

Efficiency improvements: First of all notice that the signer does not have to generate the whole reference tree in advance, but only the reference R_i that is actually needed, the auxiliary references above it, and their siblings (because they are authenticated together). What needs to be saved is shown in the example for R_2 (fig. 3.26).

For every signature the signer can reuse (adding to its own signature head) the previous beginning part of the path in the reference tree, with all K-signatures already calculated for the former signature. Therefore it is recommended to keep "memorized" not only the necessary auxiliary references but the whole signature head of the former signature.

Because calculating the inverse functions f_0^{-1} and f_1^{-1} is more expensive than calculating with f_0 and f_1, you should avoid using f_0^{-1} and f_1^{-1} wherever possible. It becomes



Figure 3.26: Signature for m relative to reference R2

clear that indeed for every signature only one application of an inverse function (f_0^{-1} or f_1^{-1}) per signature component is necessary:

For every new signature the signature head can be calculated "bottom-up" with f_0 and f_1: the signature NSIG for message m is picked randomly and the reference is calculated as R_i := f_pref(m)(NSIG). (Because f_0 and f_1 are permutations, the randomness of NSIG guarantees the randomness of R_i.) Subsequently the R-signature RSIG is picked randomly, r_i is calculated, and so on, until a "joint node" is reached where the path taken over from the former signature joins the currently newly calculated one. There you need f_0^{-1} and f_1^{-1} once (fig. 3.27).

To give an overview of GMR, it is shown at the end of this introduction, in figure 3.28, in the usual representation of cryptographic systems of §3.1.1. Depending on the implementation it may be useful to make the tree depth b a part of the public key.

3.5.4.1 Further efficiency improvements

Especially the expensive calculation can be sped up essentially by using the trick of Goldreich, which permits, instead of a bit-by-bit application of f_0^{-1} and f_1^{-1}, an efficient calculation of f_w^{-1}(R) for any bit sequence w within one step. Furthermore there are some more places where the signer can make use of the secret (p, q). For details see [FoPf 91].

some more places where the signer can make use of the secret (p, q). For details see[FoPf 91].


Figure 3.27: "Top-down" and "bottom-up" calculations

To get a clue about the achievable efficiency improvement: in a software implementation

• on a Mac IIfx with the MC68030 CPU at 40 MHz

• for a message of 8192 bit

• for a tree depth of 10 and modulus length 512

signing takes about 3.2 s and testing about 16.6 s [FoPf 91]. Testing takes significantly longer because one cannot calculate modulo the primes but only modulo their product (cp. §3.6.5.2), and the whole tree needs to be tested. For signing, the N-signature and the R-signature must be calculated, but on average only one K-signature.

Of course hardware implementations would be considerably faster.


Figure 3.28: The digital signature system GMR

3.6 RSA: The most known system for asymmetric concelation and digital signatures

RSA, named after the initial letters of its inventors Ronald L. Rivest, Adi Shamir and Leonard M. Adleman, is the best-known asymmetric cryptographic system. It was published in 1978 [RSA 78] and can be used either as an asymmetric concelation system or as a digital signature system.

Like the s²-mod-n generator (§3.4) and the collision-resistant permutations described for GMR, the security of RSA rests on the factorization assumption. But unlike for these systems, there is no proof that breaking RSA would imply an algorithm for factorization. That is why RSA can only be put into the class "well examined" in figure 3.12.

3.6.1 The asymmetric cryptographic system RSA

The core idea of RSA consists of exponentiating messages (in a residue class ring), of calculating exponents that are inverse to each other, and of distributing the knowledge about the exponents properly.


Key generation with security parameter l:

1. Pick two primes p and q randomly and stochastically independently, so that |p| ≈ |q| = l and p ≠ q.⁴⁹

2. Calculate n := p · q.

3. Pick c with 3 ≤ c < φ(n) and gcd(c, φ(n)) = 1.⁵⁰

4. Calculate d from p, q, c as the multiplicative inverse of c modulo φ(n).⁵¹ Then holds:

   c · d ≡ 1 mod φ(n)

Publish c and n as public key and keep d, p, q secret as private key.

Encryption resp. decryption is done by exponentiation with c resp. d in Z_n.

What needs to be proven is that encryption and decryption are inverse to each other for all messages m:

⁴⁹|p| means the length of p in bits. The condition p ≠ q could be left out because it would only harm in an exponentially small fraction of all cases. Because for p ≠ q, φ(n) can be represented quite easily and the decryption works not almost always but always, p ≠ q is demanded here. (Whoever does not believe that the decryption would otherwise not always work should calculate the following example: p = q = 5; φ(p²) = p(p−1), so φ(25) = 20. c = 7 results in d = 3 and d · c = 21 ≡ 1 mod 20. But (15³)⁷ is ≡ 0 mod 25 instead of ≡ 15.) While p and q should be picked as almost equally long (even the same length would be no problem), p and q – as mentioned – should not be equal. They should not even be almost equal, because then n would be easy to factorize. The factorization would be done like this: extract the root of n in R and look below it for a factor of n. If p and q are approximately equal, this can be done efficiently. Even if p and q are exactly equally long (but chosen stochastically independently), only an exponentially small fraction would be "approximately equal", so this does not bother. To make breaking RSA more difficult, the primes p and q should not only have the properties of §3.4.1 which make factorization more difficult. Additionally, for at least one of the big prime factors of p−1 and of q−1 should hold: reduced by one, it has a big prime factor of its own. That parries a special attack on RSA [DaPr 89]. Even with these additional properties there are enough primes p and q left, and they can be found efficiently [BrDL 91].

⁵⁰Often c is picked randomly from the set of numbers relatively prime to φ(n); in other words: pick c from Z*_φ(n), c ≠ 1.

⁵¹This is done with the extended algorithm of Euclid described in §3.4.1.1.


Claim: For all m ∈ Z_n holds: (m^c)^d ≡ m^{c·d} ≡ (m^d)^c ≡ m mod n.

Proof: The first two congruences are valid by the rules of calculating with congruences. The validity of the third is to be shown.

   c · d ≡ 1 mod φ(n)  ⇒  c · d ≡ 1 mod p−1  ⇔  ∃k ∈ Z : c · d = k · (p−1) + 1

For this k and all m ∈ Z_n holds: m^{c·d} ≡ m^{k·(p−1)+1} ≡ m · (m^{p−1})^k mod p. With Fermat's little theorem (§3.4.1.2), for all m relatively prime to p holds:

   m^{p−1} ≡ 1 mod p

Consequently, for all m relatively prime to p: m · (m^{p−1})^k ≡ m · 1^k ≡ m mod p. This also holds for the trivial case m ≡ 0 mod p:

   ∀m ∈ Z_n : m^{c·d} ≡ m mod p

The corresponding argumentation for q gives:

   m^{c·d} ≡ m mod q

Because the congruence holds for p and q, it also holds (due to §3.4.1.3) for p · q = n. Finally:

   For all m ∈ Z_n holds: m^{c·d} ≡ m mod n. □

Every element of Z_n can be a message, which after encryption can be decrypted unambiguously; this makes modular exponentiation with c injective. Because all cipher texts lie in Z_n, image and preimage sets are the same and therefore equally powerful, so modular exponentiation with c is also surjective – it is a permutation. Hence one can extract the c-th root of every element of Z_n.

Unfortunately, no proof of the security of RSA is known (yet?). It would have to look like this: if RSA is easy to break, then factorizing is easy. For the strongest form of breaking (and thus the weakest form of security proof): whoever can extract c-th roots modulo n can factorize n. So far nobody has been able to show this.

3.6.2 Naive and insecure usage of RSA

Here the naive and insecure (we will see this in §3.6.3) usage of RSA as asymmetric concelation system and as digital signature system is shown. In both cases plain text is represented by numbers m with 0 ≤ m < n. For this purpose plain texts have to be divided into several blocks (each with a maximal length of |n| − 1 bit; blocking).


3.6.2.1 RSA as asymmetric concelation system

The identifiers from §3.6.1 are already picked suitably:

Encryption is done by modular exponentiation with c. For the plain text block m: m^c mod n.

Decryption is done by modular exponentiation with d. For the cipher text block m^c: (m^c)^d mod n = m.

Figure 3.29 shows the complete system.

Figure 3.29: Naive and insecure usage of RSA as an asymmetric concelation system

Example:

Key generation: Let p = 3, q = 17, so n = 51 and φ(n) = (p−1)·(q−1) = 32. Let c = 5; verify that gcd(5, 32) = 1 (Euclid's algorithm) – this is true. Desired is d = c^{-1} mod φ(n), i.e. 5^{-1} mod 32; with Euclid's extended algorithm, d = 13.

Encryption: Let the plain text be m = 19. The corresponding cipher text is S = 19^5 ≡ 19²·19²·19 ≡ 361·361·19 ≡ 4·4·19 ≡ 4·76 ≡ 4·25 ≡ 49 mod 51.

Decryption: S^d = 49^13 ≡ (−2)^13 ≡ 1024·(−8) ≡ 4·(−8) ≡ −32 ≡ 19 mod 51, which is indeed the plain text.
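The toy example can be checked with a few lines of Python (naive RSA, insecure as §3.6.3 will show; pow(c, -1, phi) computes the modular inverse via the extended Euclid and needs Python 3.8 or newer):

    p, q = 3, 17
    n = p * q                    # 51
    phi = (p - 1) * (q - 1)      # 32
    c = 5                        # public exponent, gcd(5, 32) = 1
    d = pow(c, -1, phi)          # private exponent, d = 13

    m = 19
    S = pow(m, c, n)             # encryption: 19^5 mod 51
    assert S == 49
    assert pow(S, d, n) == m     # decryption recovers the plain text 19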


3.6.2.2 RSA as digital signature system

The identifiers from §3.6.1 are renamed as follows: c → t, d → s. Then holds:

Signing is done by modular exponentiation with s. For the plain text block m: m^s mod n.

Testing is done by modular exponentiation of the signature with t and a subsequent comparison of the result with the corresponding text block. For the plain text block m with the signature m^s: (m^s)^t mod n = m?

Figure 3.30 shows the complete system.

Figure 3.30: Naive and insecure usage of RSA as a digital signature system

3.6.3 Attacks, especially the multiplicative attacks by Davida and Moore

Unfortunately, the usage of RSA shown in §3.6.2 is easy to understand but mostly insecure. Because RSA has a multiplicative structure (explained shortly), the concelation


system as well as the digital signature system can be broken without knowledge of the secret key.

The multiplicative attack on RSA as a digital signature system can be explained more systematically, so we start with it.

3.6.3.1 RSA as digital signature system

Everybody can fake digital signatures as described in §3.6.2.2 by calculating backwards: he picks a signature, exponentiates it modularly with t and receives the corresponding text block. This passive attack breaks RSA existentially, but not selectively, because the attacker cannot specify the message he would like to have a signature for (cp. §3.1.3.1 and §3.1.3.2). The minimum that must be demanded in addition to §3.6.2.2 is therefore that "useful" text blocks occur only rarely under this attack.

To develop active attacks which allow a selective break of the signature system of §3.6.2.2 systematically, a passive attack is described first. It does not fake signatures any better than the one just shown, but it exhibits the multiplicative structure of RSA, which is required for the active attacks that follow.

If an attacker knows two signatures m_1^s and m_2^s for text blocks m_1 and m_2, he can gain a third signature m_3^s and the corresponding text block m_3 by modular multiplication:

   m_3^s := m_1^s · m_2^s mod n,
   m_3 := m_1 · m_2 mod n

m_3^s is indeed the signature for m_3, because m_1^s · m_2^s = (m_1 · m_2)^s. In other words: RSA has a multiplicative structure, or more specifically: RSA is a homomorphism with respect to multiplication. As mentioned above, you can only achieve an existential break with this passive attack.

If an active attack is possible, a selective break can be performed with the help of the multiplicative structure (attack by Davida [Merr 83][Denn 84][Bras 88]):

The attacker picks a text block m_3 of his own choice.
He picks an m_1 mod n for which a multiplicative inverse m_1^{-1} mod n exists.
He calculates m_2 := m_3 · m_1^{-1} mod n.
He gets m_1 and m_2 signed.
He receives m_1^s and m_2^s.
He calculates m_3^s := m_1^s · m_2^s mod n.

This (non-adaptive) active attack requires two signatures from the victim to fake one selectively. The following attack, improved by Judy Moore [Denn 84], requires only one signature to fake one selectively.

The attacker picks a text block m_3 of his own choice.
He picks a random number r from Z_n (1 ≤ r < n, and r has a multiplicative inverse r^{-1} mod n).
He calculates m_2 := m_3 · r^t mod n.


He gets m_2 signed.⁵²
He receives m_2^s.
He knows: m_2^s ≡ (m_3 · r^t)^s ≡ m_3^s · r^{t·s} ≡ m_3^s · r mod n.
He calculates m_3^s := m_2^s · r^{-1} mod n.

The idea behind this (non-adaptive) active attack is to multiply the message m_3, for which he would like to have a signature, by a "processed" random number r^t. The attacked one signs it blindly (i.e. without knowledge of what lies behind it), and the attacker just has to divide by r. An application of this attack by Judy Moore was published in 1985 by David Chaum [Cha8 85]; in §3.9.5 and §6.4.1 it is explained thoroughly.
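A sketch of Moore's blinding attack in Python; sign_oracle is a hypothetical stand-in for the victim, who signs the one message presented to him:

    import secrets
    from math import gcd

    def moore_attack(m3, t, n, sign_oracle):
        """Obtain the signature m3^s from one blindly given signature.

        t, n: public verification key; sign_oracle(m) returns m^s mod n.
        """
        while True:                          # pick r invertible mod n
            r = secrets.randbelow(n - 1) + 1
            if gcd(r, n) == 1:
                break
        m2 = m3 * pow(r, t, n) % n           # blind: m2 = m3 * r^t
        s2 = sign_oracle(m2)                 # victim returns m3^s * r
        return s2 * pow(r, -1, n) % n        # unblind: divide by r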

3.6.3.2 RSA as asymmetric concelation system

If RSA is applied as in §3.6.2.1, only a deterministic asymmetric concelation system is realised. An attacker can guess plain text blocks that are likely to occur and encrypt them with the public key deterministically. Then he compares the cipher texts sent to the key owner with the self-generated ones; if they are equal he has "decrypted" them (cp. annotation 2 in figure 3.4).

Because the multiplicative attacks described in §3.6.3.1 only use properties of RSA which are present in the naive usage as signature system as well as concelation system, they can be transferred to the usage as concelation system. This is shown only for the improved attack by Judy Moore.

Non-adaptive active attack for a selective break of RSA naively used as an asymmetric concelation system:

The attacker picks a cipher text block S_3 of his own choice (e.g. one he observed).
He picks a random number r from Z_n (1 ≤ r < n, and r has a multiplicative inverse r^{-1} mod n).
He calculates S_2 := S_3 · r^c mod n.
He gets S_2 decrypted.⁵³
He receives S_2^d.
He knows: S_2^d ≡ (S_3 · r^c)^d ≡ S_3^d · r^{c·d} ≡ S_3^d · r mod n.
He calculates S_3^d := S_2^d · r^{-1} mod n.

The idea behind this (non-adaptive) active attack is to multiply the cipher text S_3, which he would like to have decrypted, by a "processed" random number r^c. The attacked one

⁵²If m_3 ∈ Z*_n, the attacked one does not gain any knowledge about m_3, because r^t is distributed uniformly in Z*_n. For the remaining m_3 the attacker does not need the help of the attacked one anyway: for m_3 = 0 and all n, m_3^s = 0; in all other cases gcd(m_3, n) gives a non-trivial factor of n (concretely p or q), which would mean the attacker had broken RSA completely (cp. §3.1.3.1).

⁵³Here too the attacked one does not gain any knowledge about S_3, because r^c is distributed uniformly in Z*_n. For the remaining S_3 the attacker does not need the help of the attacked one anyway: for S_3 = 0 and all n, S_3^d = 0; in all other cases gcd(S_3, n) gives a non-trivial factor of n, concretely p or q, which would mean the attacker had broken RSA completely (cp. §3.1.3.1).


decrypts it blindly (i.e. without knowledge of what lies behind it), and the attacker just has to divide the received plain text by r.

3.6.3.3 Thwarting the attacks

In this section it is explained how all known attacks on RSA as concelation system as well as digital signature system can be thwarted. However, "known attacks" is pointed out deliberately, because even for them we do not have any proofs. After all, not even the niftiest usage of RSA can be better than RSA on its own (cp. the annotation about the security of RSA at the end of §3.6.1).

The usage of RSA for concelation as well as for authentication makes the application of a collision-resistant hash function helpful. We already learned something about these in §3.5.3, but there we considered them as a permutation family. As a reminder: bit sequences m of any length desired were assigned collision-resistant permutations f_m. If you consider these sequences as input of a function which applies the permutation to a fixed value x and outputs the permuted value y, then you have constructed a collision-resistant hash function. More precisely:

   f_m(x) = y

is a permutation for m fixed (input x, output y) and a hash function for x fixed (input m, output y).

A collision of the hash function,

   f_m(x) = f_{m*}(x) with m ≠ m*,

is also a collision of the permutation family,

   f_m(x) = f_{m*}(z) with m ≠ m*,

because x = z is allowed. The converse does not hold, of course, because x = z is not true in general. The constructed hash function is therefore at least as collision resistant as the underlying permutation family. For f_m(x) with x fixed and input m we will shortly write h(m) and call h a collision-resistant hash function (constructed from a collision-resistant permutation family).

More efficient, but not provably more secure, collision-resistant hash functions can be constructed out of block ciphers (cp. §3.8.3).

3.6.3.4 RSA as asymmetric concelation system

RSA becomes an indeterministically encrypting concelation system by assigning a random number z (which has nothing to do with the random number for the key generation; therefore it is marked as random number' in figure 3.31) to every plain text block m.

Active attacks are prevented by adding redundancy to every plain text block before encryption, which is checked by the decrypter. This addition of redundancy should guarantee


that the modular multiplication of two plain text blocks with redundancy will not result in a third plain text block with appropriate redundancy.

The redundancy can be created by applying a hash function to the plain text block and appending the output to the text block. Of course the hash function should have (if at all) a multiplicative structure different from RSA's, so as to neutralize it. This can be achieved by choosing a modulus for the permutations in §3.5.3 which is independent of the RSA modulus.

Figure 3.31 shows the complete system combining indeterministic encryption and redundancy by means of a collision-resistant hash function.

Encryption of a plain text block is done by prepending a random number, applying the collision-resistant hash function h to the random number and the plain text block, appending the result, and finally modular exponentiation with c of all three components. For the plain text block m and the random number z: (z, m, h(z, m))^c mod n.

Decryption is done by modular exponentiation with d, which yields three components. It is checked, by applying the hash function h to the first two components, whether the result equals the third one; only if so, the second component is output. For the cipher text block (z, m, y)^c mod n: ((z, m, y)^c)^d mod n =: z, m, y; h(z, m) = y? Output, if h(z, m) = y, is m.

More sophisticated, and in a certain sense provably secure, constructions for the combination of redundancy and indeterministic encryption are described in [BeRo 95]. These constructions apply not only to RSA but to every asymmetric concelation system which is a permutation (i.e. block ciphers of fixed length; cp. §3.8.2 and §3.8.2.7).

3.6.3.5 RSA as digital signature system

To prevent backward calculation and to neutralize the multiplicative structure of RSA, a collision-resistant hash function is applied to the text block before the modular exponentiation. If the hash function accepts arguments of any length desired, it has the additional advantage that texts of any length can be signed in one piece. The blocking problem for digital signature systems is therefore solved.

If the hash function can be computed faster than RSA, its usage with longer messages gives the complete system a speed advantage relative to the application of RSA described in §3.6.2.2.

Signing is done by applying the hash function and modular exponentiation of the hash value with s. For the text block m: (h(m))^s mod n.

Testing is done by modular exponentiation of the signature with t and a subsequent comparison of the result with the hash value of the corresponding text block.


Figure 3.31: Secure usage of RSA as an asymmetric concelation system: redundancy check by a collision resistant hash function h and indeterministic encryption

For a text block m and a signature (h(m))^s: ((h(m))^s)^t mod n =: y; h(m) = y?

Figure 3.32 shows the complete system.

3.6.4 Efficient implementations of RSA

By a skillful implementation the complexity of RSA can be reduced significantly. First this is shown for the user of the public key, then for the owner of the private key.

3.6.4.1 Public exponent with almost only zeros

Except for the condition that the public exponent must be relatively prime to (p−1)(q−1), it can be chosen freely (cp. §3.6.1). It is natural to choose it (apart from leading zeros) as short as possible, to reduce the number of multiplications and squarings needed. If it contains zeros after the leading one, you save one modular multiplication per zero digit, cp. "square and multiply" in §3.4.1.1. p and q have to be picked appropriately, so that (p−1) and (q−1) are relatively prime to the public exponent.

Because (p−1)(q−1) is even, the smallest possible choice of the public exponent is 3.


Figure 3.32: Secure usage of RSA as a digital signature system with a collision resistant hash function h

For RSA as a digital signature system there is no known argument against this choice. For RSA as a concelation system, however, there is a passive attack by Johan Hastad that breaks RSA message-oriented [Hast 86][Hast 88]:

A participant sends a message m to three others whose public exponent is 3 and whose moduli are n_1, n_2 and n_3. (The moduli are relatively prime; otherwise the attacker could factorize at least one of them and would obtain m anyway.)
The attacker observes: m³ mod n_1, m³ mod n_2, m³ mod n_3.
With the CRA he can determine m³ mod n_1·n_2·n_3.
Because m³ < n_1·n_2·n_3, it holds that m³ mod n_1·n_2·n_3 = m³. The attacker can extract the root of m³ in Z and receives m.
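A sketch of this broadcast attack in Python; crt and icbrt (an integer cube root by binary search) are illustrative helpers, not library functions, and the moduli in the check are toy values:

    def crt(residues, moduli):
        """CRA: the x modulo the product of the moduli matching all residues."""
        x, m = 0, 1
        for r, mi in zip(residues, moduli):
            x += m * ((r - x) * pow(m, -1, mi) % mi)
            m *= mi
        return x % m

    def icbrt(v):
        """Integer cube root by binary search."""
        lo, hi = 0, 1 << (v.bit_length() // 3 + 2)
        while lo < hi:
            mid = (lo + hi) // 2
            if mid ** 3 < v:
                lo = mid + 1
            else:
                hi = mid
        return lo

    def hastad(ciphertexts, moduli):
        """Recover m from m^3 mod n_i for three pairwise coprime moduli."""
        return icbrt(crt(ciphertexts, moduli))   # m^3 < n1*n2*n3

    # toy check: m = 10 sent to moduli 85, 253, 1189 with exponent 3
    assert hastad([1000 % 85, 1000 % 253, 1000 % 1189], [85, 253, 1189]) == 10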

The attack can be generalized in two ways:

Because there is no search through the plain text space and it is a passive attack, the secure usage of RSA described in §3.6.3.4 (indeterministic encryption as well as redundancy creation and verification) does not thwart the attack if the same random number was used for all three messages.

If the public exponent is c, then c encryptions of the same message suffice for the attack, though the attacker has to calculate with numbers c times as long.

As a good compromise between thwarting the attack and the required computing power, 2¹⁶ + 1 is proposed in many places as public exponent [DaPr 89].


There is nothing against all participants picking this as their public exponent. In consequence the public exponents need not be published; the public RSA key then consists only of the modulus.

3.6.4.2 The owner of the secret key calculates modulo the factors

In situations where the efficiency for the owner of the private key is emphasised, one is tempted to pick a shorter private exponent instead of a "shorter" public one (cp. §3.6.4.1). This seems possible because the public and the private exponent behave symmetrically in the formulas of §3.6.1. But even if you do not make the blunt mistake of choosing a secret exponent like 3, 2¹⁶ + 1 or any other small number that can be found by trial and error (simple search), the system may be insecure if the private exponent is smaller than a fourth of the modulus length [Wien 90]. So be careful with this kind of efficiency improvement!

Without the slightest change to the choice of the key – and therefore without any change to the security – the owner of the secret key can save up to three quarters of his calculations: instead of modulo n, he calculates modulo both factors p and q, and afterwards obtains the value wanted modulo n from both results with the CRA:

To calculate the c-th root of a cipher text block S efficiently – that is, S^d ≡ w mod n – the owner of the secret key determines once and for all:

   d_p := c^{-1} mod p−1, so that (S^{d_p})^c ≡ S mod p, and
   d_q := c^{-1} mod q−1, so that (S^{d_q})^c ≡ S mod q.

To extract the c-th root of a concrete S, he calculates

   S^{d_p} mod p, S^{d_q} mod q and then
   w := CRA(S^{d_p} mod p, S^{d_q} mod q).

Then w^c ≡ (S^{d_p})^c ≡ S mod p as well as w^c ≡ (S^{d_q})^c ≡ S mod q. Because p and q are relatively prime, it follows that w^c ≡ S mod n.
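A sketch of this speed-up in Python, checked against the toy key of §3.6.2.1 (the name decrypt_crt is illustrative):

    def decrypt_crt(S, p, q, c):
        """Extract the c-th root of S mod p*q using the factors."""
        dp = pow(c, -1, p - 1)           # d_p := c^-1 mod p-1
        dq = pow(c, -1, q - 1)           # d_q := c^-1 mod q-1
        wp = pow(S, dp, p)               # S^{d_p} mod p
        wq = pow(S, dq, q)               # S^{d_q} mod q
        # CRA: recombine the two half-length results modulo n
        return (wp + p * ((wq - wp) * pow(p, -1, q) % q)) % (p * q)

    assert decrypt_crt(49, 3, 17, 5) == 19   # the example from §3.6.2.1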

How much faster is this actually? In the range of the security parameter (the length of p and q) we are interested in, the following holds: the complexity of modular exponentiation grows cubically, that of modular multiplication quadratically, and that of modular addition linearly in the length of the modulus. Because |n| = 2·l, the complexity of an exponentiation modulo n is proportional to (2·l)³ = 8·l³. The complexity of two exponentiations of half the length (of p and q) is proportional to 2·l³. The complexity of the CRA rests on two multiplications and one addition modulo n and is therefore proportional to 2·(2·l)² + 2·l = 8·l² + 2·l.

Because the proportionality constants⁵⁴ are approximately equal, the complexity of the CRA, even for the smallest relevant value l = 300, is notably smaller than 2 % of the complexity of the two exponentiations of half length. So calculating modulo p and q reduces the complexity by the factor

   8·l³ / (2·l³) = 4.

⁵⁴For the cubic complexity of the exponentiation, the quadratic of the multiplication and the linear of the addition.


3.6.4.3 Encryption performance

In [Sedl 88] a hardware implementation of RSA according to §3.6.2 is specified with a speed of 200 kbit/s for decryption with a modulus length of 660 bit, which seems sufficient for today's security of RSA. The speed is achieved through the method shown in §3.6.5.2. If the encryption is done with the exponent 2¹⁶ + 1, the speed goes up to 3 Mbit/s.

A corresponding software implementation on an Apple Macintosh IIfx (MC68030, 40 MHz, 32 Kbyte onboard cache, 80 ns RAM) achieved a decryption speed of 2.5 kbit/s with a modulus length of 512 bit.

3.6.5 Security and usage of RSA

No matter how you make use of RSA as an asymmetric cryptographic system, it cannot be more secure than factorizing the modulus is hard. And a proof that breaking RSA is as hard as factorizing does not exist yet (cp. §3.6.1).

So what are the useful fields of operation for RSA?

For the discussion it is helpful to go back to figure 3.12 in §3.1.3.5. There we can see that the important field 3 is filled with only one system which is not equivalent to a pure standard assumption. Efficient asymmetric concelation systems that are cryptographically strong relative to a pure standard assumption are therefore unknown as yet.

Though RSA is not proven to be cryptographically strong (as secure as factorizing is difficult), no successful attacks are known against its usage as an indeterministic asymmetric concelation system as in §3.6.3.4 – and this for years. If an asymmetric concelation system is required and you cannot rule out active attacks, you should use RSA (or CS). If you can rule out active attacks, then the s²-mod-n generator should be preferred: on the one hand it is provably cryptographically strong, so if you can break it you can factorize and therefore break RSA too; on the other hand it is proven for the s²-mod-n generator that a passive attacker does not gain any information about the plain text. Such a proof does not exist for RSA.

Regarding complexity, the s²-mod-n generator and RSA are comparable; that is not a good distinguishing criterion.

There are no reasonable security arguments against the employment of RSA instead of GMR as a digital signature system. However, if you can break GMR you can factorize and therefore break RSA too; so you should prefer GMR over RSA wherever possible.

Unfortunately, regarding complexity they behave inversely: if an equally efficient hash function is used for RSA (cp. §3.6.4.2) as for GMR (a more complex one would not result in better security), both have almost the same complexity; RSA is only slightly better because it does not have to create and verify a reference tree. But for RSA a far more efficient hash function can be used, because it never needs to be inverted. In GMR this is not necessarily needed for the N-signatures either, but it is inevitable for the


R- and K-signatures (cp. fig. 3.27). The complexity of creating and verifying the R- and K-signatures in GMR cannot be reduced in the same manner.

The application of a hash function that need not be invertible by anybody can boost the efficiency, but requires a further cryptographic assumption, namely that the hash function is collision resistant. The more unproven assumptions are needed, the bigger the probability that at least one is not fulfilled – which would make the security a mere illusion. Regarding digital signature systems you thus have to decide between security (GMR) and efficiency (RSA with a fast hash function).

RSA was patented for, and only for, the USA. The patent ran out on September 20th, 2000 [Schn 96].

3.7 DES: The most known system for symmetric concelation and authentication

DES is the first standardized (hence the name: Data Encryption Standard) and (therefore) the most widely used cryptographic system. DES is a symmetric cryptographic system that has only a 56-bit key and encrypts blocks of 64 bit length.

DES was first published in [DES 77]. It is reprinted in [MeMa 82].

3.7.1 Overview of DES

The DES algorithm is composed of a – cryptographically insignificant – initial permutation IP, which separates the input block into the two 32-bit blocks L_0 and R_0, 16 iteration rounds which perform the actual encryption, and a final permutation IP^{-1} which is the inverse of the initial permutation. Before the final permutation is applied, L_16 and R_16, the result of iteration round 16, are swapped.

The key is used to create 16 partial keys K_1 to K_16 – one for every iteration round.

3.7.2 An iteration round

Every iteration round i (cp. fig. 3.34) receives the blocks L_{i-1} and R_{i-1} as input and provides the blocks L_i and R_i as output. Thereby L_i is directly copied from R_{i-1}, and R_i is created by the addition (bit by bit modulo 2) of L_{i-1} and f(R_{i-1}, K_i):

   L_i := R_{i-1}   (1)
   R_i := L_{i-1} ⊕ f(R_{i-1}, K_i)   (2)

f denotes the encryption function.

3.7.3 The decryption principle of DES

Figure 3.34 showed how an iteration round of DES works. It is vital for the decryption of DES that this process can be inverted. As result of the iteration round


Figure 3.33: Sequence of steps in DES

you’ll get:Li := Ri−1 (1)Ri := Li−1 ⊕ f(Ri−1, Ki) (2)

Now you swap both the blocks Li and Ri. This is done (fig. 3.35) by placing theidentifiers of the values of the left side on the corresponding places on the right side.Although, according to the rule L=left and R=right, at all identifiers on the right sideL and R should have been swapped. Besides the index on the right side is countedbackwards instead of foreward. If you perform the same operation again:

Ri−1 := Li        (1)
and
Li−1 := Ri ⊕ f(Li, Ki) = Li−1 ⊕ f(Ri−1, Ki) ⊕ f(Li, Ki) = Li−1        (2)

(the two f-terms cancel because Ri−1 = Li). The necessary swapping of the blocks is done after the 16th iteration round, so it is done before the beginning of the decryption. It is kept throughout every round and is reversed with a further swapping after the last round. Thus you can encrypt and decrypt data with a single algorithm; you just have to invert the order of the iteration rounds, which can be achieved by inverting the order of the partial keys.


Figure 3.34: One iteration round of DES (Li := Ri−1; Ri := Li−1 ⊕ f(Ri−1, Ki))

Figure 3.35: Decryption principle of DES (encryption and decryption in iteration round i use the same structure with the partial key Ki; only the roles of the left and right blocks are interchanged)

As you can see, the decryption principle is independent of the block length and of the choice of the initial permutation IP (and thus of IP−1). Also the encryption function f can be chosen without restrictions.55

55 This means, with the identifiers introduced in the next section, that for the expansion permutation E, the substitution boxes Si (i = 1, .., 8) and the permutation P you may use self-defined tables (as long as you stick to the basic structure). The creation of the partial keys and the way of combining the partial keys with the block Ri−1 can also be varied widely. Additionally it should be mentioned that for decryption f does not need to be inverted.


Even though the choice of the encryption function f is irrelevant for invertibility (given knowledge of the key), it is essential for security (i.e. non-invertibility) against anybody who does not know the key.

3.7.4 The encryption function

The first parameter of the encryption function shown in figure 3.36, a 32-bit block, is expanded to 48 bit with the expansion permutation E (a permutation swaps single bits without changing their value or their number; in contrast, with the expansion permutation some input bits appear more than once at the output).

Figure 3.36: Encryption function f of DES (Ri−1, 32 bit, is inflated to 48 bit by the expansion permutation E; the partial key Ki, 48 bit, is incorporated by addition modulo 2; the substitution boxes S1 to S8 create non-linearity, all other operations being linear; the permutation P mixes the resulting 32 bit into f(Ri−1, Ki))

Next the second parameter, a 48 bit block (one of the partial keys), is added (bit by bit modulo 2).

The sum is split into eight 6 bit blocks, each selecting a 4 bit value from one of the substitution boxes Sj (j = 1, .., 8). (The substitution boxes do not work bit by bit; they assign values of bit blocks to values of bit blocks. This creates non-linearity, whereas permutations and adding modulo 2 cause an inverted output bit whenever an input bit is inverted. If DES were linear, a plain text cipher text attack could determine the value of the key bit by bit (cp. exercise 3-17f).)

The eight 4 bit values are combined into a 32 bit block, to which the permutation P is applied, yielding the function result. P scrambles the output of the substitution boxes so that in the next iteration round the output of every substitution box becomes a partial input of many other substitution boxes. Note that 8 small substitution boxes, each with 2^6 entries of 4 bits (i.e. 8 · 2^6 · 4 bit = 2^8 byte of memory), achieve what a single substitution box with 2^48 entries of 32 bit each would do; the latter would consume 2^48 · 32 bit = 2^50 byte.
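As a structural sketch only: the following Python function mirrors the four stages of f. The tables E, S_BOXES and P are parameters here and would have to be filled with values (the real ones are specified in [DES 77]); for simplicity each S-box is modeled as a 64-entry table indexed directly by the 6 bit value, whereas the real DES selects row and column from outer and inner bits. Bits are modeled as lists of 0/1.

def permute(bits, table):
    # 'table' gives for every output position the (1-based) input position;
    # an expansion permutation simply repeats some input positions
    return [bits[i - 1] for i in table]

def f(R, K, E, S_BOXES, P):
    expanded = permute(R, E)                       # inflate: 32 -> 48 bit
    mixed = [r ^ k for r, k in zip(expanded, K)]   # incorporate the key (mod 2)
    out = []
    for j in range(8):                             # create non-linearity
        chunk = mixed[6 * j : 6 * j + 6]           # one 6 bit block
        idx = int("".join(map(str, chunk)), 2)     # simplified S-box indexing
        val = S_BOXES[j][idx]                      # 4 bit value
        out += [(val >> 3) & 1, (val >> 2) & 1, (val >> 1) & 1, val & 1]
    return permute(out, P)                         # mix: permutation P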

3.7.5 Partial key generation

As shown in figure 3.37, 56 bits are picked from the 64 bit key through the permuted choice function PC-1 and split into two 28 bit blocks C0 and D0. You get Ci and Di (i = 1, .., 16) by rotating the bits leftward (cyclic left shift LSi) by 1 or 2 bits (depending on i). The partial keys Ki are generated by concatenating Ci and Di and picking 48 bits with PC-2.

Figure 3.37: Generation of partial keys with DES (PC-1 picks 56 of the 64 key bits and splits them into C0 and D0; the cyclic left shifts LS1, .., LS16 produce C1, D1 through C16, D16; PC-2 picks 48 bits from each pair Ci, Di to form the partial key Ki)

With DES the encryption rotates left by 1 bit in the 1st, 2nd, 9th and 16th iteration rounds and by 2 bits in the other twelve iteration rounds. Altogether this is a 28 bit rotation, which makes the final state equal to the initial state (C0 = C16 and D0 = D16). Consequently you can generate the partial keys for the decryption by the corresponding (i.e. reversed-order) right-rotations by the same amounts of bits.
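A minimal sketch of this key schedule, reusing the permute helper from the sketch above; the PC-1 and PC-2 tables are assumed given (they are specified in [DES 77]), and the default shift amounts sum to 28:

def partial_keys(key64, PC1, PC2,
                 shifts=(1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1)):
    cd = permute(key64, PC1)               # PC-1: pick 56 of the 64 key bits
    C, D = cd[:28], cd[28:]                # split into C0 and D0
    keys = []
    for s in shifts:                       # cyclic left shift LSi by 1 or 2 bits
        C, D = C[s:] + C[:s], D[s:] + D[:s]
        keys.append(permute(C + D, PC2))   # PC-2: pick 48 bits as Ki
    return keys                            # K1..K16; reversed order decrypts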

3.7.6 The complementary property of DES

Inside the encryption function and the partial key generation, DES has a single, simple regularity, the so-called complementary property:

If c = DES(k, x), then DES(k̄, x̄) = c̄ (where x̄ denotes the bitwise complement of x).

If plain text block and key are complemented bit by bit (every 0 replaced by 1 and vice versa), then the complementary cipher text arises, because

• permutations, especially PC-1, PC-2, IP and IP−1, preserve the complementarity,

• rotations preserve the complementarity, so that for complementary keys the partial keys are also complementary,

• every iteration round preserves the complementarity, because after the expansion permutation E the complementary partial key is added (modulo 2) to the complementary value. This cancels the complementarity before the substitution (because 0 ⊕ 0 = 1 ⊕ 1 and 1 ⊕ 0 = 0 ⊕ 1). But it is restored after the substitution by the addition modulo 2 of the complementary Li−1; and the copying of Ri−1 to Li preserves the complementarity anyway,

• swapping of the left and right side preserves the complementarity.

Figures 3.38 and 3.39 illustrate the preservation of the complementarity in every iteration round.

This regularity can be used to nearly halve the effort of searching the key space: if you have the corresponding cipher text blocks for two complementary plain text blocks (e.g. as result of a chosen plain text cipher text attack), then one of the plain text blocks is encrypted with key after key until either the corresponding cipher text block or the complement of the other cipher text block is found. In the first case the tried key itself is the key. In the second case the complement of the tried key is the key, because the complement of the plain text block encrypted with the complement of the key results in the complement of the cipher text.

In both cases it may happen (with a very small probability) that for one plain text cipher text pair there is more than one matching key. So the correctness must be checked in both cases on further plain text cipher text pairs. This holds, independently of the complementary property, for DES in general.
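A sketch of this halved search in Python; des is an assumed DES encryption routine, key_candidates covers one half of the key space, and x, c1 = DES(k, x), c2 = DES(k, x̄) are the known plain text cipher text pairs:

def key_search_with_complements(x, c1, c2, des, key_candidates):
    block_mask = (1 << 64) - 1            # bitwise complement of a 64 bit block
    key_mask = (1 << 56) - 1              # keys modeled as 56 bit integers
    for k in key_candidates:              # each trial tests two keys at once
        c = des(k, x)
        if c == c1:
            return k                      # k itself is (probably) the key
        if c == c2 ^ block_mask:          # then DES(~k, ~x) = ~DES(k, x) = c2
            return k ^ key_mask           # the complement of k is the key
    return None                           # hits must be verified on further pairs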

3.7.7 Generalization of DES: G-DES

The security of DES has long been criticized for two reasons:


Figure 3.38: Complementarity in every iteration round of DES (complementary Li−1, Ri−1 and partial key Ki yield complementary Li and Ri, while the value inside f stays the original one)

• The key with its 56 bit is too short for applications that require security against massive attacks (e.g. by intelligence services):

DES was completely broken with a plain text cipher text attack in January 1998. During the idle time of 22,000 computers, coordinated over the internet, the whole key space was searched. Within 39 days, 85% of all possible keys were tried until the right one was found [CZ 98]. As an alternative to such software-based attacks, which are almost free, the development of special machines has been evaluated [Schn 96]: with the technology available in 1995, the product of cost and time needed (in seconds) to find a DES key would be 10^10. For 10^5 US$ you could build a machine that finds the key within 35 hours, for 10^6 US$ in 3.5 hours. In the past, computing power has doubled every 18 months, so this product could be halved every 18 months, and in the year 2000 the factor would be 10^9. In July 1998 the Electronic Frontier Foundation published that one could build a special machine for breaking DES in 3 days for less than 250,000 US$ [EFF 98]. Using these upper bounds, the real product of 250,000 · 3 · 24 · 60 · 60 = 6.48 · 10^10 is slightly above the value that was estimated on paper for the technology available in 1995.

• The design criteria for the specific values of the substitutions and permutations (not given here, but you may look into the standard) are still kept under wraps. But [Copp 92][Copp2 94] reveal the secret almost completely.


Figure 3.39: Complementarity in the encryption function f of DES (the complements of Ri−1 and Ki cancel each other at the addition modulo 2 behind the expansion permutation E, so the substitution boxes S1 to S8 and the permutation P work on and produce the original values)

From a cryptographic and implementation point of view, the following generalizations address this criticism:

1. The 48 key bits of each of the 16 partial keys are entered directly instead of being derived from the 56 bit key. The key then is 16 · 48 = 768 bit long instead of 56 bit.

2. The content of the substitution boxes Sj (j = 1, .., 8) becomes variable. This enlarges the key by at most 8 · 2^6 · 4 = 2048 bits.

3. The permutations P and IP become variable. This increases the key length by at most 32 · ⌈ld 32⌉ + 64 · ⌈ld 64⌉ = 160 + 384 = 544 bits.

4. The expansion permutation becomes variable. This increases the key length by at most 48 · ⌈ld 32⌉ = 240 bits.

Altogether this amounts to at most 3600 bits. Regarding computation and memory consumption this is less of a burden today than the 56 bits were in 1977. While for the 1st generalization a random choice is recommended, this does not necessarily hold for the 2nd, 3rd and 4th generalization [BiSh 93].

To prevent false expectations about the security: while the "normal" DES can be broken with a chosen plain text cipher text attack (called differential cryptanalysis) using 2^47 plain text cipher text blocks, with the 1st generalization 2^61 are needed [BiSh3 91][BiSh4 91][BiSh 93]. Although this is far away from the naively hoped-for complexity of 2^768 − 1, it is very close to the optimal complexity of 2^64 − 1: every permuting block cipher with block length b can be broken with a complexity of 2^b − 1, since the attacker simply has every block he is interested in encrypted resp. decrypted.

3.7.8 Decryption performance

With a software implementation on an Apple Macintosh Classic (MC68000, 8 MHz) applying the 1st, 2nd and 3rd generalization of DES you can encrypt resp. decrypt at 100 kbit/s; on a PowerBook 180 (MC68030, 33 MHz) at almost 900 kbit/s [PfAß1 90][PfAß 93].

With a hardware implementation (standard cell design, 7155 gates, 2 µm technology) of DES with the 1st generalization you can encrypt resp. decrypt at 40 Mbit/s. Because the I/O is asynchronous (2-way handshake) and works only byte by byte, only 10 Mbit/s are achieved [KFBG 90].

3.8 Cryptographic systems in operation

To operate cryptographic systems it is useful to know about terms like

• block cipher vs. stream cipher, synchronous vs. self-synchronous,

• operation modes of block ciphers, i.e. the important constructions of self-synchronous and synchronous stream ciphers from block ciphers, and

• at least one efficient way to construct a collision-resistant hash function from block ciphers.

All these are covered in the following subsections.

3.8.1 Block Ciphers, Stream Ciphers

Given a finite alphabet

• a block cipher en-/decrypts only strings of fixed length, while

• a stream cipher can work with strings of variable length.

If we take the alphabet {0, 1} as a basis, the Vernam cipher (one-time pad), the s²-mod-n generator and GMR are stream ciphers, while RSA and DES are block ciphers. But if we use the alphabet {0, 1, 2, 3, ..., 2^64 − 1} and assume that several symbols are encrypted independently of each other, then DES is a stream cipher according to the above definition. The distinction between stream and block ciphers thus depends substantially on the alphabet used and is therefore relative in itself. For many applications the alphabet is specified (or at least canonical) and leads to a precise distinction.

134

3.8 Cryptographic systems in operation

Annotation: Some authors define stream ciphers more restrictively: for them a stream cipher has to have a given (one-time pad) or generated (s²-mod-n generator) key stream, which is added to the plain text resp. subtracted from the cipher text. For our purposes this restrictive definition does not seem necessary to me.

For stream ciphers [Hors 85, HeKW 85], messages are encoded as a sequence of symbols, so that single symbols of the alphabet are encrypted. These symbols are not necessarily encrypted independently; their encryption can depend:

1. on their position inside the message or, more generally, on all previous plain and/or cipher text symbols, or

2. only on a certain number of cipher text symbols directly in front of the current symbol.

Ciphers matching the first case are called synchronous stream ciphers, because en- and decryption have to be performed strictly synchronously. If a symbol gets lost or is added to the stream (loss of synchronization), decryption cannot take place without further measures; the communication partners have to re-synchronize. If en- and decryption depend not only on the position within the message but also on all previous plain and cipher text symbols, the encrypting and decrypting party have to re-synchronize even after simple changes of cipher symbols.

In the second case we talk about self-synchronous stream ciphers, because no matter how many symbols are interfered with, after at most a limited number of symbols the decoder has re-synchronized itself with the encoder.

Annotation: According to the above definition, each stream cipher is either synchronous or self-synchronous.56 Some authors give a more restrictive definition of synchronous, e.g. [Denn 82, page 136]: the key stream (cf. definition above) is generated independently from the plain text stream. Then there exist stream ciphers which are neither synchronous nor self-synchronous, like output cipher feedback, see picture 3.50. In my point of view this finer distinction into 3 categories is not necessary for what follows.

For each symmetric or asymmetric deterministic stream cipher, arbitrary nonempty strings x1, x2 and every pair of keys (c, d) resp. key k, the following applies:

If two strings are encrypted separately one after the other, the overall result is the same as if the two strings had been concatenated before encryption. Using the introduced notation, and given a collateral (i.e. parallel, mutually non-influencing) evaluation on both sides from the left, we obtain:

k(x1), k(x2) = k(x1, x2)   resp.   c(x1), c(x2) = c(x1, x2)

If other strings are encrypted in between the two separately encrypted strings, or if the concatenated string is encrypted as third in a row after the two separate ones (i.e. not collaterally), the given equations will most likely not hold.

56 To allow character-oriented encryption in the second case, we allow the dependency on predecessor symbols to be restricted to 0. Thus the example where DES is used as a stream cipher on the alphabet {0, 1, 2, 3, ..., 2^64 − 1} is covered, too.


3.8.2 Operation modes of block ciphers

To point out the connections between stream and block ciphers, and due to their importance for practical usage57, in the following we specify several (partly standardized) important operation modes of block ciphers, namely constructions of self-synchronous or synchronous stream ciphers from block ciphers. Even if we believe today that using a secure block cipher in these constructions usually yields secure stream ciphers, in my opinion two restrictive annotations have to be made.

1. Even for strong definitions of what makes a block cipher secure, I do not know any proofs (by reduction) of the security of the constructed stream ciphers, nor do I know quantifications of what "usually" means.

2. At least for some of these constructions, pathological counterexamples have been found. I will describe these in the following. It is up to the reader to assess the security of these systems.

Let us describe the standardized operation modes ECB, CBC, CFB and OFB first, and two generalizations, namely PCBC and OCFB, afterwards.

From each symmetric or asymmetric deterministic block cipher, self-synchronous stream ciphers can be designed by the following constructions:

• electronic code book, ECB [DaPr 89]

• cipher block chaining, CBC [DaPr 89]

• cipher feedback, CFB [DaPr 89]

Here, the alphabet underlying the terms block and stream cipher is changed for the modes ECB and CBC; for CFB it can be changed.

From each symmetric or asymmetric deterministic block cipher, synchronous stream ciphers can be designed by the following constructions:

• output feedback, short OFB [DaPr 89]

• plain cipher block chaining, short PCBC, in the literature also referred to as plain text-cipher text feedback or block chaining using plain text and cipher text feedback [EMMT 78, page 110, 111] and [MeMa 82, page 69]

• output cipher feedback, short OCFB

Here, the alphabet underlying the terms block and stream cipher is changed for the mode PCBC; for OFB and OCFB it can be changed.

Each of the 6 constructions is discussed in its own subchapter. These are followed by subchapters with annotations on expanding block ciphers (in the preceding subchapters only length-preserving block ciphers are discussed), on memory initialization, and an overview of the essential properties of all described constructions.

57 Up to this point we did not discuss whether DES can be used to encrypt messages which are not exactly 64 bit long. Likewise it is unclear whether DES can be used for authentication.


3.8.2.1 Electronic code book (ECB)

The simplest operation mode of block ciphers is to split long messages into blocks, so that each block can be en- and decrypted independently of all others, see picture 3.40. This mode is called electronic code book (ECB).

Figure 3.40: Operation mode "electronic code book" (cleartext block n is encrypted with the key to ciphertext block n and decrypted again, independently of all other blocks)

Its properties are canonical and therefore only tabulated briefly in picture 3.51. A disadvantage of deterministic block ciphers is the direct mapping of plain text patterns onto cipher text patterns for patterns filling a whole block58, see picture 3.41. Even for indeterministic block ciphers, patterns in the cipher text are mapped to corresponding patterns in the plain text; in practice the latter will nearly never happen, whereas regular patterns in uncompressed plain text appear frequently and have to be considered.59
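A minimal sketch of the mode in Python; encrypt_block and decrypt_block stand for any deterministic block cipher, and blocks are modeled as integers:

def ecb_encrypt(cleartext_blocks, encrypt_block, key):
    # every block is encrypted independently of all others; identical
    # cleartext blocks therefore map to identical ciphertext blocks
    return [encrypt_block(key, block) for block in cleartext_blocks]

def ecb_decrypt(ciphertext_blocks, decrypt_block, key):
    return [decrypt_block(key, block) for block in ciphertext_blocks]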

3.8.2.2 Cipher block chaining (CBC)

Cipher block chaining is illustrated in picture 3.42: before the encryption of a block takes place, the cipher text of the previous block (except for the first one) is added modularly to the plain text, and correspondingly subtracted before decryption.

58 For context-independent encryption (steps do not depend on previous ones) of semantic units there exists (in a certain sense) the possibility of breaking the code through learning: the attacker observes a message (cipher text) and the happenings in the real world. If the same message is observed again, he can assume that the same will happen as last time. A short example: an admiral of a fleet gives commands to all ships such as "turn to starboard", "full speed", "stop engines", etc. If all these commands are encrypted deterministically and mutually independently, without any connection to previous events, i.e. without serial numbers, time stamps, etc., then each command corresponds to exactly one cipher text. After a little while the attacker has learned the meaning of the cipher texts, regardless of how secure the block cipher is and of whether we work with a symmetric or asymmetric system. To impede such non-cryptographic breaking through learning, semantic units have to be encrypted in a context-sensitive, or even better indeterministic, way.

59 Assume a telefax is encrypted using a 64-bit block cipher in ECB mode, and assume that the most common color of pixels is white. Now assign 64 white pixels to the most frequent cipher text block and 64 black pixels to all others. With a fax machine with a resolution of 300 dpi, i.e. circa 12 pixels per mm, the encryption using a 64-bit ECB thus decreases the horizontal resolution from 0.08 to 5.3 mm; at least big fonts will still be readable. If the pixels are encoded not horizontally but in squares of 8x8 pixels instead, the resolution only changes from 0.08 to 0.67 mm. Then even small fonts should be possible to make out.


Figure 3.41: Block patterns with ECB (identical cleartext blocks, with block borders e.g. at 64 bit in DES, are mapped to identical ciphertext blocks; patterns which overlap block borders in general map to different ciphertext patterns)


This construction can be used for encryption as well as for authentication. For encryption it has the following advantageous (+), disadvantageous (−) and ambivalent (*) properties:

+ The usage of an indeterministic block cipher is possible. A suitable selection has to be made for the summation and subtraction, because for indeterministic block ciphers the block length of the cipher text is longer than that of the plain text.

+ If you use an asymmetric block cipher, the resulting stream cipher is asymmetric.

− The length of the encryptable units is limited by the block length of the used block cipher. These units cannot simply be mapped onto the units of the storage and transmission system. Special block boundaries have to be defined to make self-synchronization possible.

* Even for the slightest corruption in a unit of a cipher text block (or for transient errors in the en- or decoder), all plain text characters of this unit are interfered with probability

(|alphabet| − 1) / |alphabet|

(otherwise the block cipher would be useless). Additionally, the corresponding place in the following plain text block is interfered, because the interference is stored in the cipher text and used in the next round a second time (but in a different way).


Figure 3.42: Construction of a symmetric resp. asymmetric self-synchronizing stream cipher from a symmetric resp. asymmetric block cipher: block cipher with block chaining (the ciphertext block n−1, kept in a memory, is added to cleartext block n before encryption, e.g. bit by bit modulo 2, and subtracted after decryption; the number of characters carried on all lines corresponds to the block length)

Later plain text blocks are not influenced and are therefore decrypted correctly. Due to its limited memory depth, the decoder automatically synchronizes itself with the encoder.

The same holds if it is not a transmission error outside the encryption process but a transient error during encryption. This is remarkable, because starting at the location of the error a completely different cipher text is generated: the error is passed from one cipher text block to the next and is added to the next plain text block, which results in an interfered input to the block cipher and a completely different cipher text block. Nevertheless, already one block after the end of the transient error the decoder obtains the correct plain text. The same applies to transient errors in the decoder. Both for transient errors in the encoder and in the decoder we assumed above that the keys of the block cipher are not changed by the transient error; if they were, the plain text obtained by the decoder from this point on would be pseudo-random and therefore useless.
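A minimal sketch of CBC in Python, with XOR as the modular addition and subtraction; encrypt_block, decrypt_block and the start value of the memory are assumptions of the sketch:

def cbc_encrypt(blocks, encrypt_block, key, start_value):
    # the previous ciphertext block is added (bit by bit modulo 2 = XOR)
    # to the cleartext block before encryption
    out, prev = [], start_value
    for block in blocks:
        prev = encrypt_block(key, block ^ prev)
        out.append(prev)
    return out

def cbc_decrypt(blocks, decrypt_block, key, start_value):
    out, prev = [], start_value
    for block in blocks:
        out.append(decrypt_block(key, block) ^ prev)
        prev = block    # feedback of the ciphertext makes it self-synchronizing
    return out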

Because it is irrelevant for the result of the addition in picture 3.42 whether the plain text or the cipher text is changed, the same as described before applies for the slightest changes in the plain text. This motivates using the left part of the construction for authentication (picture 3.42):

+ The party who wants to authenticate encrypts the plain text and sends the last block obtained, together with the message, to the other party, who can now test whether the block he gets after encrypting equals the one sent with the message. If they are equal, the message is authentic. If shorter authenticators are wanted, a part of the last cipher text block can be sent.

* The used block cipher has to be deterministic.

− The length of the units to authenticate is limited by the block length of the used block cipher.

This mode for authentication is standardized in [ISO8731-1 87]. There it is specified that only the leftmost 32 bit are used as authenticator and that incomplete plain text blocks are filled with binary zeros.
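Correspondingly, a sketch of CBC used for authentication; only the last block is kept (per [ISO8731-1 87] one would then use only its leftmost 32 bit), and padding is omitted here:

def cbc_authenticator(blocks, encrypt_block, key, start_value=0):
    prev = start_value
    for block in blocks:
        prev = encrypt_block(key, block ^ prev)
    return prev      # last block = authenticator; sender and receiver compare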

Figure 3.43: Block cipher with block chaining used for authentication, constructed from a deterministic block cipher (sender and receiver both run the CBC encryption over the cleartext; the receiver compares the last block he computes with the last block sent along)

As described so far, an attacker is able to detect when two plain texts start identically, because then the cipher text blocks start identically as well.

This can be avoided by prepending a randomly chosen block to the real plain text60: as we just saw, all subsequent cipher text blocks are then always different. To make it more efficient, it is possible to choose the first cipher text block directly instead of encrypting a randomly chosen plain text block. This cipher text block must of course be made known to the decoder, i.e. sent as first cipher text block; the corresponding plain text block is a random number too and is thrown away by the decoder.

Pathological counterexample for encryption using cipher block chaining: addition and subtraction are performed modulo 2. The block cipher encrypts even blocks (last bit 0) securely to odd ones (last bit 1), and odd blocks insecurely to even ones, see picture 3.44. If we restrict the plain text domain to even blocks, i.e. view our block cipher as a block cipher expanding by one bit (see picture 3.45), we deal with a secure block cipher.

60 This makes the encryption indeterministic. We used a similar approach for asymmetric encryption in §3.1.1.1, annotation 2, and §3.4.4.

Nevertheless the resulting stream cipher is insecure, even for the restricted domain: for the last bit of the block cipher input, 0 ⊕ 1 = 1 holds (and encryption results in 0), then 0 ⊕ 0 = 0 (and encryption results in 1), etc., so every second input block of the block cipher is odd. The attacker is able to calculate these odd input blocks from the even cipher text blocks (which were encrypted insecurely) and then the corresponding plain text blocks by subtracting the previously observed cipher text blocks. The stream cipher is insecure with respect to every second plain text block and therefore insecure altogether.

Figure 3.44: Pathological block cipher (only secure for plain text blocks x1 x2 x3 ... xb−1 0 of length b which contain a 0 on the right hand side; these are encrypted to cipher text blocks s1 s2 s3 ... sb−1 1)

Figure 3.45: Block cipher expanding by one bit (a cleartext block of length b − 1 is expanded and encrypted to a cipher text block of length b; within this restricted domain the pathological cipher is secure)

3.8.2.3 Cipher Feedback (CFB)

Cipher feedback is shown in picture 3.46: not the plain text, but the content of a shift register is encrypted by the block cipher, and a part of the result is added modularly to the plain text (which gives the cipher text); correspondingly it is subtracted from the cipher text by the decoder to obtain the plain text. The cipher text is shifted into the shift register (feedback), which is where the name cipher feedback comes from. If the shift registers are initialized identically and the cipher text is transmitted accurately, then the shift registers of the en- and decoder hold the same values. In this case (and only in this case) the decoder returns the original plain text.
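A minimal sketch in Python, for the simple case where the feedback unit is the cipher text unit itself (r = a, no "select or complete"); b is the block length, a the length of the output unit, and encrypt_block is any deterministic block cipher:

def cfb_encrypt(units, encrypt_block, key, register, a, b):
    mask_b, mask_a = (1 << b) - 1, (1 << a) - 1
    out = []
    for x in units:                                   # a-bit cleartext units
        z = encrypt_block(key, register) >> (b - a)   # select a output bits
        c = (x ^ z) & mask_a                          # addition modulo 2
        out.append(c)
        register = ((register << a) | c) & mask_b     # shift ciphertext back in
    return out

The decoder is identical except that it receives c and outputs x = c XOR z; since only the encryption function of the block cipher is used, the result is a symmetric stream cipher.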

Picture 3.46 illustrates the general case: the cipher text is not directly shifted into the register; rather a selection is made, or even fixed values are added. Though the latter is specified in [DINISO8372 87], it does not seem useful to me with regard to cryptographic security, because the number of possible values in the shift register becomes limited. Since the function "select or complete" has to be specified publicly, an attacker would be able to detect equal values in the register; by calculating the difference between the two cipher texts he would obtain the difference between the two plain texts. For authentication especially the selection turns out to be problematic61, because single characters of the cipher text stream (and of the plain text stream) can be altered without influence on the register content and the further encryption process. Such changes to the plain or cipher text would not influence the authenticator and would not be detected.

Compared to cipher block chaining for encryption, cipher feedback has the following advantageous, disadvantageous and ambivalent properties:

+ Smaller units than those defined by the block length can be en- and decrypted. If the elemental unit of the storage or transmission system is used as encryption unit, self-synchronization is given without special labeling of "block boundaries".

− The used block cipher has to be deterministic.

− Independently of whether a symmetric or an asymmetric block cipher is used in this mode, the obtained stream cipher is always symmetric, because the decryption function, which differs for an asymmetric block cipher, is not used in this construction.

* Even for the slightest failure in a unit of the cipher text stream (or for transient errors in the decoder), characters of the plain text are influenced in the current and the following

⌈(block length − δ) / (length of the feedback unit)⌉

feedback units (δ denotes the distance of the error from the right border of the feedback unit; ⌈x⌉ denotes the smallest integer z with z ≥ x). The first faulty unit lies at the point where the error occurred; in the later ones all characters are interfered with probability

(|alphabet| − 1) / |alphabet|

61 Therefore the function "select or complete" was left out in picture 3.47, because a reduction of the number of possible values in the shift register is not useful.


Figure 3.46: Construction of a symmetric self-synchronizing stream cipher from a deterministic block cipher: cipher text feedback (the content of a shift register of block length b is encrypted with the key; a bits of the output are selected, a ≤ b, and added to the cleartext stream relative to a suitably chosen modulus; r bits of the ciphertext stream, r ≤ b, enter the shift register via "select or complete"; the decoder subtracts instead of adding)

For the reasons given for CBC, the left part of CFB can be used for authentication, see picture 3.47:

+ The authenticating party encrypts the plain text with CFB. Additionally, the final content of the shift register is encrypted using the block cipher62 and sent together with the plain text to the verifying party. There it is processed in the same way and the results are compared. In case of equality the plain text is authentic. If a shorter authenticator is sufficient, only a part of the output can be used.

62 If the content of the shift register is not encrypted separately with the block cipher (i.e. is used directly as authenticator), the attacker could change the last output unit undetected by adding the difference between falsified and original unit to the authenticator.

* The used block cipher has to be deterministic.

+ Units smaller than the block length of the block cipher can be authenticated.

Figure 3.47: Cipher feedback for authentication, constructed from a deterministic block cipher (both sides run the CFB encryption over the cleartext stream; the last content of the shift register, encrypted once more, serves as authenticator and is compared at the verifier)

3.8.2.4 Output feedback (OFB)

Output feedback is shown in picture 3.48. In contrast to cipher feedback, not the cipher text but the output of the block cipher is passed back into the shift register. This gives the name output feedback (OFB).63

As already in picture 3.46, the general case is shown in picture 3.48: the output of the block encryption is subject to selection or completion by defined values. Completion is possible, but does not seem useful from a cryptographic point of view: the number of possible values in the shift register is reduced, which means the number of elements the random number generator can generate before producing repetitions decreases. If the function "select or complete" is the identity, as described in [DaPa 83] and intended in [DINISO8372 87], a simple memory can be used instead of a shift register.

63 Described differently: in the upper part of the construction a pseudo-random generator built from the block cipher generates a pseudo-random number stream; in the lower part this stream is added to the plain text stream resp. subtracted from the cipher text stream. In §3.4.2 we called this construction one-time pad.
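A minimal sketch of this key stream generation in Python, for the case where "select or complete" is the identity (so a simple memory replaces the shift register); encrypt_block is any deterministic block cipher, b the block length, a the length of the output unit:

def ofb_key_stream(encrypt_block, key, register, a, b, n):
    # the block cipher output itself is fed back; the resulting stream of
    # n a-bit units is independent of the message
    stream = []
    for _ in range(n):
        register = encrypt_block(key, register)
        stream.append(register >> (b - a))   # select a output bits
    return stream

Encryption resp. decryption is then just the modular addition resp. subtraction (e.g. XOR) of this stream with the text streams.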

Output feedback has the following advantageous, disadvantageous and ambivalent properties:

+ The length of the en- and decryptable units is not restricted by the block length of the used block cipher. This simplifies the coordination with the units of the transmission and storage system.

− The used block cipher has to be deterministic.

− Independently of the usage of a symmetric or asymmetric block cipher, a symmetric stream cipher is obtained. The decryption function, which is different from the encryption function in asymmetric cryptography, is not used in the construction.

* Failures in a single character of the cipher text cause a single false plain text character; no error extension occurs [DaPr 89]. Depending on the application this can be advantageous (e.g. for encryption) or disadvantageous (e.g. for integrity). To prevent a wrong impression, a general property of synchronous stream ciphers (and therefore of OFB) has to be mentioned here: there is no failure extension only for alterations of cipher text characters. Lost or added cipher stream characters cause false plain text characters with a probability of

(|alphabet| − 1) / |alphabet|

until the system is re-synchronized. The same applies to transient failures in the feedback of the en- or decoder.

Pathological counterexample for output feedback: no selection or completion takes place. The block cipher (encryption in picture 3.48) maps plain text blocks onto cipher text blocks of the same length b and is defined as follows: choose a plain text block K which has no cipher text block assigned to it yet, and assign a free cipher text block S to it randomly. Then assign to the plain text block S the cipher text block K as its encryption. This construction produces a secure block cipher, because the assignment of cipher text blocks is "random".

Note that the plain text blocks just mentioned are, in picture 3.48, the contents of the shift register; they have nothing to do with the message to be secured, which is called plain text stream in picture 3.48.

Despite the secure block cipher, the stream cipher resulting from output feedback is insecure: it was constructed so that always the same two blocks are encrypted in alternating order, and accordingly the same two values are added to the plain text stream in the same alternation.


Figure 3.48: Construction of a symmetric synchronous stream cipher from a deterministic block cipher: output feedback (as in picture 3.46, but r bits of the block cipher output, not of the ciphertext, are fed back into the shift register via "select or complete")

3.8.2.5 Plain cipher block chaining (PCBC)

Plain cipher block chaining (PCBC) (cf. picture 3.49) can be used for applications requiring failure extension, even for single changes in the cipher text stream, with the above-mentioned probabilities. With PCBC the blocks do not only depend on the cipher text of the previous block; the plain text of the previous block is also taken into account. It is remarkable that for decryption no inverse of the function h, which combines the previous plain and cipher text block, is required. To construct secure symmetric and asymmetric synchronous stream ciphers from secure symmetric and asymmetric block ciphers, an appropriate function h has to be chosen, matching the modulus used for the operations addition and subtraction, cf. [MeMa 82] p. 69–71. In particular it is important not to choose for h simply the operations addition or subtraction themselves64. [MeMa 82] recommends the choice used in the example in picture 3.49.
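A minimal sketch of PCBC in Python, with XOR as the modular addition and subtraction; the function h, the block cipher and the initial memory values p0, c0 are parameters of the sketch (e.g., per the recommendation cited above, h = lambda p, c: (p + c) % 2**blocklength, but not simply the XOR used for the addition itself):

def pcbc_encrypt(blocks, encrypt_block, key, p0, c0, h):
    # h combines the previous cleartext and ciphertext block;
    # h never needs to be inverted for decryption
    out, prev_p, prev_c = [], p0, c0
    for p in blocks:
        c = encrypt_block(key, p ^ h(prev_p, prev_c))
        out.append(c)
        prev_p, prev_c = p, c
    return out

def pcbc_decrypt(blocks, decrypt_block, key, p0, c0, h):
    out, prev_p, prev_c = [], p0, c0
    for c in blocks:
        p = decrypt_block(key, c) ^ h(prev_p, prev_c)
        out.append(p)
        prev_p, prev_c = p, c
    return out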

This construction has the following advantageous, disadvantageous and ambivalent properties:

+ An indeterministic block cipher can be used. For an indeterministic block cipher the cipher text blocks are longer than the plain text blocks; therefore a suitable selection has to be made.

+ Using an asymmetric block cipher for the construction leads to an asymmetric stream cipher.

− The length of the encryptable units is defined by the block length of the used block cipher. This impedes an easy adaptation to the units of the transmission and storage system.

* A suitable choice of the function h, e.g. addition mod 2^blocklength, ensures that after even the slightest change in a block, all following cipher text blocks are disturbed in all symbols with a probability of

(|alphabet| − 1) / |alphabet|

The failure extension is therefore unlimited.

Unlimited failure extension can be used to achieve secrecy and authentication in one encryption step, by concatenating to the end of the message a block of fixed values, e.g. all bits 0. Decryption shows whether exactly this block is reproduced.

+ Generally the computational cost of applying the block cipher is higher than that of addition and subtraction. Compared to separate encryption and authentication (e.g. with CBC), where each block has to be transformed by the block cipher in two separate runs, this saves half of the cost.

Comparing "cipher block chaining" with "plain cipher block chaining", two points have to be mentioned:

64 The attack refers to authentication in PCBC mode as described below. h is defined as ⊕. The initial value IP for the memory of plain text block n − 1, as well as the expected value RP of the redundancy block at the end of the plain text, are not fixed; there is just an agreement that they have to be equal. Now the attacker can make the receiver of a message M accept a different message as authentic: M consists of blocks M1, M2, M3, etc. The attacker wants to change all plain text blocks by a value V of his choosing. The attacker catches IP and the cipher text, alters IP to IP ⊕ V and sends both to the receiver, who now obtains the blocks N1 ⊕ V, N2 ⊕ V, N3 ⊕ V, etc. For an even number of blocks (incl. RP), RP has the "correct" value IP ⊕ V. If we operate bit by bit modulo 2, it even works for an uneven number of blocks. (Slightly generalized attack from [MeMa 82, page 416].)


Figure 3.49: Construction of a symmetric resp. asymmetric synchronous stream cipher from a symmetric resp. asymmetric block cipher: block cipher with cipher text and plain text block chaining (memories for ciphertext block n−1 and cleartext block n−1 feed an arbitrary function h, e.g. addition mod 2^blocklength; the result is added to cleartext block n before encryption resp. subtracted after decryption; the number of characters carried on all lines corresponds to the block length)

• The constructions are very similar: both use en- and decryption of the block cipher and operate invertibly on the plain text. In particular, the latter construction contains the former (choose a function h that returns the previous cipher text block and ignores the plain text block).

• From this similarity the same advantages and disadvantages derive. The ambivalent property "failure extension" can be divided into limited (self-synchronizing stream cipher) and potentially unlimited (synchronous stream cipher). The failure extension can be potentially unlimited because the function h can use the previous plain text block, which at the decoder's side can depend on all previous cipher text blocks.

Picture 3.49 makes a comparison between PCBC and CBC easy. In an implementation you would not store the plain text and cipher text block n − 1 separately, but the output of the function h. This saves memory on the one hand and decreases the cost of transmission during the initialization of the memory on the other.

3.8.2.6 Output cipher feedback (OCFB)

What was said about "cipher block chaining" and "plain cipher block chaining" applies analogously to "cipher feedback" and "output feedback":


• The constructions are very similar. In both, only the encryption function of the block cipher is used to generate a pseudo-random character stream, which achieves encryption by modular addition; for decryption, modular subtraction is used. Picture 3.50 shows a general construction, output cipher feedback (OCFB), which includes cipher feedback and output feedback given an appropriate function h. As in picture 3.49, there is no necessity to invert h for decryption.

• Derived from their similarity, the same advantages and disadvantages can be found. For the ambivalent property "failure extension" we distinguish between limited (self-synchronizing stream ciphers) and potentially unlimited (synchronous stream ciphers). The failure extension can be potentially unlimited because the function h can use the result of the block encryption, which in turn can depend on the full previous cipher text stream. A sketch of the generalization follows this list.
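As a sketch of the generalization, in the same style as the CFB and OFB sketches above; here the unit fed back into the shift register is computed by h from the block cipher output unit and the cipher text unit, so that h(z, c) = c yields CFB and h(z, c) = z yields OFB:

def ocfb_encrypt(units, encrypt_block, key, register, h, a, b):
    mask_b, mask_a = (1 << b) - 1, (1 << a) - 1
    out = []
    for x in units:
        z = encrypt_block(key, register) >> (b - a)   # select a output bits
        c = (x ^ z) & mask_a                          # modular addition
        out.append(c)
        fb = h(z, c) & mask_a                         # feedback unit via h
        register = ((register << a) | fb) & mask_b
    return out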

3.8.2.7 Annotations to expanding block ciphers and memory initialization

In pictures 3.42, 3.46, 3.48, 3.49 and 3.50, as well as in the associated descriptions, only the usual case is described: the block cipher maps plain text blocks onto cipher text blocks of the same length. For longer cipher text blocks a number of characters has to be selected. This selection should be made before the "memory for cipher text block n − 1". The same applies to picture 3.49. In pictures 3.46, 3.48 and 3.50 the existing selection function has to be modified.

Another topic that cannot be covered in depth here is the necessity of identical initialization of the memories. The memory for cipher text block n − 1 in picture 3.42 should be initialized with suitable values; the memories for cipher text block n − 1 and plain text block n − 1 in picture 3.49 have to be. The shift registers in picture 3.46 should be, those in pictures 3.48 and 3.50 have to be initialized with suitable values [MeMa 82, DaPr 89]. These statements apply when the systems are used to achieve secrecy. For authentication the memory has to be initialized identically before generating and testing the authenticator, but it can be initialized identically for all partners and messages. A more detailed view is given in exercise 3-18 d).

3.8.2.8 Summary on the properties of operation modes

Picture 3.51 gives a compact summary of the properties of the operation modes described before. While the text described the simple (standardized) modes ECB, CBC, CFB and OFB first, followed by the more general constructions, picture 3.51 distinguishes two groups according to similar properties.

The left group contains the modes where both functions (en- and decryption) of the block cipher are used and the block cipher is embedded into the direct transformation path from the plain text over the cipher text back to the plain text.

The right group is characterized by the generation of a pseudo-random stream using only the block cipher's encryption function; the decryption function of the block cipher is not used.


Figure 3.50: Construction of a synchronous stream cipher from a deterministic block cipher: cipher and output feedback66 (as in pictures 3.46 and 3.48, but the feedback unit of r bits is computed by an arbitrary function h from the block cipher output and the ciphertext stream)

The transformation path contains only addition and subtraction operations, as we already know them from the Vernam cipher.

These group-specific properties derive from the first three properties in picture 3.51. The mode listed at the right side of each group contains the two at its left. Therefore the error extension of the rightmost mode is only potentially unlimited, and it is suited for authentication in the same pass only if an appropriate function h is applied at the same time. The same applies to the question whether PCBC and OCFB are synchronous stream ciphers: for an appropriate choice of h they are.

In addition, the following 3 properties of operation modes are of interest for practical usage, regarding utilization and implementation:

Selective access: Is it possible to access parts of the cipher text and/or plain text selectively, i.e. can the cipher text of a selectively chosen part of the plain text be calculated directly (and/or vice versa)? Or is it necessary to calculate the cipher text resp. plain text starting from the beginning of the message?

Pipelining: Is it possible to run encryption and/or decryption in parallel, or do they have to be strictly sequential?

Pre-calculation of the block cipher: Is it possible to calculate the block cipher in advance, or do we have to wait until the plain text resp. cipher text is available to perform this by far computationally most expensive part?

The following connections exist between the properties of the operation modes and these 3 additional properties:

From self-synchronization follows selective access for decryption, because at least the part following the portion of the cipher text necessary for synchronization can be decrypted separately.

From selective access follows pipelining, because the independence of parts from each other makes parallelism possible.

For block ciphers directly located in the transformation path from the plain text over the cipher text back to the plain text, it is not possible to pre-calculate the block cipher; otherwise at least one block can be calculated in advance.

In total, this knowledge results in the 3 last rows of picture 3.51.

3.8.3 Construction of a collision-resistant hash function from block ciphers

In this section it is described how to construct a collision resistant hash function from a deterministic block cipher whose key is not much longer than its block length.

As already described for collision resistant permutation pairs in §3.5.1, finding collisions can only be difficult in the complexity-theoretic sense; information-theoretic difficulty is unachievable, because the attacker can always check all possibilities. The same applies to collision resistant hash functions: because they map arbitrarily long arguments onto values of fixed length, an arbitrary number of collisions must exist (that means different arguments are mapped onto the same value). To find a collision the attacker only has to (be able to) search persistently.

The best thing to happen would be an efficient construction which creates collision resistant hash functions from any block cipher and which provably is as secure as the block cipher used. Unfortunately no such construction is known so far; there are not even constructions fulfilling two of the three emphasized requirements. As secure collision resistant hash functions were already given in §3.6.3.3 (under the factorization assumption), we now set great store by efficiency.


usage of indeterministic block ciphers:
  ECB, CBC, PCBC: + possible; CFB, OFB, OCFB: − impossible

asymmetric block cipher produces:
  ECB, CBC, PCBC: + asymmetric stream cipher; CFB, OFB, OCFB: − symmetric stream cipher

length of encryptable units:
  ECB, CBC, PCBC: − specified by block length of the block cipher; CFB, OFB, OCFB: + arbitrary

error expansion:
  ECB: inside one block only; CBC: 2 blocks; PCBC: potentially unlimited;
  CFB: 1 + ⌈b/r⌉ blocks if the error is on the left hand side, otherwise possibly one block less;
  OFB: none on distortion; OCFB: potentially unlimited

also usable for authentication?:
  ECB: with redundancy in every block: yes; CBC: with deterministic block cipher: yes;
  PCBC: yes, even concelation in the same process; CFB: yes;
  OFB: with appropriate redundancy: yes; OCFB: yes, even concelation in the same process

random access possible?:
  ECB: yes; CBC: only on decryption; PCBC: no; CFB: only on decryption; OFB: no; OCFB: no

parallelizing possible?:
  ECB: yes; CBC: only on decryption; PCBC: no; CFB: only on decryption; OFB: no; OCFB: no

calculation of the block cipher in advance possible?:
  ECB, CBC, PCBC: no; CFB, OFB, OCFB: yes, for one block at a time

Figure 3.51: Properties of the operation modes


Therefore we show in picture 3.52 a "well explored", very efficient collision resistant hash function, which can be constructed from deterministic block ciphers whose keys, for security reasons, must not be longer than their block length [LuRa 89].

In [DaPr 84] and [DaPr 89, page 265] the following scheme for a collision resistant hash function constructed from a deterministic block cipher (e.g. DES) is proposed:

The plain text to be hashed is split into blocks whose length equals the key length of the block cipher. In each round one of these plain text blocks is used as key of the block cipher.

Input of the first round is an initial value; in the following rounds the result of the previous round is used as input. (In picture 3.52 the initial value is denoted as result of round 0.)

In each round not only encryption takes place; the block cipher's input is also added modulo 2 to its output. This stops the attacker from calculating backwards by choosing a key (i.e. a plain text block) and then inverting according to the chosen key (i.e. decrypting) to get the input of this round. Otherwise it would be possible for him (assuming the initial value can be chosen without restrictions) to calculate collisions just by doing this with 2 different keys.

The result of the last round is the output of the hash function. (To make it possible for others to test this hash value, the initial value has to be known. If it is not globally fixed, it has to be provided.)
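A minimal sketch of this construction in Python; encrypt_block stands for the deterministic block cipher, and message padding is omitted:

def hash_value(cleartext_blocks, encrypt_block, initial_value):
    # each cleartext block is used as *key* of the block cipher; the round
    # input is encrypted and added modulo 2 to the output, which blocks
    # backward calculation
    result = initial_value                      # "result of round 0"
    for block in cleartext_blocks:
        result = encrypt_block(block, result) ^ result
    return result                               # result of the last round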

In picture 3.52 the arrangement of the components is chosen as in picture 3.42, which makes it easy to spot the unusual way the plain text enters the process, namely as key.

It is important, no matter how the hash function is constructed, that the initial value is fixed or the message is encoded postfix-free67. Otherwise the following trivial collisions occur: suitable initial values for a not yet processed final part of the message are obtained as result of the previous round.

This weakness can be overcome by storing the length of the message in bits in the last plain text block; then it can be determined how many bits were added by padding [LaMa 92, page 57].

Also, for block ciphers with output length b, it is possible to calculate collisions by brute force search with about 2^(b/2) attempts (this applies, e.g., to the construction just given using a block cipher of block length b). This derives from the so-called birthday paradox: for two people having their birthday on the same day, not 365/2 but only about √365 people are required [DaPr 89, page 279-281]. Therefore DES is insufficient for many applications using the specified construction, because 2^32 attempts by many possible attackers cannot be ruled out. Even a block length of 128 bit will not be sufficient in the near future, because soon 2^64 attempts will not be an obstacle for clever attackers. Accordingly, collision resistant hash functions should provide results of at least 160 bit length.

67 Postfix-free: no final part of a postfix-free encoded message may be the postfix-free encoding of another message.


Figure 3.52: Construction of a collision resistant hash function from a deterministic block cipher (cleartext block n is used as key; the provisional result of round n−1 is encrypted and added modulo 2 to the encryption output; the last block yields the hash value; the length of the cleartext blocks equals the key length of the block cipher, which for security reasons should not be substantially longer than its block length; the number of characters carried on all lines corresponds to the block length)

A precise classification of collision resistant hash functions according to which types of attack (analogous to §3.1.3.1 and §3.1.3.2) they resist, and a few remarks about the dependence of the hash function's collision resistance on the collision resistance of a single iteration, can be found in [LaMa 92].

3.9 Outline of further systems

3.9.1 Diffie-Hellman key exchange

So far only asymmetric cryptographic systems whose security relies on the factorization assumption have been discussed, cf. §3.4.1. The following procedure for calculating shared secret keys from public information only (Diffie-Hellman key exchange [DiHe 76]) relies on the discrete logarithm assumption and has become of both theoretical and practical importance in recent years:

From version 5 on, Pretty Good Privacy (PGP) uses a variation of the Diffie-Hellman key exchange (the ElGamal system of secrecy with individual parameter selection, cf. exercise 3-23c) in a hybrid cryptographic system, cf. §3.1.4, instead of RSA, because, in contrast to RSA, the corresponding patent had expired.

The Diffie-Hellman key exchange also allows steganography with public keys, cf. §4.1.2.


First a few fundamental principles, especially what generators of Z∗p are and how we can obtain them:

An element g of a multiplicative group G is called a generator of G if the powers of g create the whole group G. This means that for each element h of G there is an integer i with 0 ≤ i < |G| so that g^i = h. A group with at least one generator is called cyclic.

For each prime number p, Z∗p is a cyclic group. It is easy to choose Z∗p and a generator g randomly [MeOV 97, page 163]:

1. Choose a prime number p randomly (cf. §3.4.1).

2. Factorize |Z∗p| = p − 1. Let p − 1 = p1^e1 · p2^e2 · ... · pk^ek, where p1, ..., pk are pairwise distinct prime numbers. If the factorization does not succeed within an acceptable time, go to 1.

3. Choose an element g in Z∗p randomly.

4. For i from 1 to k:
      calculate b := g^((p−1)/pi);
      if b = 1, go to 3.
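Steps 3 and 4 in Python, as a minimal sketch; prime_factors is assumed to be the list of pairwise distinct prime factors p1, ..., pk of p − 1 known from step 2:

import random

def random_generator(p, prime_factors):
    while True:
        g = random.randrange(2, p)                 # step 3
        if all(pow(g, (p - 1) // pi, p) != 1       # step 4
               for pi in prime_factors):
            return g

For example, random_generator(23, [2, 11]) returns a generator of Z∗23, since 23 − 1 = 2 · 11.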

Steps 1 and 2 choose a group Z∗p for which the prime factorization of the number of elements is known. In order to increase the efficiency of the factorization in step 2, and to ensure that p − 1 has at least one big prime factor (which is desirable for security against the calculation of the discrete logarithm), an alternative to steps 1 and 2 is described below:

Generate a prime number p1 of length l − const, where l is the security parameter.

Search for prime numbers among the numbers p := f · p1 + 1 with |f| = const.

Factorize f. (This is easy because f is not too big.) Let f = p2^c2 · ... · pk^ck.

This procedure changes the distribution of the choice of the prime number p. But as far as we know, this does not matter.

Steps 3 and 4 choose a generator within the group Z*_p of which the prime factorization of the number of elements is known:

Let h be an arbitrary element of the multiplicative group G.

The powers of h form a subgroup of G, as h^x • h^y = h^(x+y) and (h^x)^(−1) = h^(−x).

Lagrange's theorem establishes: If H is a subgroup of G, then |H| is a factor of |G|.^68

68 This was already used in §3.4.1.2 to prove Fermat's little theorem.


If H is a proper subgroup of G, i.e. |H| < |G|, and |G| = p1^e1 • p2^e2 • ... • pk^ek, then ∃i, 1 ≤ i ≤ k: |H| is a factor of |G|/pi.

For this i it holds: h^|H| = 1, and especially h^(|G|/pi) = 1.

If h is a generator of G, then h^(|G|/pi) must always have a value ≠ 1; otherwise h would not create a group with |G| elements, but a group with at most |G|/pi elements. The reason is that from the value 1 on, only repetitions and no new elements are created when the power is increased.

Therefore the above algorithm recognizes in step 4 whether g is a generator of Z*_p or not.
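A minimal sketch of steps 3 and 4 in Python, assuming the prime factorization of p − 1 (steps 1 and 2) is already known; the function names are mine, not from the text:

    import random

    def is_generator(g, p, prime_factors):
        # Step 4: g generates Z*_p iff g^((p-1)/q) != 1 (mod p)
        # for every prime factor q of p-1 (the Lagrange argument above).
        return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors)

    def find_generator(p, prime_factors):
        # Steps 3 and 4: sample g randomly until the test succeeds.
        while True:
            g = random.randrange(2, p - 1)
            if is_generator(g, p, prime_factors):
                return g

    # toy example: p = 23, p - 1 = 2 * 11
    g = find_generator(23, [2, 11])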

The Diffie-Hellman key exchange relies on the difficulty of calculating discrete logarithms, i.e. logarithms modulo a prime number. Let p be a prime number, g a generator of Z*_p, the multiplicative group modulo p. Let

g^x = h mod p

Then x is called the discrete logarithm of h to the base g modulo p:

x = log_g(h) mod p

Assumption on the discrete logarithm: According to §3.1.3.4 it is not provable at the moment that it is hard to calculate discrete logarithms. An assumption has to be made which says that for each fast (discrete-logarithm-)algorithm L, the probability that L is capable of calculating the discrete logarithm decreases fast with the length of the modulus.

Assumption: For each (probabilistic) polynomial algorithm L and each polynomial Q there exists an L0 such that for all l ≥ L0: If p is a randomly chosen prime number of length l, g is randomly chosen from the set of generators of Z*_p, x is randomly chosen from Z*_p, and g^x = h mod p, then

W(L(p, g, h) = x) ≤ 1/Q(l).

Diffie-Hellman key exchange: p and g are public and fixed. Participants choose their private keys randomly from the elements of Z*_p, calculate g to the power of this element mod p and publish the result^69. Each communication partner can ”exchange” his own public key by simply raising the other's public key to the power of his own private key. Due to the commutativity of the exponential function, and therewith of the modular exponential function, both get the same secret key, cf. picture 3.53.

Now we hope that nobody other than these two is capable of calculating the secret key k = g^(xy) mod p. This is the Diffie-Hellman assumption. It is a stronger one than the assumption on the discrete logarithm, because if someone is able to calculate the discrete logarithm, he will get x from g^x and y from g^y, and with this one can easily get g^(xy) mod p from g^x mod p and g^y mod p.


[Figure: two participants each generate a random number (x resp. y) by key generation inside their secret zones, publish g^x mod p resp. g^y mod p (publicly known: p and g), exchange these values and each calculate the common key; the calculated keys are equal, because (g^y)^x = g^(yx) = g^(xy) = (g^x)^y mod p.]

Figure 3.53: Diffie–Hellman key exchange

The other way around, it could not be shown that x and y can be derived from g^x mod p, g^y mod p and g^(xy) mod p.

It is remarkable that the key does not have to be transmitted through the network, but can be calculated locally by the communication partners. This is important for applications using steganography and leads to public key steganography, cf. §4.1.2.

For Diffie-Hellman key exchange, the public keys of the participants are stored in public key registers instead of the public encryption keys. Apart from that everything works as shown in picture 3.5 and question 3-4.
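For concreteness, a sketch of the exchange using Python's built-in modular exponentiation; the toy parameters are my choice and far too small for real use:

    import secrets

    # toy parameters only; real deployments use primes of >= 2048 bits
    p, g = 23, 5          # 5 is a generator of Z*_23

    def keypair(p, g):
        # private key x chosen randomly; public key g^x mod p is published
        x = secrets.randbelow(p - 2) + 1
        return x, pow(g, x, p)

    x, gx = keypair(p, g)     # participant A
    y, gy = keypair(p, g)     # participant B

    # each side raises the other's public key to its own private key
    k_A = pow(gy, x, p)       # (g^y)^x mod p
    k_B = pow(gx, y, p)       # (g^x)^y mod p
    assert k_A == k_B         # both obtain k = g^(xy) mod p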

3.9.2 Signatures unconditionally secure for the signer

The security of digital signature systems has to be seen as ”asymmetric”:

• unconditionally secure for the recipient of the signature – given the recipient possesses a pair (text, signature) which tests positive, he is able to convince everybody that this text was signed by the owner of the key pair (s, t);

• only cryptographically secure for the signer – because it is possible for an attacker to calculate the unique signature s(x) for a text x and a public key t, cf. picture 3.54.

69 The publishing must be authentic, cf. question 3-4. This can be ensured by multiple certification of the public key, cf. §3.1.1.2.


[Figure: message space and signature space; s maps the message x to its unique signature s(x), which tests ”true” under t.]

Figure 3.54: Usual digital signature system: There is one and only one signature s(x) for one message x and one validation key t (and possibly one signature environment, see GMR).

How should our poor signer prove that he did not calculate s(x) himself, even though he is capable of doing so?

The security can also be divided ”asymmetrically” the other way around. For a signer using an unconditionally secure authentication system (pictures 3.55 and 3.56) the following holds:

• unconditionally secure for the signer – for each public testing key there exist many secret signing keys, which in turn are loosely distributed in the space of signing keys. A signer restricted by means of complexity theory knows only one of all these keys, namely s in picture 3.55. Using s he can produce exactly one signature s(x). A signature generated by a complexity-theoretically unrestricted attacker is different from s(x) with a probability of

1 − 1/(number of all possible signatures)

The signer has the possibility to check with his own secret key s whether s′(x) = s(x), cf. picture 3.57. If not, i.e. s(x) ≠ s′(x), the signer has the proof that the signature got falsified and the complexity-theoretic assumption got violated. One should then no longer trust the assumption, and e.g. use longer prime numbers. Moreover the liability can be defined in a way that the risk is on the recipient's side.

• only cryptographically secure for the recipient – because it is possible that an attacker or the actual signer finds a proof of forgery.


[Figure: message space and signature space as in figure 3.54, but a whole set of signatures tests ”true” for x under t.]

Figure 3.55: Signature system unconditionally secure for the signer: For one message x and one validation key t there are multiple signatures (gray area), which however are sparse within the signature space.

[Figure: as figure 3.55; besides the signer's own signature s(x), a second valid signature s′(x) for x is shown.]

Figure 3.56: Proof of forgery in the form of a collision in a signature system unconditionally secure for the signer


[Figure: signer, receiver and court. Key generation from a random number yields the publicly known key t for signature verification and the secret key s for signature creation. The signer signs the text x; the receiver verifies text and signature x, s(x) with t (”ok” or ”faked”); the court verifies text, signature and verification result (”accept” or proof of fake); from x, s(x) and s the signer can generate a proof of fake.]

Figure 3.57: Fail-stop signature system (glass box with a lock; there is only one key for putting things in. The glass can be broken, but it cannot be replaced unrecognizably.)

The main advantage of a signature system like the one described is that one gets a proof if the system is broken (fail-stop signature system). Therefore responsibility and damage can be shared arbitrarily: each complexity-theoretically restricted participant (including the legitimate signer, the generator of the key pair (t, s)) is able to produce a proof of forgery only with negligible probability.

More about this kind of authentication systems can be read in [PfWa 91, Pfit8 96] as well as in [WaPf 89, WaPf1 89, Pfit 89, Bleu 90, BlPW 91, HePe 93].

3.9.3 Unconditional secure pseudo-signatures

Meanwhile there exists a system which has almost the same properties as digital signatures and is information-theoretically secure, i.e. signatures cannot be falsified (except for the exponentially small probability that the attacker guesses right, which all authentication systems have in common). Unfortunately, up to now it is quite impracticable. More about this kind of system can be found in [ChRo 91].


[Figure: as figure 3.57, but signature verification is an interactive protocol between signer and receiver; key generation from a random number yields the publicly known verification key t and the secret creation key s.]

Figure 3.58: Signature system with undeniable signatures (black box with a spyhole; there is only one key for putting things in. The key owner controls access to the spyhole.)

3.9.4 Undeniable signatures

Usual digital signatures can be copied perfectly and therefore shown around without restrictions. The idea is to involve the supposed signer in the testing procedure using an interactive protocol, instead of testing signatures by an autonomous deterministic algorithm, cf. picture 3.58. This way the signer gets to know if his signature is to be shared with a third party (”share” means not only to hand over the knowledge of the bit string, but also to achieve certainty that it is a valid signature). The important thing about the protocol is that it is impossible for the signer to deny a valid signature (therefore the name coined by David Chaum: undeniable): if he cheats, he will get caught with a probability close to 1. If he refuses to participate in the protocol, even though he is obliged to, this counts as a proof of the authenticity of the signature.

3.9.5 Blind signatures

The aim of this type of signature system is that signers are able to sign without any knowledge of what they sign. At first this might sound completely pointless, because the signer should be interested in what he signs. For some applications the opposite applies: there it is only important to the signer that he signs. He is not interested in what he signs – he even must not be interested in it.

An example could be a digital payment system, where the signer creates notes of certain values by signing them with the appropriate key (a random number of suitable length and redundancy). He just has to realize and to be sure that he only creates one note of this value. Now he can demand the appropriate amount of money for it.


[Figure: the recipient blinds the text x to z′(x) and sends it to the signer; the signer, holding the secret creation key s (the verification key t is publicly known, both from key generation with a random number), returns z′(x), s(z′(x)); the recipient unblinds and verifies.]

Figure 3.59: Signature system for blind signatures

The one who buys the bank notes by means of getting a blind signature for them does not want information about himself to be transferred. Then the issuer cannot analyze payments in digital payment systems. More about this application can be found in §6.4.

Picture 3.59 shows the procedure of blind signatures:

As usual, the signer creates a pair of keys (t, s) and t is made public.

Different from previous procedures, the text x to be signed is generated by the recipient of the signature (not by the signer) – if necessary, redundancy is added (not shown in picture 3.59) – and it is masked by a random number z′, i.e. encrypted by certain means.

The masked text z′(x) is transmitted to the signer for signing.

The signer signs z′(x) with s and returns z′(x), s(z′(x)) to the recipient of the signature.

Only the recipient knows the random number, which prevents parties overhearing the transmission – and even the signer himself – from determining the use of the blindly given signature. The recipient unmasks z′(x), s(z′(x)) with z′ and gets the signature s(x).

Now one can test with t whether the signature s(x) fits x.

Blind signatures with RSA:


The attack by Davida in the variant of Judy More, cf. §3.6.3.1, this time used for good:

Key generation, adding redundancy to texts (has to be performed before masking), signing and testing follow exactly the description in §3.6.3.5.

Choose the random number z′ uniformly distributed and randomly out of Z*_n.

Masking is performed by modular multiplication of the text, including the added redundancy, with z′^t. The result is the masked text x • z′^t mod n.

(Annotation: For x ∈ Z*_n the signer does not learn anything about x in the information-theoretic sense, because z′^t is uniformly distributed in Z*_n. Otherwise the generator of x is not dependent on the signer: gcd(x, n) gives a non-trivial factor of n. With this the generator of x has broken RSA completely, cf. §3.1.3.1 and §3.6.1, in a way that he is now able to sign on his own.)

The masked text with signature, (x • z′^t)^s mod n, is derived by modular exponentiation of x • z′^t with s.

Unmasking is achieved by division by z′: (x • z′^t)^s • z′^(−1) mod n = x^s • (z′^t)^s • z′^(−1) mod n = x^s • z′ • z′^(−1) mod n = x^s.

The blindly signed text with signature is: x, x^s.
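Assuming a textbook-sized toy RSA key and helper names of my own choosing (and omitting the redundancy added to x), the steps above can be checked end to end:

    from math import gcd

    # toy RSA key (textbook size only): n = 61 * 53,
    # public test key t, secret signing key s with t*s = 1 (mod phi(n))
    n, t, s = 3233, 17, 2753

    def blind(x, z):
        # recipient masks x by multiplying with z^t mod n (z uniform in Z*_n)
        assert gcd(z, n) == 1
        return (x * pow(z, t, n)) % n

    def sign(masked):
        # signer exponentiates blindly with the secret key s
        return pow(masked, s, n)

    def unblind(sig_masked, z):
        # division by z: (x * z^t)^s = x^s * z (mod n), so multiply by z^(-1)
        return (sig_masked * pow(z, -1, n)) % n

    x, z = 42, 99                        # message and blinding value
    sig = unblind(sign(blind(x, z)), z)
    assert sig == pow(x, s, n)           # equals the ordinary signature x^s mod n
    assert pow(sig, t, n) == x           # verifies with the public key t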

More about this type of signature system can be found in [Chau 83, Cha1 83, Cha1 88, Cha8 85, Chau 89, Schn 96].

3.9.6 Threshold scheme

Using threshold schemes (or secret sharing schemes), the holder of a secret S can split it into n parts such that at least k parts are necessary to reconstruct the secret efficiently, while k − 1 parts do not provide any information about the secret.

Adi Shamir's polynomial interpolation

Let the secret S be an element of Z_p, p prime.^70

The holder chooses a polynomial q(x) of degree k − 1:

He chooses the coefficients a1, a2, ..., a(k−1) randomly, uniformly and independently in Z_p.

He defines q(x) := S + a1•x + a2•x^2 + ... + a(k−1)•x^(k−1), where all operations take place in Z_p.

70 Watch out: p must not be chosen in close dependence on S. On no account choose the next prime number greater than S: otherwise the attacker already gets much information about S from the public prime number p.


He calculates all n parts (i, q(i)) with 1 ≤ i ≤ n, for n < p, in Z_p.

From k parts (x_j, q(x_j)) (j = 1, ..., k) the polynomial q(x) can be calculated as follows (Lagrange polynomial interpolation [Denn 82, p. 180]):

q(x) = Σ_{j=1}^{k} q(x_j) • Π_{m=1, m≠j}^{k} (x − x_m)/(x_j − x_m)  mod p

The secret S is recovered as q(0).

Proof outline:

1. With knowledge of only k − 1 parts (x_j, q(x_j)) the threshold scheme gives no information about S, because for each possible value of S there always exists exactly one polynomial of degree k − 1 which goes through these k − 1 points and (0, S).

2. The formula above calculates the correct polynomial: it has the correct degree (each summand is of degree k − 1 and summation does not change this) and returns the correct value q(x_j) for each x_j: Substituting x_j for x in the sum, the product for index j evaluates to 1 (because each single quotient evaluates to 1, and so does their product) and for every other index to 0 (because one factor is 0). Multiplying this 1 by q(x_j) gives q(x_j) in total.

Annotation: It is not compulsory to choose the values of the polynomial at positions 1 to n. Any other choice of pairwise distinct positions ≠ 0 is suitable too.

For large secrets one does not want to calculate modulo a large number. The secret can be split up into smaller parts. Then the protocol above needs to be applied to all parts separately.
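A compact sketch of the splitting and the Lagrange reconstruction in Python (the prime and the helper names are my choice; a cryptographic RNG would be used in practice):

    import random

    def split(secret, n, k, p):
        # split a secret in Z_p into n parts (i, q(i)); any k reconstruct it
        coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
        q = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
        return [(i, q(i)) for i in range(1, n + 1)]

    def reconstruct(parts, p):
        # Lagrange interpolation at x = 0, all arithmetic mod p
        secret = 0
        for j, (xj, yj) in enumerate(parts):
            num, den = 1, 1
            for m, (xm, _) in enumerate(parts):
                if m != j:
                    num = num * (0 - xm) % p
                    den = den * (xj - xm) % p
            secret = (secret + yj * num * pow(den, -1, p)) % p
        return secret

    p = 2**61 - 1                          # a prime, chosen independently of the secret
    parts = split(1234567, n=5, k=3, p=p)
    assert reconstruct(parts[:3], p) == 1234567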

Often the occasion arises that not only confidentiality has to be guaranteed for our secret; long-term accessibility has to be ensured for its owner as well. Whoever wants to protect himself from loss of his secret S by force majeure, but does not want to entrust it to a single party, could distribute the parts of the secret to his n friends, a different part each. In case of a loss of his secret, he requests k friends to send a copy of their share. From these the owner is able to calculate S efficiently (generally at quadratic cost in k). Plots of at most k − 1 friends cannot reconstruct S behind the back of its owner, no matter how long they compute. An owner who chooses k nearly as big as n might lose the ability to reconstruct his secret if only a few parts are lost at his friends' (through force majeure or just malice). On the other hand, such a choice makes unauthorized reconstruction nearly impossible. Vice versa, a small k in relation to n would decrease the problem of too many parts getting lost on the friends' side, and therewith the danger of losing the whole secret; but then it would be easier to reconstruct it without authorization by compromising only a few of the secret keepers. Through a suitable choice of k and n, the owner can adapt the decomposition of his secret to his needs.


Proofs are given in [Sham 79]; an extension to information-theoretic security for the detection of changes of parts can be found in [ToWo 88].

The owner has to sign all n parts (i, q(i)) of the secret to counteract violations of integrity with complexity-theoretic strength. Only parts whose signature tests positive are used to reconstruct the secret. Assuming that the signatures were not falsified, nobody can trigger the reconstruction of false secrets or make the reconstruction computationally expensive. The former would come into play if only k parts are available and one of them is incorrect. The latter would arise if k + 1 + v parts are available, where k + 1 are correct and v are incorrect: for lack of information about which parts are incorrect, all sets of k out of k + 1 + v would have to be checked to determine a majority, which proves to be computationally very expensive.

Everything said about the threshold scheme so far is based on the assumption that the holder of the secret is not malicious, i.e. generates n parts which fit together to one secret. Only the environment, particularly some of the keepers of the n parts, was treated as a potential attacker. If the generator joins the group of attackers, additional properties of the threshold scheme are demanded:

1. consistent result

2. predefined result

Recommended readings

History of Cryptography

An Overview from 1900 till 1999 from a German point of view: [Leib 99]

Textbooks on the mathematical background used:

A real introduction to algebra, especially for computer scientists, relatively understandable: [Lips 81]

Elementary presentation of the basics of elementary number theory, especially the Euclidian algorithm: [Lune 87]

Short and nice to read, some old terms used: [ScSc 73]

For people who want to know the fast algorithms in detail: [Knut 97]

In close connection to cryptography: [Kran 86, MeOV 97]

Textbooks for cryptography

Detailed classical structure: [Denn 82, DaPr 89, Hors 85]

Modern structure but very short: [Bras 88]

Modern structure, very detailed and practically relevant: [Schn 96]

Modern structure, very detailed and theoretically profound: [MeOV 97]


Current research

Journal of Cryptology, Springer-Verlag, Heidelberg

Conference volumes of Crypto, Eurocrypt, Asiacrypt; conference volumes in LNCS, Springer-Verlag, Heidelberg

The Theory of Cryptography Library: access via URL http://philby.ucsd.edu/cryptolib.html; email should be sent to [email protected]

References to standards, patents, products, and comments on current developments:

http://www.rsa.com

http://www.rsa.com/rsalabs

http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher or subscribe to [email protected]

Information about the legal situation concerning the use, export and import of cryptography:

http://cwis.kub.nl/~frw/people/koops/lawsurvy.htm


4 Basics in Steganography

Steganography is the old art and the young science of hiding secret information in larger, harmless looking files.

Figure 4.1 illustrates the differences to cryptography for the protection goal of confidentiality. If good cryptography is used, the aggressor notices that he cannot understand the cryptotext and will hence presume that the communication is confidential. But if good steganography is used, the aggressor will think that the stegotext is a plausible message which he completely understood. He does not notice any confidential communication. In the displayed example, the cover (the original holiday picture: balloon over savannah) is as plausible as the stegotext (the slightly modified holiday picture: a just as beautiful balloon over a just as atmospheric savannah, in the same photographic quality).

The following often applies to figure 4.1: cover* = stegotext, i.e. the receiver cannot and does not want to reconstruct the value of cover before the embedding, and takes stegotext as cover* or completely ignores this output parameter of the stegosystem^1.

The old art of steganography is for example composed of the use of

• secret inks, which are invisible after writing but which can be chemically or thermally made visible again by the skillful receiver,

• microdots, which are photos minimized to the size of dots, which are then pasted as dots of the letter i into harmless texts and can be viewed under a microscope, as well as – for reasons of human rights and the real-time requirements of modern telecommunication no longer practical –

• slaves. Their skull was shaved and the message that should be transmitted was written on the skin of their heads. Then one waited until the hair had regrown and sent the slave to the receiver together with the order to say the following to the receiver: ”Master, cut my hair”.

Alternatively, messages that needed to be transferred secretly were

• swallowed by the carrier2 or fed to animals3 as well as

1 As cryptosystem is often said instead of cryptographic system of secrecy, stegosystem is also often said instead of steganographic system of secrecy. Purists think it should be steganosystem and are certainly right. But stegosystem is even shorter and is hence usually used.

2 This is out of fashion for secret communication but still common for transporting drugs. Hopefully the packing does not break up; otherwise the message in the former case, or the carrier in the latter case, is lost.

3 This has advantages concerning speed compared to feeding messages to carriers, as one can quickly gain the message by slaughtering the animals.


[Figure, upper half – cryptography (confidentiality): the sender encrypts the plaintext (the secret message) with a key into the ciphertext ”Mfkdosi ghHwgjq%a”, which the attacker sees on the line; the receiver decrypts with key′. Lower half – steganography (confidentiality of confidentiality): the sender embeds the content (the secret message) into a wrap using a key; the attacker sees only the plausible stegotext; the receiver extracts content and wrap* with the key.]

Figure 4.1: Cryptography vs. steganography

• written on the underside of wax boards, so that the dedicated receiver only had to remove the wax.

A good historical overview is given in [Kahn 96].

The young science of steganography uses computers to embed secret data, for example in digitized pictures, video signals or sound signals. This can serve two purposes:

The purpose of steganographic systems of secrecy is to hide not only the secret message, but even its existence (compare figure 4.2). This is helpful if confidential communication is suspicious, unwanted or even illegal^4.

Steganographic authentication systems have the goal to encode information about the authors and/or the holder of the rights of usage into digital(ized) works in such a way that it cannot be removed without destruction of the work, and hence help to protect the copyright. If the work is heavily altered, the embedded information about the originator will of course not be extracted accurately (compare figure 4.3).

4 The development of steganographic systems of secrecy was pushed by the – since 1993 openly led – discussion about the limitation or even the interdiction of the use of cryptography for confidential communication. This limitation or interdiction should ease the monitoring of potential offenders.


[Figure: steganographic system of secrecy; the sender embeds the secret message (content) into a wrap with a key, the attacker sees only the stegotext, the receiver extracts content and wrap* with the key. Annotations: the content is extracted exactly equal; the embedding is not noticeable; as much content as possible; no changes to the stegotext.]

Figure 4.2: Steganographic system of secrecy

[Figure: steganographic authentication system; the sender embeds the original information into a wrap with a key, the attacker may change the stegotext heavily, the receiver extracts possibly distorted information (”o?igin?l in?orm.”) with the key. Annotations: possibly heavy changes; correlation is enough; some 100 bits suffice.]

Figure 4.3: Steganographic authentication system

The systematics of steganographic systems is developed in 4.1. 4.2 deduces necessary conditions for information-theoretically secure steganography. Two opposed stego-paradigms are discussed in 4.3. 4.4 explains parity encoding as a means of enforcing concealment.

The following sections give examples for embedding in digital phone calls (4.5), in pictures (4.6) and in moving pictures (4.7).


4.1 Systematics

First it will be described in 4.1.1 what is meant by a steganographic system. Then – in 4.1.2 – follows an annotation on the distribution of steganographic keys. Basics of steganographic security follow in 4.1.3.

4.1.1 Steganographic systems

Modern steganographic systems have in common with modern cryptographic systems that their security does not depend on the secrecy of the method, but only on the secrecy of the keys which parameterize it. Wherever the danger of confusion exists, we will speak of stegokeys as opposed to cryptokeys.

Up to now, only symmetric steganographic systems are known, i.e. the same information (hence especially the same stegokey) is used for the embedding and for the extraction [FePf 97].

4.1.1.1 Steganographic secrecy-systems, overview

Covered by [ZFPW 97].

Purpose: ”hiding of the existence of the message that needs to stay secret” (strikingly expressed: confidentiality of confidentiality).

[Figure: key generation from a random number yields the secret key k; embed takes the wrap H and the content x to the stegotext k(H, x); extract recovers wrap* H* and content x with k.]

Figure 4.4: Steganographic system of secrecy according to the model of embedding

Explain composition model.

The embedding function can ignore the input cover and synthesize and use another cover if the embedding model is used. Hence the embedding model is the more general system and contains the synthesis model. Thus, it will be used as the model for steganographic systems of secrecy in the following [ZFPW 97, Chapter 2.1].


[Figure: as figure 4.4, but the wrap H is synthesized inside the sender instead of being an input.]

Figure 4.5: Steganographic system of secrecy according to the model of synthesis

4.1.1.2 Steganographic authentication-systems, overview

Watermarking and fingerprinting: In contrast to steganographic systems of secrecy, for steganographic authentication systems it is often only required that their use cannot be sensed by humans, so that for instance watermarking and fingerprinting do not change works of art in a disturbing way. This is a clearly weaker demand than the demand that watermarking and fingerprinting shall not be verifiable without knowledge of the stegokey.

Indeed, for watermarking and fingerprinting, heavy and particularly specific changing of the stegotext, in order to remove the embedded information about the author, must be expected. Steganographic authentication systems need to be robust against this, i.e. the embedded information about the author must be noticeable even after such attacks. This can be achieved for instance by a sufficiently strong correlation between embedded and extracted information. In the only realistic scenario in open networks, namely the scenario that the aggressor knows all the details of the steganographic authentication system, robustness has so far not been achieved by any steganographic authentication system. My personal assumption is that it will remain like this.

Linartz paper from the IHW98 as borderline of watermarking.

4.1.1.3 Comparison

In the following table, the steganographic systems are compared with each other and with the cryptographic systems.

Steganographic concelation systems vs. steganographic authentication systems:

properties:
• concelation: the existence of embedded information must be hidden; the cover may be modified drastically, even completely synthesized.
• authentication: hiding the existence of embedded information is not necessary; embedding shall be smooth, i.e. modifications of the cover (which is the good to be secured) shall not be recognizable by humans.

goal:
• concelation: embed as many data as possible.
• authentication: embed a given amount of data as robustly as possible.

attacker's goal:
• concelation: to detect the existence of embedded data.
• authentication: targeted change of the watermark or fingerprint, e.g. replace, destroy, generate.

parameters:
• concelation: stego unmodified.
• authentication: stego may be modified by the attacker to some extent, but not too drastically (this would destroy the good); the watermark or fingerprint may be an input parameter for both parties; the unchanged good (this can be the good both with and without embedded copyright information) may be an input to both parties.

covering of the functionality of the corresponding cryptographic system:
• concelation: yes, because confidentiality of a message's existence always provides confidentiality of a message's content.
• authentication: no, because copyright can be detected, but not small modifications^5.

4.1.2 An annotation to the key-distribution: steganography with public keys

Symmetric steganography + Diffie-Hellman key distribution, compare 3.9.1.

5 Some authors define so-called fragile watermarks, i.e. watermarks without robustness, which have just the goal to detect modifications of the good. I personally think that fragile watermarks are unnecessary, because this security goal can be reached with cryptographic authentication systems without modification of the good.


If PGP from version 5 on is used, Diffie-Hellman keys should be created with the setting ”Faster key generation”. Then a common p and g is used for all participants.

If the participants do not use a common p and g, then the Diffie-Hellman key distribution is not usable for steganography, while it still is usable for cryptography, compare exercise 3-23 c).

Then all that is left for steganography is what is called steganography with public keys in [Ande4 96], [AnPe 98, page 480] (compare exercise 4-5): First, the steganographic session key – encrypted with the public cryptographic key of the partner – is embedded and transmitted without use of a steganographic key.

For this encryption of a steganographic session key, the Diffie-Hellman key distribution can also be used without common p and g (compare exercise 3-23 c): The sender creates his public parameter for the session using p and g of the receiver, and embeds it without use of a steganographic key. Thereafter, the secret Diffie-Hellman session key can also be computed by the receiver and used as steganographic key.

4.1.3 Basics of security

[ZFPW 97]

4.1.3.1 Attack-aims and -successes

4.1.3.2 Types of attacks

4.1.3.3 Stegoanalysis leads to improved steganography

Scientifically interesting (and significant for the regulation of steganography, compare §9) is that every method to detect steganography can immediately be used to improve the stegosystem. This fact, already mentioned in [MoPS 94, chapter 4.3.2], will now be displayed and researched in more detail. Mostly steganographic systems of secrecy will be considered here.

First, a construction method for the hiding model, i.e. for S and S−1, is given. The reverse operation is not changed thereby – it already had, in the non-advanced stegosystem, the assignment to only give an emb to the outside if one was actually embedded. The reverse operation needs enough redundancy to decide this correctly with sufficiently high probability.

Afterwards, constructions for the embedding model, i.e. E and E−1, are developed. The reverse operations each need to be changed here too.

4.1.3.3.1 Construction of S and S−1

Let S be the algorithm ”hide” and S−1 the reverse operation to S, i.e. an algorithm which extracts the embedded data emb. If nothing was embedded, S−1 gives the result empty. Let R be a guessing algorithm which, for the input stego, guesses with overwhelming probability semi-correctly whether something is hidden in stego or not. Let semi-correctly mean: if R guesses ”something hidden”, then this is correct. But if R guesses ”nothing hidden”, this can also mean that something is hidden, but R does not notice it^6.

Then an advanced algorithm ”hide” S′ and the advanced reverse operation S′−1 can be developed in this way:

function S′(in: source, emb; out: stego);
    stego := S(source, emb);
    if R(stego) = something hidden then stego := S(source, empty);

The reverse operation S′−1 is exactly S−1.

The stegokey was omitted in the notation, as it is not important for the construction. It is not even important whether symmetric or (if existing) asymmetric steganography is used. Also nothing changes if the guessing algorithm R gets the additional input parameter source, meaning that the aggressor changes from a stego-attack to a source-stego-attack. Furthermore nothing changes if R gets not only the current stego as input, but also all previous stegos (as well as all previous sources).

For the hidden transfer of information, the stegosystem (thus S and S−1, or S′ and S′−1, respectively) is repeatedly called in a loop until all the information which needs to be transferred secretly is actually transferred. Therefore, S′ needs an additional output parameter which indicates whether the transmission succeeded or whether one needs to try again with the same value for emb (but with another value for source).
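A sketch of this loop in Python; S and R are assumed to be given callables, and the success flag is the extra output parameter just described:

    # S(source, emb) and R(stego) are assumed given; EMPTY marks
    # "nothing embedded".
    EMPTY, HIDDEN = None, "something hidden"

    def S_prime(S, R, source, emb):
        # returns (stego, succeeded): fall back to an empty embedding
        # whenever the detector R would flag the stegotext
        stego = S(source, emb)
        if R(stego) == HIDDEN:
            return S(source, EMPTY), False    # suppressed; try again later
        return stego, True

    def send_all(S, R, sources, messages):
        # call S' repeatedly until every emb is actually transferred;
        # each failed attempt consumes a fresh source (cover)
        out, queue = [], list(messages)
        for source in sources:
            emb = queue[0] if queue else EMPTY
            stego, ok = S_prime(S, R, source, emb)
            out.append(stego)
            if ok and queue:
                queue.pop(0)
        return out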

Claim: For the advanced stegosystem S′ and S′−1 the following holds:

1. The receiver receives only the data emb which was actually sent by the sender.

2. The data emb is hidden better than by the original stegosystem.

3. The usable bandwidth of the advanced stegosystem is smaller than the usable bandwidth of the original stegosystem.

4. If the usable bandwidth of the original stegosystem is larger than 0, and if it contains nondeterminism^7, then the usable bandwidth of the advanced stegosystem is also larger than 0 and it also contains the same nondeterminism. The receiver then receives the data emb which the sender wants to send after finitely many tries.

5. If the effort for the initial stegosystem and for R is acceptable, then the effort is also acceptable for the advanced stegosystem.

Proof:

6 Such a semi-correct algorithm is exactly what would need to be presented to a court to prove the use of illegal steganography. It does not necessarily need to detect all steganography, but if it states that steganography was used, then its claim must be correct. Otherwise, no judge would trust this algorithm.

7 Nondeterminism from the aggressor's point of view is necessary, so that one can change things unrecognisably for him and hence hide from him. If the stegosystem contains nondeterminism, then there exists nondeterminism especially from the aggressor's point of view, and hence one can unrecognisably change and hide. In short: the stegosystem is not too bad then.


1. If the original stegosystem has this property, then the advanced stegosystem also has it: Either the sender behaves as in the original stegosystem; then the receiving is exactly the same. Or the sender sends a stego with nothing embedded. The original had to cope with this as well.

2. As stego is only created by the original stegosystem, the advanced stegosystem hides at least as well as the original stegosystem. As we assumed that the original stegosystem now and then still created a stego which could be detected by R, and as this cannot happen with the advanced stegosystem any more, the advanced stegosystem hides the data better.

3. This does not hold for every single case, but it holds in the average case.

4.

5.

Critical annotations on the single points:

2: Everything is fine as long as R works only on parts of stego, so that one can iterate. If this is not the case, for instance because dependencies between different parts also need to be considered, then not even an ”at least” can be proven, let alone a ”better”. Nevertheless, I believe that the statements hold ”in the average case”.

3: The corresponding holds for the bandwidth.

Research questions:

When starting with a stegosystem which uses all nondeterminism, can one then reach – by iteration of the construction – a secure stegosystem of maximal capacity? (Here, the iteration of the call of the stegosystem for transmission of information and the iteration of the construction to improve the stegosystem must not be mixed up.)

4.1.3.3.2 Construction of E and E−1

So that single bits can also be embedded, the goal of E and E−1 is weaker than the goal of S and S−1, where S−1 needs to decide whether something was hidden and only give the emb to the outside if something was hidden. Therefore emb must be encoded by S redundantly such that it contains at least a few tens of bits.

The goal of E and E−1 is: E shall embed emb if that is possible in an undetectable way. E−1 shall detect whether something could be embedded. If something could be embedded, E−1 shall give the potential emb to the outside.

This interface of E and E−1 can be used efficiently: If the sender wants to transfer a message secretly, then he embeds it gradually, i.e. whenever possible – otherwise he does nothing. Hence the receiver receives every secretly transferred message as a compact bitstream, i.e. without interruption by meaningless bitstreams. If the sender does not continuously want to transmit messages secretly, the receiver gets meaningless bitstreams between the messages.

In the following, we consider the particular case that emb is one bit. Thus the sender as well as the receiver can efficiently check whether all possible values of emb can be embedded – and therewith whether all messages can be transferred compactly. As in 4.1.3.3.1, let R be a guessing algorithm which, for the input stego, guesses with overwhelming probability semi-correctly whether something is embedded in stego or not. Let E′ be the improved algorithm ”embed” and E′−1 its reverse operation.

The obvious construction (correct only if the embedding position is independent of the bit which needs to be embedded):

The sender checks whether E(cover, 0) and E(cover, 1) are both acceptable, i.e. he checks whether the embedded value is, from his point of view, not detectable by a stegoanalyst, and he only embeds if they are acceptable, i.e. he then sends stego := E(cover, emb) instead of cover.

An implementation of the improved embedding algorithm E′ with the least possible number of calls of E and R is as follows:

function E′(in: cover, emb; out: stego);
    stego := E(cover, emb);
    stego_alternate := E(cover, not(emb));
    if R(stego) = something hidden or R(stego_alternate) = something hidden
    then stego := cover;

The receiver checks whether the received stego is an acceptable value^8 and then – if it is acceptable – computes emb := E−1(stego). Now the receiver also has to check whether the other possible value of the bit, not(emb), could have been embedded as well. Unfortunately, the receiver does not know cover and hence cannot compute E(cover, not(emb)). Instead of that, he computes E(stego, not(emb)). If the equation

E(cover, not(emb)) = E(stego, not(emb)),

which can be equivalently rearranged by inserting stego = E(cover, emb) to

E(cover, not(emb)) = E(E(cover, emb), not(emb))^9,

holds for the stegosystem, then the receiver can also check whether both possible values of the bit could have been embedded, and hence whether the sender could embed or not. So the receiver only gives the computed emb to the outside if E(stego, not(emb)) is an acceptable value for the stegotext.

An efficient implementation of the improved extraction algorithm E′−1 is:

8 If the stegosystem is good, then this should always be the case – independently of whether something was embedded or not.

9 This equation says: If two inverse bits are embedded in succession, then the first embedding is overwritten by the second embedding, i.e. the embedding position does not depend on the value of the bit.


function invE′(in: stego; out: emb);
    emb := invE(stego);
    stego_alternate := E(stego, not(emb));
    if R(stego_alternate) = something hidden
    then emb := empty;

The following question arises: Is the above mentioned goal for E and E−1 only achievable if the stegosystem fulfills the above written equations? The answer is a constructive ”no”:

The inefficient construction: The idea is that the sender and the receiver both check whether both possible values of a bit can be embedded in stego. Only then do we assume that the bit has been transmitted. In detail:

The sender checks whether E(cover, 0) and E(cover, 1) are both acceptable, i.e. he checks whether the embedded value is, from his point of view, not detectable by a stegoanalyst, and he only embeds if they are acceptable, i.e. he then sends stego := E(cover, emb) instead of cover. If both E(cover, 0) and E(cover, 1) are acceptable values, the sender assumes that emb has been transmitted; otherwise he sends emb again.

The implementation of the improved embedding algorithm E′ with as few calls of E and R as possible is as follows:

function E′(in: cover, emb; out: stego);
    stego := E(cover, emb);
    stego_alternate := E(cover, not(emb));
    if R(stego) = something hidden or R(stego_alternate) = something hidden
    then begin stego := cover; E′(next cover, emb) end
    else if R(E(stego, 0)) = something hidden or R(E(stego, 1)) = something hidden
    then E′(next cover, emb);

The receiver receives stego′ (this can be stego or cover) and then computes emb′ := E−1(stego′). The receiver checks whether E(stego′, 0) and E(stego′, 1) are both acceptable values for stegotext, and then receives emb′; otherwise he receives nothing.

Why does this work?

Case 1: The sender sends stego, i.e. stego′ = stego. The receiver thus receives emb′ = emb if and only if the sender assumes emb as transmitted.

Case 2: The sender sends cover, i.e. stego′ = cover. Then either E(cover, 0) or E(cover, 1) is not acceptable – or both. Because of stego′ = cover, the same holds for the values E(stego′, 0) and E(stego′, 1) tested by the receiver.

An implementation of the improved extraction algorithm E′−1 is as follows:

function invE′(in: stego; out: emb);
    if R(E(stego, 0)) = something hidden or R(E(stego, 1)) = something hidden
    then emb := empty else emb := invE(stego).


This inefficient construction omits possible sending chances where E(cover, emb) is acceptable but E(cover, not(emb)) is not. Hence the following, more efficient construction is preferable:

The efficient construction:

The sender checks whether E(cover, emb) is acceptable, i.e. he checks whether the embedded value is, from his point of view, not detectable by a stegoanalyst, and he only embeds if it is acceptable, i.e. he then sends stego := E(cover, emb) instead of cover. If both E(stego, 0) and E(stego, 1) are acceptable values, the sender assumes that emb has been transmitted; otherwise he sends emb again.

An implementation of the improved embedding algorithm E′ with as few calls of E and R as possible is as follows:

function E′(in: cover, emb; out: stego);
    stego := E(cover, emb);
    if R(stego) = something hidden
    then begin stego := cover; E′(next cover, emb) end
    else if R(E(stego, 0)) = something hidden or R(E(stego, 1)) = something hidden
    then E′(next cover, emb);

The receiver receives stego′ (this can be stego or cover) and then computes emb′ := E−1(stego′). The receiver checks whether E(stego′, 0) and E(stego′, 1) are both acceptable values for stegotext, and then receives emb′; otherwise he receives nothing. (This is exactly the same as above, so the above algorithm can be adopted.)

Why does this work?

Case 1: The sender sends stego, i.e. stego′ = stego. The receiver thus receives emb′ = emb if and only if the sender assumes emb as transmitted.

Case 2: The sender sends cover, i.e. stego′ = cover. Then either E(cover, 0) or E(cover, 1) is not acceptable (as one of these is E(cover, emb), which was discarded by the sender as not acceptable) – or both. Thus the receiver receives nothing, because – by stego′ = cover – either E(stego′, 0) or E(stego′, 1) (or both) is a non-acceptable value for stegotext.
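A sketch of the efficient construction in Python; E, invE and the detector R are assumed callables (the same interface as the pseudocode above), and ”acceptable” means R does not flag the value:

    HIDDEN = "something hidden"

    def E_prime(E, R, cover, emb):
        # returns (stego, transmitted): send E(cover, emb) only if it is
        # acceptable; assume emb transmitted only if BOTH bit values would
        # have been embeddable into the resulting stego
        stego = E(cover, emb)
        if R(stego) == HIDDEN:
            return cover, False               # embedding suppressed entirely
        both_ok = R(E(stego, 0)) != HIDDEN and R(E(stego, 1)) != HIDDEN
        return stego, both_ok                 # if not both_ok: resend emb later

    def invE_prime(E, invE, R, stego):
        # receiver side: output a bit only if both bit values would have
        # been embeddable; otherwise output nothing (None)
        if R(E(stego, 0)) == HIDDEN or R(E(stego, 1)) == HIDDEN:
            return None
        return invE(stego)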

4.2 Information-theoretically secure steganography

4.2.1 Information-theoretically secure steganographic secrecy

[KlPi 97]


4.2.2 Information-theoretically secure steganographic authentication

Work paper from Herbert Klimant and Rudi Piotraschke

4.3 Two opposed stego-paradigms

4.3.1 Changing as little as possible

Least significant bit (LSB)

4.3.2 Simulating the normal process

[FrPf 98]

4.4 Parity-encoding to improve concealment

Instead of choosing with the stegokey the bits where one embeds, Ross Anderson proposes to choose sets of bits and encode each bit to be embedded as their parity [Ande4 96, AnPe 98]. This gives more liberty to the embedding process and in this sense intensifies the concealment of the embedding; a sketch follows below.

The larger the sets of bits over which the parity is built, the more uniformly distributed the parity is, and the less suspicious the embedding. Conversely, errors in the transmission of the stegotext then affect the embedded data with ever greater probability – be they accidental, or specifically created to complicate steganography.
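A minimal sketch of the proposal in Python; the keyed derivation of the position sets is my illustrative choice, not fixed by [Ande4 96, AnPe 98]:

    import hashlib, hmac, random

    def parity_positions(stego_key, bit_index, total_bits, set_size):
        # derive a pseudorandom set of positions from the stegokey
        # (illustrative choice; any keyed derivation would do)
        seed = hmac.new(stego_key, bit_index.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        return random.Random(seed).sample(range(total_bits), set_size)

    def embed_parity(bits, positions, emb_bit):
        # encode emb_bit as the parity of the selected bits: if the parity
        # already matches, nothing changes; else flip ONE freely chosen bit
        if sum(bits[i] for i in positions) % 2 != emb_bit:
            bits[random.choice(positions)] ^= 1   # the liberty mentioned above

    def extract_parity(bits, positions):
        return sum(bits[i] for i in positions) % 2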

4.5 Steganography in digital telephone calls

[MoPS 94, FJMPS 96]

4.6 Steganography in pictures

[FrPf 98]

4.7 Steganography in video conferences

[West 97]


5 Security in Communication Networks

5.1 The problem

5.1.1 The societal problem

With all communication networks you should ask yourself how well all participants are protected from damage. The resulting protection goals (and protection mechanisms) can be roughly separated into two categories (cp. §1.2.4):

Doing desired actions: A participant can be harmed by refusing, delaying or altering a service, or by establishing a communication relation in his name and at his cost without his approval. In case of a conflict it can also be harmful to him not to be able to prove having sent or received a certain message. These are – in other words – the protection goals integrity and availability from §1.3.

For damage to actually happen, a modifying attack needs to take place. In general these attacks can be spotted in the IT system (cp. §1.2.5). Or said the other way round: that the desired actions happened can be verified later on.

Preventing undesired actions: A participant can be harmed by observation of his communication. The observation may include the message content as well as the communication circumstances. This was already defined as the protection goal confidentiality in §1.3.

For damage to actually happen, only an observing attack needs to take place. These attacks generally cannot be spotted, as mentioned in §1.2.5. Meeting the protection goal of confidentiality is not possible afterwards – so preventive measures are indispensable.

The measures for “doing desired actions” are explained in detail in [VoKe 83]. They are almost independent of the measures for “preventing undesired actions”. We will only take a brief look at them.

First of all we have to answer the question what consequences the development of networks mentioned above will have on “preventing undesired actions”.

In fig. 5.1 the final state of that development is shown: All services like TV, radio, telephone and Internet are routed from the local loop (or another relay station) to the participant's port via fiber optics.

This fiber optic line is unambiguously assigned to the participant (or his family or company). And because it is a switching network, only the data specific to him is transmitted. So the physical network address is a kind of personal identification number (an identification for natural or legal persons) which makes it possible to collect data about that participant.


[Figure: the participant's network connection (visual telephone, videotext, telephone, television, radio, interactive services) is attached via glass fibre to the switching center; possible ”Big Brothers”: an eavesdropper on the line, and at the switching center the post, secret services, producers (Trojan horse) and staff.]

Figure 5.1: Observability of the user in a star-shaped integrated broadband switching network which switches all services

Technically seen, the information inside the fibre optics and the relay stations is:

• the payload data (images, audio, text)

• the network data (addresses of the communication partners, the data volume transmitted, the service specifications, the connect time; and for mobile stations the current place of stay)

The resulting person-specific data can be divided by content into:

• Content data, that is the content of confidential (personal or business) messages like telephone conversations or electronic mail.

• Selection data, that is information about the interest of the participant in non-confidential messages. These are observations like which newspaper articles are being received, which database requests are made, or what he is watching on TV or listening to on the radio.

If a participant zaps through the TV programs, the information about this is given to his TV set and his network connection, which then transmits the new program instead of the old one.

The selection data not only characterize natural persons but also corporations:

Which special news articles and patents are requested gives hints about their current research and development interests.


Before the introduction of publicly accessible databases (especially teletext), selection data could not be gained through communication networks.

• Traffic data, that is who communicated with whom for how long. It can be gained from the network data. For natural persons this information is interesting for developing a personality profile (like his consumption habits, his friends, contacts with police and medical institutions). Traffic data can harm not only the personal rights of a natural person but also the business interests of a legal person. For instance, if the communication between two competitors suddenly increases, they may be conducting a joint product development.

One way to get the data is to wiretap the participant's connections to his local relay station or the connections between the relay stations. The latter does not allow tracing of the mass communication between the ”provider” of TV and radio and the local relay station, given a corresponding protocol. It also denies access to the individual communication of participants within the same local relay station area. But it allows the tracing of the many participants who communicate beyond their local relay station.

5.1.2 Information theoretical problem and solution drafts

The protection goals from §1.3, extended by one more item, read:

Preventing undesired actions: Undesired actions shall not happen.

Protection goal confidentiality

c1 The message content shall be kept confidential from everyone except the communication partners.

c2 Sender and/or receiver of messages shall stay anonymous to each other, and non-participants (like the carrier) shall not be able to observe them.

c3 Neither potential communication partners nor non-participants (including the carrier) shall be able to obtain the current location of a participant's mobile station without his agreement.

Doing desired actions: Desired actions shall take place.

Protection goal integrity

i1 Manipulations of the message's content (incl. the sender's address) shall be detected.

i2 The recipient shall be able to prove to a third party that instance x has sent message y (and even the time of this).

i3 The sender shall be able to prove the sending of the correct content, and if possible the receipt of the message.


i4 Nobody can withhold¹ fees for services he used from the carrier. The other way round, the carrier can only demand fees for services correctly provided.

Protection goal availability

a1 The network allows communication between all partners who wish to communicate (unless it is forbidden to them: positive policy).

To give an overview, the protection goals are shown with their corresponding protection mechanisms:

Preventing undesired actions

Protection mechanisms for confidentiality

c1 Usage of one or more concelation systems as end-to-end encryption (cp. §5.2.1.2).

c2 Techniques within the communication network to protect the network data (cp. §5.3 to §5.6.1).

c3 Techniques within the communication network to protect the mobility behavior of the participant (cp. §5.7).

Doing desired actions

Protection mechanisms for integrity

i1 Usage of one or more authentication systems.

i2 Messages are digitally signed by the initiator. The test keys are publiclyavailable.

If signature shall stay valid even if the secret key becomes known(what may even happen through the owner himself: to deny the agree-ments made with the signature) the signature should be signed with atime stamp by one or more institutions (electronic notary) and withtheir regular signing keys. By doing so, the validity of all signaturesmade before do remain even if the secret key is known. Finally no newsignatures with that key should be authenticated by the notaries.

What happens if the signature system used can be broken in the nearfuture is discussed in exercise 3-11b.

Beside the protection of the validity of the digital signature, the timestamp proves that the message was sent after the time of the stamp.

1 This claim is stronger than what the communication network must support. It would be sufficient that the carrier can prove the correct provision of his services. How and when a payment is realised is actually not a topic here. But since the confidentiality claim c2 seems to make collecting the fees difficult, this stronger claim is required. It is shown in §6 how this can be achieved.


i3 The communication network (or the partner) gives digital receipts for the messages transported (or received). Those receipts may include a time stamp.

i4 With digital payment systems (§6), fees are paid for the services used.

Protection mechanisms for availability

a1 Diverse networks (different transmission and relay systems); fair apportioning of all resources.

The goal of the systems used for preventing undesired actions is to make it impossible for an aggressor to gather any sensitive data: the communication should be almost unobservable to non-participants and anonymous towards communication partners. To make it impossible to gather any unspecific knowledge about a person, the single network events should stay unrelated.

Furthermore one has to show that techniques for preventing undesired actions do not make desired actions unavailable (cp. §5.1.1). Due to their importance for comprehension, I would like to explain the terms introduced more thoroughly.

Because protection from an almighty aggressor that controls all connections, relay stations and all communication partners is impossible from within the system, all measures are only approximations of perfect protection. The approximation is determined by the strength of the aggressor in an aggressor model: how far has the aggressor spread, and how much computation power is available to him (cp. §1.2.5 and §3).

(I want to point out clearly that the following discussion is made from an academic point of view concerning all assumptions about organisations and groups of people. Therefore we will only discuss whether an organisation or a group of people could attack, and not whether they do so.)

An event E (e.g. sending a message or conducting a business transaction) is unobservable relative to an aggressor A if the probability of the occurrence of E given any possible observation O of A is strictly greater than 0 and strictly smaller than 1. Of course this can only hold for aggressors not involved in E. In mathematical terms, for A it holds: (∀O)(0 < P(E|O) < 1).

An event E is perfectly unobservable relative to an aggressor A if the probability of the occurrence of E is equal before and after every observation O. For A it holds: (∀O)(P(E) = P(E|O)). This means [Sha1 49] that the observation does not give A any additional information about E.

An instance (e.g. a person) in a role R is anonymous relative to an event E (e.g. as sender of a message, buyer in a business transaction) and an aggressor A if, after every observation by A, every instance not cooperating with A has the role R in E with a probability strictly greater than 0 and strictly smaller than 1. This can also hold for aggressors involved in E.

An instance in a role R is perfectly anonymous relative to an event E and an aggressor A if every instance not cooperating with A has the role R in E with the same probability before and after every observation by A. This means [Sha1 49] that the observation does not give A any additional information about who has the role R in E.


Two events E and F are unrelated (unlinkable) relative to an attribute M (like the sender address or the transaction they belong to) and an aggressor A if, after every observation by A, the probability that they match in M is strictly greater than 0 and strictly smaller than 1. This also holds for aggressors involved in E.

Two events E and F are perfectly unrelated relative to an attribute M and an aggressor A if the probability that they match in M is equal before and after every observation by A. This means [Sha1 49] that the observation doesn't give A any additional information about whether the events match in M.

Of the three definitions of unobservability, anonymity and unrelatability, unrelatability is the most general:

Unobservability of events can be viewed as unrelatability between observations and the events behind them.

Anonymity can be viewed as unrelatability between instances and events.

The strength of the unrelatability, unobservability and anonymity is determined by how strong an aggressor the above definitions are required to hold against.

Unobservability and anonymity can additionally be parameterized by making them apply only to certain classes of events (message types) and instances (parts of the communication network, cp. §5.5). In the previous definitions no classes were defined, so they cover the maximum of unobservability and anonymity. Here is an example of a class-restricted definition:

An event E is (perfectly) unobservable relative to an aggressor A and a given classification of events if the conditional probability of the occurrence of E after every observation O by A is strictly greater than 0 and strictly smaller than 1.

Both definitions can only hold for aggressors not involved in E. In both cases, unobservability as well as perfect unobservability, the probability of some events may change for the aggressor due to his observations: the aggressor may gain information about the classes, but in the second case nothing about single events within a class.
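To make the distinction concrete, here is a small numeric illustration in Python with a made-up joint distribution of an event and an attacker's observations; all numbers are arbitrary assumptions. The event below is unobservable in the sense above (every posterior stays strictly between 0 and 1), but not perfectly unobservable (the posterior differs from the prior, so A gains some information):

```python
# Joint distribution P(event, observation) of a toy attacker A; numbers are arbitrary.
joint = {
    ("E", "obs_a"): 0.3, ("E", "obs_b"): 0.2,
    ("not_E", "obs_a"): 0.2, ("not_E", "obs_b"): 0.3,
}

def posterior(obs):
    """P(E | obs), computed from the joint distribution."""
    p_obs = sum(p for (e, o), p in joint.items() if o == obs)
    return joint[("E", obs)] / p_obs

prior = sum(p for (e, o), p in joint.items() if e == "E")   # P(E) = 0.5

for obs in ("obs_a", "obs_b"):
    p = posterior(obs)
    assert 0 < p < 1                  # unobservable: E is never certain nor excluded
    print(obs, p, "prior:", prior)    # p != prior: not *perfectly* unobservable
```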

5.2 Usage and limits of encryption in communication networks

5.2.1 Usage of encryption in communication networks

For the usage of cryptographic systems for protecting communication (mainly for concealation, but also for authentication and integrity) you have two strategies at hand, which both have disadvantages: link by link encryption and point to point encryption.

5.2.1.1 Link by link encryption

The first strategy is to encrypt all data between two neighbouring network nodes [Bara 64][Denn 82][VoKe 83][DaPr 89].


[Figure 5.2: Link encryption between network connection and switching center. User stations (visual telephone, videotext, telephone, television, radio, interactive services) are attached via glass fibre to the switching center; possible "big brothers" are eavesdroppers on the line and, at the switching center, post, secret services, producers (trojan horse) and staff.]

There should be a permanent character stream so that an observer cannot tell when a message is sent. On statically assigned links (e.g. point to point connections or radio links) a permanent character stream can be established without any extra effort.

For the reasons explained in §3 you could and should use a secure stream cipher: If a block cipher is used, the observer could recognize that certain message fragments are repeated if the key wasn't changed properly. The observer could then relate some messages with certain events (cp. §5.1.2).

If a permanent stream of characters (encrypted with a secure stream cipher) is transmitted, the observer doesn't gain any information. Whether and which messages are transmitted is perfectly unobservable (cp. §5.1.2).
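As a minimal sketch of such a link, assuming a key shared by the two neighbouring nodes: idle ticks are filled with padding before encryption, so the line carries a uniform-looking byte every tick. The SHA-256 counter-mode keystream below only stands in for a proper (e.g. self-synchronizing) stream cipher and is not a vetted construction:

```python
import hashlib
import itertools

def keystream(key: bytes):
    """SHA-256 in counter mode as an illustrative keystream (not a vetted cipher)."""
    for ctr in itertools.count():
        yield from hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()

key = b"shared link key"
# The link carries one byte every tick: real data when queued, padding otherwise.
traffic = b"MESSAGE" + bytes(8)          # 8 idle ticks filled with 0x00 padding
ciphertext = bytes(b ^ k for b, k in zip(traffic, keystream(key)))
print(ciphertext.hex())                  # padding is indistinguishable from data
```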

The disadvantages of link by link encryption are

- Inside the relay stations all data is unencrypted. Of all the aggressors, only eavesdroppers on the links are countered.

- As can be seen in fig. 5.2, in the final stages of a communication network the participant is connected to the relay station with fibre optics (at least 560 Mbit/s). This is beyond what today's cryptodevices (which need to be reliable and cheap enough) can achieve (cp. §3). It would be an advantage if high resolution TV (HDTV), which is actually not of interest to observers, would not be encrypted; only the network and user specific data would be encrypted. Today's PAL TV needs about 34 Mbit/s (compressed) and HDTV about four times that.

Anyway, neighbouring network nodes will need compatible and powerful cryptographic systems (self-synchronizing stream ciphers) that are directly implemented in hardware. Because the connections of the network nodes are relatively static, a symmetric cryptographic system would be sufficient.

5.2.1.2 Point to point encryption

The second strategy is to encrypt the data transmissions between the two participants directly [Bara 64][Denn 82][VoKe 83][DaPr 89], so the relay stations cannot interpret them (fig. 5.3).

For the reasons explained for link by link encryption, point to point encryption should also be a permanent character stream (at least for the time the virtual connection exists), encrypted with stream ciphers. The disadvantages of point to point encryption are:

- The end to end encryption only encrypts the payload data but not the network data.

- Someone who already knows the payload data can now obtain the selection data (cp. §5.1.1) from the network data. This may lead to further relations between the traffic data and selection data.

Consequently this technique cannot be employed as protection from the carrier and others that have access to the carrier's network, nor if the payload data is centrally stored in a database. The payload data may be stored encrypted in the database, but this is only useful if the aggressor (or the carrier and the others that have access to the database) doesn't possess the corresponding key and cannot obtain it from somewhere. The latter seems quite unrealistic except for small and closed user groups, because an aggressor can easily infiltrate a stooge into a large closed group; inside a large open group the aggressor camouflages himself as a normal participant. Besides, it would be possible to collaborate with someone who has access to the data of the relay stations. This could be an intelligence service that obtains the network data from the carrier and the payload data from content providers or network participants.

It is trivial to see that you cannot protect yourself from your communication partners with point to point encryption.

- As with link by link encryption, it is quite difficult to encrypt TV (esp. HDTV) in order to protect the selection data inside the relay stations.

[Figure 5.3: Point-to-point encryption between user stations. As in figure 5.2, user stations (visual telephone, videotext, telephone, television, radio, interactive services) are attached via glass fibre to switching centers; possible "big brothers" are eavesdroppers on the lines and, at the switching centers, post, secret services, producers (trojan horse) and staff.]

If point to point encryption between arbitrary communication partners is desired, they need a pairwise compatible cryptographic system. If broadband communication is required, at least one cryptographic system needs to be implemented in hardware (cp. §3). Because of the dynamic nature of the communication relations you cannot statically assign keys to every possible connection (that would be n · (n − 1)/2 keys for n participants). Because of this, there should be a formal standardization of an asymmetric cryptographic system for key exchange and authentication, as well as of a fast symmetric concealation system for encryption and decryption of payload data. This should form an open system [PWP 90][WaPP 87] regarding data protection and predictability of legal decisions (as is planned for ISDN). Only on such a basis of standards can cryptographic systems be developed that are efficient and allow anybody to communicate with everybody else.
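A toy sketch of this hybrid approach: an asymmetric system is used once for key exchange, a fast symmetric stream cipher for the payload. The toy-sized Diffie-Hellman group and the SHA-256 keystream are insecure stand-ins for standardized primitives, chosen only to keep the example self-contained:

```python
import hashlib
import itertools
import secrets

P = 2**127 - 1        # a Mersenne prime; toy-sized and NOT a secure DH group
G = 3

# One-time asymmetric key exchange (textbook Diffie-Hellman).
a = secrets.randbelow(P - 2) + 1          # private key of participant A
b = secrets.randbelow(P - 2) + 1          # private key of participant B
A, B = pow(G, a, P), pow(G, b, P)         # public values, exchanged openly
shared = pow(B, a, P)
assert shared == pow(A, b, P)             # both sides derive the same secret
key = hashlib.sha256(str(shared).encode()).digest()   # session key

def keystream(key: bytes):
    """SHA-256 in counter mode, standing in for a fast symmetric stream cipher."""
    for ctr in itertools.count():
        yield from hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()

payload = b"payload data"
ciphertext = bytes(x ^ k for x, k in zip(payload, keystream(key)))
decrypted = bytes(x ^ k for x, k in zip(ciphertext, keystream(key)))
assert decrypted == payload
```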

Without such a standardization, confidentiality and integrity can only be achieved within closed user groups that have agreed on one cryptographic system. At this point I want to emphasize what tremendous consequences the plans of the NSA2 (ZSI3) of enforcing the usage of a secret concealation system in public would have.

5.2.2 Limits of encryption in communication networks

Even if (as shown in fig. 5.4) link by link and point to point encryption are combined [Bara 64, Denn 82, VoKe 83][DaPr 89], all disadvantages of point to point encryption mentioned in §5.2.1.2 remain. Only the first disadvantage is softened, because observers of the connections won't gain any information thanks to the link by link encryption.

Even the best cryptographic systems known won't grant sufficient and reliable protection against the many attacks on the relay stations and the communication partners, at least not at a reasonable cost. Especially with pure link by link encryption in a service-integrated system you will face long-term problems regarding your information freedom.

This is why we try to find other ways to achieve the missing security. But you'll have to take care not to let other entities become a potential aggressor.

Technically, the main goal is to protect the traffic and network data from the carrier and to protect your identity from other communication partners. With that, traffic and selection data are not only protected from them but also from aggressors that don't have more access than the carriers and the other participants. Although encryption alone cannot achieve the protection goals in the area of confidentiality inside a communication network, it gives us the fundamental techniques to describe protection measures for confidentiality.

In the two following sections, fundamental techniques for protecting traffic and selection data from aggressors in relay stations and communication partners are shown:

In §5.3, protection measures are shown which are settled outside the communication network (and which therefore are taken by every user on their own). They will turn out not to be sufficient.

In §5.4, protection measures are shown which are settled inside the communication network (and which therefore won't alter the usage of the network but the way transportation works inside the network).

2National Security Agency; the USA's intelligence service for cryptographic systems and electronic surveillance

3Zentralstelle fuer Sicherheit in der Informationstechnik (central agency for security in information technology); today BSI: Bundesamt fuer Sicherheit in der Informationstechnik (federal agency for security in information technology), run by the Department of the Interior


[Figure 5.4: Point-to-point encryption between user stations and link encryption between network connections and switching centers, and between switching centers. The elements shown are the same as in figures 5.2 and 5.3.]


5.3 Basic measures settled outside the communication network

With protection measures outside the communication network, the sender and receiver addresses are still visible as network data inside the network, and of course so is the time when a message goes through the network. Reasoning about traffic and selection data from that has to be prevented.

5.3.1 Public nodes

If there are public nodes in the communication network (like public telephone stations), or if the network can be reached via another network which has public nodes, the following measures are practical:

The sender and receiver addresses become meaningless if different public nodes are used. But this is only true if you don't have to identify yourself for access control or payment purposes [Pfit 85].

The applicability and effect of the protection measure "using different public nodes" are strongly limited:

Who wants to leave the office or apartment for every single phone call (not to mention switching the public phone every time to make a time relation (cp. §5.3.2) difficult)? Who wants to go to a public telephone at an agreed point of time to receive a callback (and since the location of the public phones doesn't change, this is vulnerable to conventional observation)?

However, even if all those questions were answered positively, the effect would be limited: Even if everybody switched telephones with each call, a strong locality would remain (the time effort is too big). Even if a single person were anonymous within a group of a thousand, his communication relations would make it possible to relate sender and receiver, which leads to identification - not to mention the relation between the location of his apartment and the telephone.

5.3.2 Time independent processing

The time when a message is in the communication network becomes almost meaningless if the network station requests information not when the user wants it but at some randomly chosen point of time before the request [Pfi1 83, page 67,68][Pfit 83].

If you want to read a newspaper, your network station could receive it when it is printed or when transmission is especially cheap. It would be saved to be read later.

This protection measure prevents carriers from inferring the communication partner from the time and context of the transmission:


When a newspaper is requested during the day and you know that of the 10 potential customers one is working the night shift, you'll probably have found the reader.

Correspondingly, your user station could send information at a randomly chosen point of time.

If the carrier knows through other methods who communicates with whom, this measure at least makes it hard to infer the personality from your daily order of events.

Of course this measure doesn't work with real-time applications like telephony. For all others it makes it difficult to relate network events with other network events resulting from a specific person's actions (cp. §5.1.2).
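A minimal sketch of such time-independent processing, assuming the station knows the items the user subscribes to: each item is fetched at a random time within some window and cached locally, so the network sees no correlation with the user's actual reading times. All names are illustrative:

```python
import random

def schedule_prefetch(items, window_hours=24.0):
    """Assign each subscribed item a random fetch time within the next window."""
    return sorted((random.uniform(0.0, window_hours), item) for item in items)

for t, item in schedule_prefetch(["newspaper", "stock data", "weather"]):
    print(f"fetch {item!r} at t = {t:5.2f} h, cache locally for later reading")
```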

5.3.3 Local choice

To protect selection data you can request information in larger chunks and pick the information you're interested in later:

If you order several newspapers of different political orientations instead of a single article, no one can infer the political interests and opinions of the reader.

With local choice you can even camouflage from your communication partner what you're really interested in and how you react to it.

Requesting large blocks of information from the communication partner is usually done as protection when the information is transmitted unencrypted, or when the carrier and the partner are identical or cooperate.

In future communication networks with enough bandwidth this measure won't cost anything extra if the user requests information that is already provided. But whether the user will profit from this is a matter of the payment procedure (cp. §5.6.2). Even if no acceptable payment procedure is found that provides many newspapers of different political character from different publishers for the price of one, it should be possible today to buy complete newspapers and have them transmitted. (Bildschirmtext example to be changed ...???)

Local choice was invented at least three times independently: first with only one information provider and without a general classification in the context of BTX [BGJK 81, page 153,159], then in [Pfit 83], and finally together with time-independent processing in [GiLB 85].

5.4 Basic measures settled inside the communication network

In contrast to the measures mentioned in §5.3, in this section measures are shown that reduce the amount of network data inside the communication network (cp. §5.1.1).


With that you can camouflage the communication relations between the participants from aggressors (evil neighbours, the carrier, producers of software and hardware). That makes the capturing of traffic and selection data impossible.

The goals are:

• The usage of the network shouldn’t change nor get more complicated.

• Every participant shall have the opportunity to validate the confidentiality of the system.

Because protection from an almighty aggressor that controls all connections, relay stations and all communication partners is impossible from within the system, all measures are only approximations of perfect protection. The approximation is - as mentioned in §5.1.2 - determined by the strength of the aggressor in an attacker model.

If a communication network hides who sends and receives for all network events, it is called anonymous. It provides anonymous communication.

Most of the known systems shown here protect against all realistically conceivable aggressors by protecting either

• the communication relation/connection or

• who is sending or

• who is receiving.

Against significantly weaker aggressors these basic measures grant complete protection: sender and receiver are protected (and that protects all communication relations). Against significantly stronger aggressors they don't give any protection. Because of this canonical (and therefore independent of the details of a realistic attack scenario) definition, the following sections cover these three options.

5.4.1 Broadcasting: The protection of the receiver

By having the communication network send all information to all participants, the receiver can be protected from the communication network and the communication partner [FaLa 75].

Broadcasting allows an analogy to a perfect (information theoretical) concealation system (cp. §3). Following the definition of perfect anonymity from §5.1.2 and the information theoretical one from §3, for an aggressor controlling all connections and some user stations the probability is equal before and after the receiving for all stations he doesn't control. The aggressor doesn't gain any new information about which uncontrolled station has actually received the message. It is even impossible for the aggressor to observe whether the message was received at all. So broadcasting guarantees perfect information theoretical unobservability of the receiver. Furthermore, broadcasting doesn't relate any traffic events, so we even have perfect information theoretical unrelatability.


5.4.1.1 Altering attacks on broadcasting

The realisation of broadcasting poses no conceptual problems as long as only observing attacks are considered. Otherwise it does:

1) An altering aggressor can deny the service. Here the usual error tolerance techniques help, if necessary combined with changing the carrier if important subsystems "randomly" fail too often.

2) An altering aggressor can deanonymize some participants at least partly by disturbing the consistency of the broadcast. For example, suppose the aggressor is the sender of a message M which the recipient he wants to deanonymize will answer: He sends M and suppresses its delivery to all but one user station. If he doesn't get an answer, he retries with another user station. If more than one answer is expected (if this simple dialogue is part of a longer one), he can limit the number of trials to the logarithm of the number of user stations: the broadcast is suppressed for half of the remaining user stations; if he receives an answer, he keeps the half that got the message, otherwise he keeps its complement [Waid 90][WaPf1 89].
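The halving attack from 2) can be simulated in a few lines; `responder` here plays the role of the victim's station, and the membership test models whether an answer to M arrives. It locates the responder among 1024 stations in 10 trials:

```python
def locate_responder(stations, responder):
    """Simulate the attack: about log2(n) suppress-and-resend trials."""
    candidates = list(stations)
    trials = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        trials += 1               # resend M, delivered only to `half`
        if responder in half:     # an answer arrives -> responder received M
            candidates = half
        else:                     # silence -> responder is in the complement
            candidates = candidates[len(half):]
    return candidates[0], trials

print(locate_responder(range(1024), responder=777))   # -> (777, 10)
```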

This attack can be thwarted by either analogue or digital measures. An analogue measure could consist of using media that guarantee the consistency of the broadcast (non-cellular radio and satellite networks are of this kind). Because such a property is hard to prove (cp. §2) and radio connections are strongly prone to interference, digital measures must be considered. Digital measures can only detect inconsistent broadcasting, not prevent it (the aggressor could cut the wires or damage the radio antenna). The detection of inconsistent broadcasting is achieved either complexity theoretically by adding sequence numbers or timestamps to the messages (so they become digitally signed by stations that won't cooperate ???) [PfPW1 89], or even information theoretically by using overlaying sending (cp. §5.4.5) with inclusion of the distributed messages in the key generation (cp. §5.4.5.3) [Waid 90][WaPf1 89].

5.4.1.2 Implicit addressing

If broadcasting is to be used for routing services, every node must decide by means of an attribute named implicit address which messages are actually for it. In contrast to an explicit address, an implicit address doesn't contain any information about the location of the recipient (nor about how he can be found).

If an address can only be interpreted correctly by the right recipient, this is called covered addressing. If an address can be interpreted by everybody (and therefore be tested for identity with other addresses), this is called open addressing [Waid 85]. With open addressing, messages with the same address are always relatable to the recipient (whose identity is equal to his address).


5.4.1.2.1 Implementation of covered implicit addressing

The usual implementation of covered implicit addressing employs redundancy within the message content and an asymmetric concealation system (cp. §3). Every message is encrypted completely or partially with the encryption key of the addressed participant (the first option is a point to point encryption). After decryption with the corresponding key, the user station of the addressed participant can determine (by the redundancy inside the message) whether the message was designated for him. Because the implementation of asymmetric concealation systems is quite complex and slow (for limited implementation effort, cp. §3) and every station has to decrypt all messages, this method is way too complex [Ken1 81]. But it can be improved once communication has begun, by exchanging a secret key for a fast symmetric concealation system [Karg 77] (instead of the asymmetric one): each message begins with a bit that determines whether symmetric or asymmetric concealation was used for addressing. With this, for long communication relations you asymptotically save as much effort as symmetric concealation is more efficient than asymmetric, and the distinction bit is easy to implement.

If the recipient uses more than one covered implicit address, he must try to decrypt all messages with all the corresponding decryption keys. While participants using asymmetric concealation for addressing probably only have a few keys, the number is certainly dramatically bigger when using symmetric concealation - which weakens the advantage of symmetric concealation (and may result in additional costs).

The unrelatability of messages using covered implicit addressing is only as secure as the most insecure concealation system used for addressing.

Instead of a symmetric concealation system you may use a symmetric authentication system: instead of coding messages with redundancy and encrypting them afterwards (and testing for the redundancy when decrypting), messages are authenticated symmetrically, so the MAC is the appended redundancy. The potential addressees check whether the message was authenticated correctly from their point of view. If so, the message was for them.
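A sketch of this MAC-based covered implicit addressing, with illustrative key names and message layout; every station tests each broadcast unit against all keys it shares with potential senders:

```python
import hashlib
import hmac
import os

def address(message: bytes, shared_key: bytes) -> bytes:
    """Append a MAC under the key shared with the intended recipient."""
    tag = hmac.new(shared_key, message, hashlib.sha256).digest()
    return message + tag

def is_for_me(unit: bytes, my_keys) -> bool:
    """A station tests every broadcast unit against all of its address keys."""
    message, tag = unit[:-32], unit[-32:]
    return any(
        hmac.compare_digest(hmac.new(k, message, hashlib.sha256).digest(), tag)
        for k in my_keys
    )

k_ab = os.urandom(32)                            # key shared by sender and recipient
unit = address(b"hello", k_ab)
print(is_for_me(unit, [k_ab, os.urandom(32)]))   # True only via the shared key
```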

5.4.1.2.2 Equivalence of covered implicit addressing and concealation

Even the first covered addressing of a message between two parties that have not yet exchanged a secret key is at most as unrelatable (anonymous) as the most secure asymmetric concealation system: with every means for covered addressing you can produce the functionality of an asymmetric concealation system by transmitting <addressed message 0>, <addressed message 1>, ... unencrypted, where the secret message is coded by letting each message addressed to you mean a 1 and every message not addressed to you a 0 [Pfi1 85][PfWa 86]. This concealation protocol (based on receiver anonymity) corresponds to the concealation protocol (based on sender anonymity) of Alpern and Schneider [AlSc 83].
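A toy rendering of this reduction: the sender emits a sequence of messages, each either covertly addressed to the receiver (a 1) or not (a 0), and the receiver decodes by testing "addressed to me?" per message. All names are illustrative:

```python
def encode(bits, send):
    """Transmit one (otherwise meaningless) message per secret bit."""
    for bit in bits:
        send(addressed=(bit == 1))    # a message addressed to you means 1

def decode(units, is_for_me):
    return [1 if is_for_me(u) else 0 for u in units]

# Toy instantiation: a transmitted unit is just the addressing flag itself.
sent = []
encode([1, 0, 1, 1], lambda addressed: sent.append(addressed))
print(decode(sent, is_for_me=lambda u: u))    # -> [1, 0, 1, 1]
```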

If you assume that both parties have exchanged a secret key, every covered addressing provides the functionality of symmetric concealation. Both proofs together (with the implementation of covered implicit addressing by concealation systems shown above) show that addressing and concealation (symmetric as well as asymmetric) are equivalent.


address management → | public address (e.g. phone book) | private address (for designated communication partners only)
addressing type ↓    |                                  |
covered implicit address | very costly, necessary for first contact | costly, no useful application
open implicit address    | efficient, not recommended               | efficient, change incessantly after first contact
explicit address         | saves bandwidth, strongly dissuaded      | saves bandwidth, strongly dissuaded

Figure 5.5: Evaluation of efficiency and unobservability of the addressing type/address management combination

From these proofs natural limits on efficiency arise: concealation and covered addressing are "equally" efficient, in the asymmetric as well as in the symmetric case.

5.4.1.2.3 Implementation of open implicit addressing

Open addressing is easier to implement than covered addressing. You can just give the messages an address field, and the participants create random numbers as addresses. A station then only has to compare the address field with its own addresses. That can be done with an associative memory even for large numbers of addresses. Address recognition is far easier for open than for covered addressing.
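A minimal sketch of open implicit addressing, with a plain set standing in for the associative memory; addresses are self-chosen random numbers:

```python
import os

my_addresses = {os.urandom(16) for _ in range(3)}   # self-chosen random addresses

def accept(unit: bytes) -> bool:
    """Recognize a broadcast unit by comparing its address field."""
    addr = unit[:16]
    return addr in my_addresses                     # constant-time set lookup

addr = next(iter(my_addresses))
print(accept(addr + b"payload"), accept(os.urandom(16) + b"payload"))
```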

5.4.1.2.4 Address management

For both kinds of addressing you can distinguish between public and private addresses. Public addresses are listed in publicly available address directories (like the "white pages"). They are used for the first contact. Private addresses are assigned to single communication partners. That can be done either outside the communication network or inside it, as a "comes from" statement in messages. Note that the categories "public address" and "private address" don't say anything about the people behind them: public addresses can be anonymous (at least before usage) towards all instances except the receiver, while private addresses may be assignable to a single person.

With open addressing the network can gain information about the recipient of a message. This can also happen for private addresses if the same address is used several times: otherwise unrelated messages become related through the equal recipient. The multiple usage of such addresses must be avoided by continuously generating and transmitting new ones (e.g. with a generating algorithm). The generating algorithm "use the next n characters of a randomly created one-time key that was previously transmitted with a perfect information theoretical concealation and integrity preserving method" provides perfect information theoretical unrelatability and anonymity towards non-participants. With the procedure mentioned here, anonymity cannot actually be achieved for organisational reasons (cp. §3 for perfect information theoretical concealation). In §5.4.5.4 a method for exchanging keys by pairwise overlaying receiving, based on sender anonymity, is described. If the sender anonymity is information theoretical, the key exchange guarantees concealation and integrity. Because information theoretical sender anonymity is possible but takes a very big effort, this key exchange method is quite impractical. If, in contrast, the one-time key is generated with a cryptographically strong pseudo bitstream generator (cp. §3.4.3), you obtain cryptographically strong unrelatability and anonymity. Anonymity between the participants then is no problem at all, because the initial value of the generator can be exchanged with a cryptographically strong asymmetric concealation system.
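A sketch of one-time open addresses derived from a shared generator, as in the generating rule above; SHA-256 in counter mode again stands in for a cryptographically strong pseudo bitstream generator, and the seed is assumed to have been exchanged confidentially beforehand:

```python
import hashlib

def address_sequence(seed: bytes, addr_len=16):
    """Yield a deterministic sequence of one-time addresses from a shared seed."""
    ctr = 0
    while True:
        yield hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()[:addr_len]
        ctr += 1

seed = b"secret seed exchanged earlier"
sender = address_sequence(seed)
receiver = address_sequence(seed)
# Both sides step through the same sequence; each address is used only once,
# so repeated messages to the same recipient stay unrelated for observers.
for _ in range(3):
    assert next(sender) == next(receiver)
```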

With public open addresses it can be determined under which identification the recipient is listed in the directory. The usage of such addresses should be avoided as much as possible.

The protection and effort properties of the combinations of addressing types and address management are shown in fig. 5.5.

With broadcasting and implicit addressing, the recipient of a message can be protected completely.

Unfortunately not all types of communication networks are equally suitable for that. This is discussed in §5.4.9.

5.4.1.3 Tolerating errors and altering attacks on broadcasting

Between broadcasting on the one hand and the tolerating of errors and altering attacks on the other there are certain relevant interactions:

Recipients that receive corrupted information units, or don't receive them at all, should insist on a repeated, error-free transmission (even if they don't need the information). Otherwise the reaction could reveal the recipient. But there are limits to the practical application:

On the one hand, user stations may not receive all the information simultaneously, which makes it possible to miss some errors.

On the other hand, there may be no instance at all that can provide an error-free copy. The latter could be solved by temporarily saving all information units in one or more global memories, which would not limit confidentiality in any way.

Regarding the detection of corrupted broadcasting (resp. altering attacks), remember what was said in §5.4.1.1.

Insisting on error-free receipt requires the user station to send. In radio networks the station could thereby be easily located [Pfit 93][ThFe 95]; the pros and cons should then be weighed.

As with concealation, when using implicit addressing you must take care that the address is still recognized correctly if previous information units are lost or faked.


[Figure 5.6: Broadcast vs. requesting. Left: a broadcaster distributes every single message to everyone. Right: a message service stores the messages in numbered cells and everyone can request every message.]

With covered implicit addressing that can be achieved perfectly with a block cipher. With open implicit addressing using one-time addresses it is not that easy. Remember that the addresses are written into the associative memory for comparison; when a message with one of the addresses arrives, it is replaced by the following one (cp. §5.4.1.2). If there is a maximal error burst size (maximum number of information units corrupted or lost in a row) of k, you can write the next k+1 addresses into the associative memory instead of only the next one. But that only helps against temporary disturbances. After long-term disturbances, sender and receiver need to resynchronize - with messages using covered implicit addressing.
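The k+1 lookahead can be sketched with a sliding window over the expected address sequence; up to k lost or corrupted units in a row then do not break address recognition. The generator is the same illustrative one as in the earlier sketch:

```python
import hashlib
from collections import deque

def address_sequence(seed: bytes, addr_len=16):
    ctr = 0
    while True:
        yield hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()[:addr_len]
        ctr += 1

gen = address_sequence(b"seed exchanged earlier")
window = deque((next(gen) for _ in range(4)), maxlen=4)   # k = 3 -> k+1 entries

def on_unit(addr: bytes) -> bool:
    """Accept a unit if its address is one of the next k+1 expected ones."""
    if addr not in window:
        return False
    while window[0] != addr:          # slide past addresses skipped by errors
        window.append(next(gen))      # maxlen drops the stale front entry
    window.append(next(gen))          # consume the matched address as well
    return True

third = window[2]                     # simulate two lost units before this one
print(on_unit(third), on_unit(third))   # True, then False (already consumed)
```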

5.4.2 Requesting and overlaying: Protecting the receiver

By broadcasting the messages, the receiver is well protected: because everybody receives all messages, nobody can say who has interest in which data. But in many communication networks broadcasting is very expensive for individual communication.

In [CoBi 95] the technique requesting and overlaying is described, which also protects the selection data: the participants of a communication network can request messages from a distributedly implemented message service without indicating to others which message was requested.

More precisely, the participants request the messages in overlaid form from the servers, so others are not able to find out which information was requested. The overlaid messages are then superposed locally, which produces the requested message.

It works like this: The system consists of m servers with n memory cells each (n is the number of messages that can be stored). All servers contain the same messages in the same order.

[Figure 5.7: Example for a message service with m = 5 servers, of which s = 3 are queried; requested is message 3 of n = 4. Request vectors: ?x = 1001 and ?y = 1100 are random; their XOR z' = 0101 with the third bit negated gives ?z = 0111. Answers of the message service: !x = message 1 XOR message 4; !y = message 1 XOR message 2; !z = message 2 XOR message 3 XOR message 4. Local merging with XOR gives !x XOR !y XOR !z = message 3, the contents of the requested storage cell.]

To hide which message he is interested in, a participant requests different information from s servers (1 < s ≤ m). He does this by creating s−1 request vectors, which are random bit strings of length n. The s-th request vector is the XOR of all s−1 vectors. To read the message he is interested in from the t-th memory cell (1 ≤ t ≤ n), the t-th bit of the calculated vector is negated.

To each of s of the m servers one of the request vectors is sent. Each server reads the requested messages (i.e. the memory cells where its request vector contains a "1"), XORs them together and sends the result to the participant. By XORing the s answers, the participant obtains the message he is interested in.

The communication between the participant and the servers is protected, which is why external aggressors need not be considered. And even with s−1 servers collaborating, the selection data of the participant is protected:

Claim: Any s−1 of the request vectors provide no information about the message the participant is interested in.

Proof: Before the negation of the t-th bit of the calculated s-th request vector, this even holds for all s request vectors together, because any s−1 of them are random and unrelated bit strings. So any s−1 of the vectors can be regarded as the randomly chosen ones, and the information about t only enters the remaining s-th request vector. □
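A runnable sketch of the whole message service, with parameter names following the text (n cells, s queried servers); the cell contents are illustrative:

```python
import functools
import secrets

n, s = 4, 3                                    # n memory cells, s servers queried
cells = [b"msg1", b"msg2", b"msg3", b"msg4"]   # identical on every server
t = 2                                          # cells[2] = b"msg3" (the text's cell 3)

# s-1 random request vectors; the s-th is their XOR with bit t negated.
vectors = [[secrets.randbits(1) for _ in range(n)] for _ in range(s - 1)]
last = [functools.reduce(lambda x, y: x ^ y, bits) for bits in zip(*vectors)]
last[t] ^= 1
vectors.append(last)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(vector):
    """Each server XORs together the cells selected by its request vector."""
    selected = (c for c, bit in zip(cells, vector) if bit)
    return functools.reduce(xor_bytes, selected, bytes(len(cells[0])))

answers = [server_answer(v) for v in vectors]
print(functools.reduce(xor_bytes, answers))    # -> b'msg3'
```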


It holds:

Every server used by the participant can protect his selection data.

In certain network environments this technique is more efficient than requesting all messages. However, the intended recipient of a message needs to know that there actually is a message for him, and if so, in which memory cell it is. That can be done by giving the messages implicit addresses (cp. §5.4.1.2): the message service then distributes the implicit addresses, each followed by the number of the memory cell, or it puts the message into a cell all participants know. But this is only efficient if the messages are way longer than the numbers of the memory cells.

In [CoBi 95][CoBi1 95] this technique is described using mobile communication as an example.

5.4.3 Meaningless messages: Weak protection of the sender and the receiver

The sender of a message can protect himself by sending dummy traffic. He thereby protects his identity when sending, and if explicit addressing is used, the recipient is protected when receiving. If dummy traffic is used with explicit addressing, the meaningless part can be removed automatically by the stations if it is marked as such (inside the encrypted message content). In case of broadcasting and implicit addressing with non-existent addresses, all stations can delete those message contents. If meaningless messages are sent as well, the network cannot determine how many and when a participant has sent meaningful messages [Chau 81]. Symmetrically, even in case of explicit addressing the network cannot determine how many and when a participant has received meaningful messages. Because deleting messages addressed to other stations is no more expensive than deleting messages addressed to one's own, this technique wasn't mentioned in §5.4.1. Additionally, the first variant allows receiver anonymity4 from the sender. If you don't like or can't afford complete broadcasting, you should prefer partial broadcasting (multicasting) to dummy traffic from the receiver's point of view.
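A minimal sketch of dummy traffic: the station emits one unit per tick at a fixed rate and inserts marked meaningless units when its queue is empty; in a real system the units would be encrypted, so the marker stays invisible to the network. Names and the marker byte are illustrative:

```python
import os
from collections import deque

queue = deque([b"\x01real message A", b"\x01real message B"])   # 0x01 = meaningful

def next_unit() -> bytes:
    """One unit per tick; a 0x00-marked random dummy when nothing is queued."""
    unit = queue.popleft() if queue else b"\x00" + os.urandom(15)
    return unit   # a real system would encrypt this before sending

for tick in range(5):
    print(tick, next_unit())   # ticks 2-4 carry dummies, indistinguishable once encrypted
```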

The technique of meaningless messages is very costly and doesn't fulfil the demand for anonymity of the sender against attacks in which the receiver of the meaningful messages is involved. In the terminology of §5.1.2, this technique only makes the sending (resp. receiving) unobservable and unrelatable for non-participants. Anonymity of the sender from participants cannot be achieved this way (against realistic attacker models). Despite these obvious weak points, it is the only technique for protecting traffic and selection data that is cited in the newer and widely spread publications about "data protection and security in communication networks" (especially in the draft for extending the ISO model for "Open Communication") that are not from David Chaum

4Actually it is receiver pseudo-anonymity, because the sender knows at least one pseudonym of the recipient.


or our research group [VoKe 83, page 157f][VoKe 85, page 18][ISO7498-2 87, page 17, 24-29, 31-33, 36, 39f, 53, 56, 60f][Rul1 87][AbJe 87, page 28][Folt 87, page 45]. Of course there are widely spread publications that don't mention any solution at all (cp. [Brau 87]). Don't these world-famous authorities know their own special field - or are they not allowed to (cp. the actions of the NSA and ZSI in the field of cryptography)?

If the sending (and receiving) of meaningless messages is combined with a technique for protecting the communication relation, it also withstands attacks in which the carrier or the receiver of meaningful messages is involved. If the combination of the two methods shall protect the sender (or receiver), he must send (or receive) at a rate independent of his sending and receiving wishes. That means that the bandwidth is distributed statically among the participants, which is quite awkward. Furthermore, only complexity theoretical protection of the sender can be achieved this way. Nevertheless this method can be employed under certain boundary conditions (cp. §5.5.1).

The two classes of techniques for protecting the sender described in the following sections don't have such drawbacks. They are efficient in a certain sense by allowing a dynamic distribution of the bandwidth and avoiding meaningless messages and re-coding. And it is proven in the information theoretical world that the sender stays anonymous even if receiver and carrier work together.

Later on, with overlaying sending, a very powerful but also very expensive technique for realizing sender anonymity on any desired communication network is described.

Before this, it is explained how a communication network can be set up so that realistic attackers can observe the sending of single stations only with a very big effort. That can be done by choosing an appropriate connection topology (ring or tree) so that observing is very costly. And if the effort is as big as for any other form of individual observation, we can speak of privacy, which is indispensable for democracy.

5.4.4 Unobservability of bordering connections and stations as well as digital signal regeneration: Protection of the sender

The costly generating (and broadcasting and overlaying) of keys and content data in §5.4.5 is necessary because we have to assume an aggressor who wiretaps the inputs and outputs of all user stations as well as each of the bordering stations, so that link by link encryption won't help.

The idea behind techniques that are far less costly is to shape the communication network physically in a manner which makes it equally hard to observe all inputs and outputs of a user station as to observe the participant directly. This could also be done with link by link encryption, but that is substantially more costly. Wherever an aggressor can observe messages by listening on active lines or by controlling stations, the messages should originate from many stations (ideally from all) and be addressed to many stations (ideally to all stations not under his control). Both can only apply if the aggressor is not the sender or receiver of the message.


An essential premise for that is digital signal regeneration: every station regenerates all the signals (bits) received and sent, so that they cannot be distinguished from signals generated by this station. That also implies that two corresponding signals received from different stations cannot be distinguished after regeneration.

But it’s not required that corresponding signals sent from different stations can’t bedistinguished. While the latter, because of the analogue character of the sender ???.

In the two following subsections, two examples of this idea are shown.

The RING-network is an excellent example because it fulfils the maximum claim from above with minimal effort towards an aggressor that controls a station. The effort is minimal in the sense that, for receiver anonymity, every station needs to receive each message at least once (this lower bound is achieved by the efficient anonymous access system from §5.4.4.1.1), and, for sender anonymity, every station must send at least at the sum of all stations' actual sending rates. (With point to point duplex channels both can be halved without significant loss of anonymity, as will be shown for pairwise overlaying receiving on the DC-Network and for a special technique for switching point to point duplex channels in [Pfit 89, §3.1.4].) This lower bound too is reached by the efficient anonymous access systems (apart from a little overhead). Additionally, it is positive that there is a lot of experience with its physical structure in the local area (and at least in the USA there are plans for using it in MANs (Metropolitan Area Networks) [Sze 85][Roch 87]).

The TREE-Network is an example chosen for pragmatic reasons: its physical structure is like that of today's broadband cable networks, which can easily be set up for anonymous communication. The TREE-Network only offers anonymity against weaker aggressors relative to the RING-Network, but it can be extended with overlaying sending, which makes it grant strongest anonymity against any aggressor. §5.4.5.5 shows that for this the structure of TREE-Networks is more suitable than the RING-Network's.

5.4.4.1 Circular cabling (RING-Network)

The most efficient way to physically shape a communication network so that it is not easier for an aggressor to observe all inputs and outputs of a user station than to observe the participant directly is to arrange the user stations in a ring (as has long been done in local networks). Because of digital signal regeneration in every user station, both ring neighbours of a station would need to cooperate to observe the sending of the station between them - alternatively, the aggressor can control the two connections. With constructional measures like the direct wiring of apartments as well as the direct wiring of bordering properties [Pfi1 83, page 18], and by using transmission media that are hard to tap (like fibre optics), you can achieve that the latter requires physical measures as hard as the direct observation of the participant within his apartment. If two ring neighbours try to observe the station between them without collaborating, they observe nearly nothing, because outgoing messages are encrypted and, if implicit addressing is used, the addresses cannot be interpreted.

So only aggressors that have encircled a group of contiguous stations need to be expected. By comparing the incoming and outgoing signal patterns they can only identify which signals were sent by members of the group and which were removed by the group. The aggressor cannot identify a specific member as originator or remover of signals.

If stations send uncoordinatedly on a joint distribution channel "ring", there may be collisions of messages. In contrast to the DC-Network from §5.4.5 and the TREE-Network from §5.4.4.2, the views of sender and receiver of a message that collides in a RING-Network are naturally not consistent - the collision may occur "after" the receiver, so that only the sender takes notice. (Errors that may cause an inconsistent view in DC-Networks or TREE-Networks are not considered here - they hopefully occur far more seldomly than collisions, so they are handled separately in §5.4.7.)

Under the (realistic) assumption that two different messages never have the same coding, the sender can see whether the receiver received the message: this is the case if the sender gets the message back unchanged after one round on the ring. By a global agreement that receivers ignore messages already received (recognizable e.g. by time stamps), the sender can ensure by resending that the message is received. With that, a consistent view about the receiving as such can be created for sender and receiver, but not about its exact time. If this is required, or if you want to prevent stochastic attacks that identify the sender by his sending rate, you will have to employ more sophisticated access protocols or another configuration of the RING-Network.

On the one hand, the ring offers (in contrast to DC-Networks) an efficient way to completely avoid collisions by special techniques for multiple access (distributed (§5.4.4.1.1) and unobservable polling [Pfi1 85, page 19]). Sending of messages as well as switching of channels can be done with these access techniques. With point to point duplex channels, the information doesn't need to circle completely, without a significant loss of anonymity: the receiver simply takes the information from the ring and replaces it with information for the communication partner. This doubles the capacity of the ring.

On the other hand, a ring offers two options for uncoordinated sending to produce a consistent view for sender and receiver about the receiving as such and the time of receiving [Pfit 89, page 150f]. For this the message circles twice: during the first round collisions may occur, while the second round, which distributes the result of the first, must be collision-free. The two options differ in whether the circling starts from a fixed and therefore statically marked station, or from each sender and therefore from dynamically marked stations. The statically marked station may be known globally, while the dynamically marked stations should not be, for the sake of anonymity.

5.4.4.1.1 An efficient 2-anonymous ring access system

A ring with a given access protocol is called n-anonymous if there is no situation in which an aggressor encircling n consecutive stations with any desired number of attacking stations can identify one station as sender or receiver.

Some known techniques (slotted ring, token ring) are efficient and suitable, because they can be modified such that sending permission is granted without time limit and that every information unit (packet, message) circles the ring completely.


attacker’s

station station 1 station 2attacker’s

station

empty

I. 1

I. 1

I. 2

I. 2

I. 3

I. n

empty or I. 1

I. 1 or empty or I. 2

I. 2 or empty or I. 3

I. n or empty

empty

alternatives: n+1123

Efficient ring access methods (token, slotted) hand on a temporal unlimited right-to-send from station

to station. If an attacker encircled 2 stations, that send after the receipt of the right-to-send altogether

n information units (I.) und then hand on the right-to-send to the next station, there are n+1 alterna-

tives, which station could have sent which information unit. For each information unit there is at least

one alternative, on that station i (i=1, 2) sends the information unit.

The example can be generalized on g encircled stations. There are ( ) alternatives and at least

one alternative for each information unit, on that station i (i=1, 2, ...,g) sends the information unit.

n+g-1n

e1 e2a1 a21 2

tim

e

Figure 5.8: An anonymous access protocol for RING networks ensures: An attacker, who surrounds asequence of stations, cannot decide, which station sends which message.

The former causes distributed (and, as is to be shown, anonymous) requesting. The latter (which causes information units to be removed not by the receiver but by the sender) also realises distribution, so that the receiver is protected too.

To prove the uncertainty of the aggressor about the real course of events, alternative courses are constructed for every course in which an information unit is sent. These alternatives are shown in figure 5.8. Thereby the ring with this access system is proven to be 2-anonymous.
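The counting argument of figure 5.8 can be checked numerically; the enumeration below counts the ways n information units can be split among g encircled stations and compares it with the binomial coefficient:

```python
from math import comb
from itertools import product

def alternatives(n, g):
    """Count the splits of n units among g ordered stations (brute force)."""
    return sum(1 for split in product(range(n + 1), repeat=g) if sum(split) == n)

for n, g in [(3, 2), (5, 2), (4, 3)]:
    assert alternatives(n, g) == comb(n + g - 1, n)

print(comb(3 + 2 - 1, 3))   # n = 3 units, g = 2 stations -> 4 = n+1 alternatives
```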

The formal proof from [HoPf 85][Hock 85] corresponds completely to the proof shown here at the information-unit level.

These proofs only hold for aggressors that don't know the senders of the information units that have to be sent by different stations in the alternative courses. That knowledge could be gained by the aggressor if he himself is the receiver of those information units and the senders identify themselves in the information units. That is not modelled here because, according to §5.4.9, it is a restriction of the possible alternatives on level 2 caused by higher levels.

5.4.4.1.2 Fault tolerance in RING-Networks

With a RING-Network, the serial structure is obvious: all connections and stations have to work to make communication between two stations possible in both "directions".

Through the cooperation of the circular transmission topology, digital signal regeneration and methods for anonymous multiple access, anonymity and unrelatability can be achieved (cp. §5.4.9). But this requires fault tolerance measures to be employed that won't hamper this cooperation.

The following concentrates on the levels 0, 1 and 2 of the ISO OSI Reference Model (fig. 5.40 in §5.4.9).

Transient errors of the access system (level 2) can be tolerated with the usual techniques for restarting the access system used for rings with circling frames or circling sending permission.

Permanent errors of the access system (level 2) imply that at least one station in the RING-Network is faulty. This station has to be excluded, with the techniques described here, until it is repaired.

Transient errors of the transmission system of the ring (i.e. level 0 or 1) either affect only the message content of the transmitted information units or also the access system (level 2). The former type of error is simple and can be detected and fixed by the point to point protocols. More difficult are those transient errors that affect the access system, like deleting the sending permission token in a ring with circling sending permission. Those and similar errors in rings with circling transmission frames or circling sending permission can be tolerated with the usual restart techniques. Two additional ideas are important for the fault tolerance of the access system (level 2):

1. To tolerate transient errors of the second kind on the levels 0 or 1 (see above), indeterminism is introduced (indeterministic protocols, cp. [Mann 85, page 89ff]). With that, no aggressor can draw sure conclusions under a deterministic attacker model (where the aggressor must conclude deterministically which station has sent and received). Because there is as yet no sufficient formal statistical attacker model (i.e. one where the aggressor is satisfied if he knows sender or receiver with a probability of, say, 0.9, cp. [Hock 85, page 60ff]), the consequences of indeterminism cannot be quantified.

2. Another kind of fault tolerance originates from the idea to guarantee anonymity against all but a few stations, such as protocol converters in a hierarchical communication network. Those stations may run other protocols than the remaining stations and tolerate errors on purpose [Mann 85, page 99ff]. That's why these are called asymmetric protocols. They are much easier to implement and considerably more powerful, but they weaken the anonymity of the participants: if the simple access system is n-anonymous, the modified asymmetric protocols are often only (n+1)- or (n+2)-anonymous.

Permanent errors in the transmission system of the ring (i.e. on level 0 or 1) always result in a ring reconfiguration with restart (synchronisation, ...) of the technique for anonymous multiple access (level 2). Because the errors to be tolerated are situated on level 0 (medium) and level 1 (physical), they can be spotted easily (time bounds, self-tests as well as external tests by many instances, so that an aggressor that controls a few stations cannot easily turn other stations on and off at will). More problematic is the error correction, because it must be proven that it won't violate anonymity. It must be the goal of every error correction to reconfigure all error-free stations and connections into one ring that contains all error-free stations (and not an "eight", i.e. not two rings connected by one station).

The simplest and least expensive means of ring reconfiguration is the usage of a bypass installation per station. The bypass installation closes the ring physically if the station notices that it is faulty, is turned off, or suffers a blackout. However, this cannot tolerate the failure of a connection, nor failures of stations that can't diagnose them on their own. If only the station itself can close its bypass installation, its deployment has no further problematic interplay with the anonymity and unobservability of the RING-Network. In particular, an aggressor can notice the closing of bypass installations of bordering stations not under his control, because the analogue characteristics of the incoming signals will change (cp. §5.4.4 about signal regeneration). Consequently he knows that he now surrounds a smaller group of stations, whereby he gains more information about the sending of those participants. If an aggressor can only surround many stations concurrently, and if among these stations only a few bypass installations are closed, this doesn't matter that much???.

The usage of ring-connector concentrators, i.e. bypass installations that are neither located in the stations nor under their control [BCKK 83][KeMM 83], is incompatible with the goals of RING-Networks from §5.4.4.1, because those concentrators would be ideal observation points providing complete observation of all stations connected.

Another simple but more expensive means of ring reconfiguration is the usage of several parallel rings (statically created redundancy). If a connection of one ring fails, another intact ring is used (dynamically activated redundancy). As long as more than one ring is intact, this allows a bigger throughput than sending the information units on all rings simultaneously (statically activated redundancy). If no ring is completely intact any more, the assignment of connections to stations has to be switched, or you'll have to send on all rings simultaneously. Of course the error of a station that doesn't notice it on its own cannot be tolerated.

This is also true for combinations of bypass installations and parallel rings.


To tolerate at least one permanent single error in the transmission system of the ring, a braided ring (fig. 5.9) is used. As long as there are no permanent faults, the connections connecting neighbouring stations are run as a RING-Network. For odd numbers of stations, the other connections can be run as a second RING-Network, which doubles the transmission capacity.

In a braided ring three kinds of single errors are possible, namely the failure of

• one station i: It is bridged with the connection Ci−1→i+1 (cp. fig. 5.9 top right).

• one inner connection Ci−1→i+1: The outer ring stays intact and is used exclusively until the failure is corrected.

• one outer connection Ci−1→i: Station Si−2 sends to the stations Si−1 and Si (it copies the data stream onto two connections). To make station Si+1 unable to observe Si, station Si−1 sends half of all information units (for example all frames with even numbers, provided i−1 is even) to Si+1. Correspondingly, Si sends the other half to Si+1. Si+1 rejoins both data streams (fig. 5.9 bottom right). This is to be favoured over a reconfiguration of the inner ring (fig. 5.9 bottom left), because then failures of inner connections and stations can still be tolerated.

Based on these three basic concepts for tolerating a single failure, the braided ring can be reconfigured (even under most multiple failures) with the following protocol, which is executed by every station. It ensures that every error-free station ends up in a reconfigured ring containing almost all error-free stations (i.e. on all its incoming and outgoing connections information units of almost all error-free stations are transmitted).

Reconfiguration protocol for braided rings: If station Si notices a permanent signal loss on one of its incoming connections, it signals this with a message to all other stations on both of its outgoing connections. As long as the signal loss is not repaired, Si ignores this incoming connection. If the station sending on this connection does not signal on another connection that it is okay, it is assumed to have failed, and it is bridged by the corresponding inner connection. Otherwise a connection failure is assumed, which triggers a reconfiguration.
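As an illustration, here is a minimal Python sketch of the per-station decision logic of this protocol; the class and method names (Station, Connection, on_permanent_signal_loss, ...) are purely illustrative assumptions, not part of the protocol description above.

    class Connection:
        """Hypothetical incoming connection of a braided-ring station."""
        def __init__(self, label, sender_ok_elsewhere):
            self.label = label
            # True iff the sending station still signals on another
            # connection that it is okay.
            self.sender_ok_elsewhere = sender_ok_elsewhere

    class Station:
        """Sketch of one station's reconfiguration logic (illustrative API)."""
        def __init__(self, index):
            self.index = index
            self.ignored = set()   # incoming connections currently ignored

        def on_permanent_signal_loss(self, conn):
            # 1. Announce the signal loss on both outgoing connections.
            self.broadcast(("SIGNAL_LOSS", self.index, conn.label))
            # 2. Ignore the faulty incoming connection until repaired.
            self.ignored.add(conn.label)
            # 3. Distinguish station failure from connection failure.
            if not conn.sender_ok_elsewhere:
                # Sender assumed failed: bridge it via the inner connection.
                print(f"S{self.index}: bridging failed station past {conn.label}")
            else:
                # Only the connection failed: reconfigure (fig. 5.9).
                print(f"S{self.index}: reconfiguring after loss of {conn.label}")

        def broadcast(self, message):
            pass   # stands for sending on both outgoing connections

    Station(2).on_permanent_signal_loss(Connection("C1->2", sender_ok_elsewhere=True))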

If a station or an inner connection fails, the result is one complete ring of all error-free stations; the proof of anonymity can be taken over from the error-free case. If an outer connection fails, two RING-networks (with respect to anonymity) with half the bandwidth each are created. In figure 5.9, one of these RING-networks goes through Si−1, the other through Si. However, in each of the two RING-networks only one station can receive. That is irrelevant as long as the normal ring access method is used, because the receiver does not change anything about the information units it receives. But if, as in [Pfit 89, §3.1.4.3], a transmission frame is used as a duplex channel with one of the stations Si−1 or Si, the called station can only send (into one of the rings) with probability 0.5. The situation that a station located in only one RING-network with half the bandwidth is asked to send in a transmission frame in which it cannot send may be quite a rare event.


[Figure 5.9: In case of a failure, a braided ring can be reconfigured in a way that the anonymity of the stations is kept. Panels: operation of two rings, provided no loss; reconfiguration of the outer ring at loss of one station; reconfiguration of the outer ring at loss of one outer connection; reconfiguration of the inner ring at loss of one outer connection. Legend: used connection; unused connection; used connection that transfers half of all messages.]


However, this may thwart the anonymity towards the calling party. But this is only a special case of the general problem that a receiver can be identified if it cannot answer because of a failure and the sender knows that it is the only failing station (cp. §5.4.8 for an active attack on resource shortage).

Moreover, one has to take care (as with the bypass-installation technique) that an aggressor who is tolerable in the non-fault-tolerant case does not, through the fault-tolerance measures, become one the attack model can no longer cope with. If the aggressor controls the stations S1 and S4 (cp. fig. 5.9 with i = 2), he can feign the failure of connection C3→4 or simply wait for it. In both cases S2 sends to S3 as well as to S4, so S2 is encircled by the aggressor and therefore observable. Note that in some cases a single station alone can observe another: if S1 fails in such a way that it sends the same on both of its outgoing connections, S3 can observe S2 on its own.

An exact description of fault tolerance in RING-networks, as well as many examples and protocols, can be found in [Mann 85], together with a more precise analysis of the reliability improvements achieved by the fault-tolerance techniques shown here. The results are quite positive.

More recent formulas for calculating the reliability can be found in [Papa 86].

It is remarkable that superimposing sending can be implemented in a braided ring in such a manner that after any single error a working DC-network with the full bandwidth remains [Pfit 89, page 264].

A combination of bypass installations and braided rings is possible and useful.

5.4.4.2 Collision-avoiding tree network (TREE-network)

Almost all broadband cable networks in the end-user area are tree-shaped. For pragmatic reasons, an important example of the idea of “unobservability of bordering connections and stations as well as signal regeneration” is therefore the collision-avoiding TREE-network [Pfit 86, page 357].

The inner nodes are equipped with collision-avoidance circuits ([Alba 83]; collision-avoidance switches in [SuSY 84]). A collision-avoidance circuit routes an information stream coming from the leaf nodes towards the root only if the channel leading to the root is free. If more than one information stream from the leaves races for the free channel to the root, the collision-avoidance circuit picks one at random and ignores the others. If stations are directly connected to collision-avoidance circuits, so that there are inner stations of the TREE-network, their information streams are treated as if they were coming from the leaves.

The information stream from the root to the leaves is routed to all stations (and therefore through all collision-avoidance circuits, too).

To observe the sending of a leaf station of the TREE-network, one specific connection has to be observed (in a RING-network, two specific connections each have to be observed); to observe the sending of an inner station of a TREE-network, all its incoming and outgoing ports have to be observed. Usually almost all stations are leaves of the TREE-network.


An aggressor would partition the TREE-network by observing a connection or by controlling collision-avoidance circuits. With respect to sender anonymity, the resulting anonymity is therefore weaker than in a RING-network. As there, whoever observes his neighbours without being commissioned by a third party (i.e. without collaboration of the big communication brothers) will gain almost no information, because all messages are encrypted and addresses, if implicit addressing is used, are not interpretable.

Evaluations [Alba 83][SuSY 84] show an excellent behavior of TREE-networks with collision-avoidance circuits under random or uncoordinated access.

If TREE-networks shall be enhanced with superimposing sending (cp. §5.4.5), the collision-avoidance circuits must be replaced by modulo adders. As shown in [Pfit 89, page 105], with respect to superimposing sending every connection not observed by the aggressor acts like an exchanged key; this also holds for TREE-networks (as for every network with digital signal regeneration whose superimposition topology corresponds to its transmission topology).

By using modulo adders instead of collision-avoidance circuits, twice the throughput can be achieved for certain kinds of traffic with superimposing sending (the remaining traffic is transmitted as fast as before; cp. §5.4.5.4). This is an argument for considering collision-avoidance circuits as “overtaken” by superimposing sending.

5.4.5 Superimposing sending (DC-network): Protection of the sender

After a short outline of generalized superimposed sending in 5.4.5.1, the anonymity of the sender is defined and proved in 5.4.5.2. Afterwards, we show a method to ensure the anonymity of the receiver by snap-snatch-distribution in 5.4.5.3.

In 5.4.5.4, a method is introduced which allows a much more efficient use of superimposing sending. This method is called superimposing receiving.

Finally, some considerations about the optimality of superimposing sending concerning the anonymity of the sender, as well as its complexity and possible implementations, are presented in 5.4.5.5. Concluding, fault-tolerance methods for the DC-network are described in 5.4.5.6.

5.4.5.1 A first outline

David Chaum specifies a method for anonymous sending in [Cha3 85, Cha8 85, Chau 88] and calls it DC-network (DC as abbreviation for Dining Cryptographers, according to his introductory example; also, DC matches David Chaum's initials). This method for anonymous sending (and unobservable receiving as per 5.4.1) was generalized in [Pfi1 85, pages 40, 41] to the method described in the following.

It is well known that every finite alphabet forms an Abelian group with respect to a serial numbering of its elements starting at 0 and the addition (modulo the cardinality of the alphabet) induced by this numbering. As usual, let the subtraction (of a character) be the addition of its inverse element in the group.


Generalized superimposing sending then proceeds as follows:

Member stations create – randomly and according to an equipartition – one or several key-characters for every use-character which they want to send. Then they communicate every key-character to exactly one other member station via a (still to be discussed) channel which guarantees secrecy. (The data on who is communicating with whom via such a channel does not need to be kept secret; it should even be publicly known, in order to simplify countermeasures against attacks on the supplying of this service, compare 5.4.7.) Every member station locally adds (modulo alphabet size) all self-created key-characters, subtracts (modulo alphabet size) all key-characters which were communicated to it, and then adds (modulo alphabet size) the use-character that it wants to send – if it wants to send a use-character. This addition (or subtraction, respectively) modulo the cardinality of the used alphabet is called superimposition. Every member station sends the result of its local superimposition (hence the name superimposing sending). All the characters that were sent are now globally superimposed (i.e. added modulo alphabet size), and the resulting sum-character is distributed to every member station.

As every key-character was added exactly once and subtracted exactly once, the key-characters erase each other in the global superimposition, and the sum-character is the sum (modulo alphabet size) of all the sent use-characters. If no member station wanted to send a use-character, the sum-character is the character corresponding to 0. If exactly one member station wanted to send a use-character, the sum-character is equal to the sent use-character.

If one chooses the binary characters 0 and 1 as alphabet, then one obtains the – for practical purposes important – special case of binary superimposing sending, which was specified by David Chaum. In this case, one does not need to distinguish between addition and subtraction of characters (figure 5.10).
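To make the mechanics concrete, here is a minimal Python sketch of one round of generalized superimposing sending; the key pre-distribution is abstracted into a list of station pairs, and all names are illustrative assumptions.

    import secrets

    def dc_net_round(use_chars, key_pairs, g):
        """One round of generalized superimposing sending over Z_g.

        use_chars: dict station -> use-character (0 if nothing to send)
        key_pairs: pairs (i, j) sharing a key; i creates the key-character
                   and communicates it secretly to j
        g:         alphabet size (addition is modulo g)
        """
        local = dict(use_chars)                  # start with the use-character
        for i, j in key_pairs:
            k = secrets.randbelow(g)             # random key-character
            local[i] = (local[i] + k) % g        # creator adds the key
            local[j] = (local[j] - k) % g        # recipient subtracts it
        # Global superimposition: all keys cancel, the sum-character remains.
        return sum(local.values()) % g, local

    # Binary special case (g = 2, as in figure 5.10): only station A sends.
    total, outputs = dc_net_round({"A": 1, "B": 0, "C": 0},
                                  [("A", "B"), ("A", "C"), ("B", "C")], g=2)
    assert total == 1   # sum-character equals the single sent use-character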

Of course, (digital) superimposition-collisions may occur if several member stations want to send simultaneously: all stations receive the (well-defined) sum of the simultaneously sent characters. Collisions are a common problem in distribution channels with multi-access, and there is a large number of access methods to solve this problem. All published access methods are designed for (analog) transmission-collisions, for instance in bus systems (e.g. Ethernet), radio and satellite networks. In these systems, there exists no well-defined “collision result”. So the “working conditions” of the access methods for superimposing sending are considerably better than the “working conditions” for normal distribution channels. Indeed, one may of course only use access methods which preserve the anonymity of the sender and which – if possible – also preserve the impossibility of concatenating sending-events. Alongside, the access methods should – for the expected traffic distribution – take best advantage of the available channel. Examples of anonymous and non-concatenating access methods are the simple, but not very efficient, method slotted ALOHA and an efficient reservation method which was designed for channels with high time delays [Tane 81, page 272], [Cha3 85].

If one declares that a participant may continue to use a transmission slot in which he already sent without collision, and if one further declares that other participants may not send in this slot until it has once gone unused, then very efficient channels can be switched by using several transmission slots [Hock 85],[HoPf 85].


[Figure 5.10: Superimposing sending. Left: binary superimposing sending (alphabet: 0, 1) among the stations A, B, C. A superimposes its real message 00110101 with its keys 00101011 (with B) and 00110110 (with C) and sends the sum 00101000; B superimposes an empty message 00000000 with its keys 00101011 (with A) and 01101111 (with C) and sends 01000100; C superimposes an empty message with its keys 00110110 (with A) and 01101111 (with B) and sends 01011001. The superimposed and broadcast message is the sum 00110101 – in the example, the real message of A. Right: the same scheme as generalized superimposing sending (alphabet: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F).]


Of course, everything sent over such a channel can be concatenated – on the other hand, such channels are typically used for services which by their nature involve concatenation. This is discussed in detail in 5.4.9.

5.4.5.2 Definition and proof of sender-anonymity

After this short overview of superimposing sending, the sender-anonymity is now first exactly defined and then proven for the most general case of superimposition (arbitrary Abelian group).

Afterwards, a way – namely superimposing receiving – is shown to make use of the fact that superimposition-collisions have a value which is well-defined and visible to every member station participating in the superimposing sending. Both properties do not hold for transmission-collisions in bus systems (e.g. Ethernet) and radio networks; in satellite networks, only the first property fails.

Several annotations on the implementation of the key distribution and of the superimposing sending close this section.

David Chaum proves in [Cha3 85] for binary superimposing sending (using the properties of GF(2)) that the other members cannot get to know (not even by pooling all their information) who is sending within a group – as long as this group of member stations passes keys around only in the described way, and as long as this group is connected with respect to the distributed keys. Being “connected with respect to the distributed keys” means that for two arbitrary member stations M0, M1 of the group, there exists a (possibly empty) sequence of member stations M2 to Mn of the group such that for 1 ≤ i ≤ n the following holds: Mi has exchanged a key with M(i+1) mod (n+1).

In the following, a proof of this is given which is simplified and generalized to generalized superimposing sending:

The proof only uses the fact that the characters, together with the addition defined on them, form an Abelian group. It does not use the fact that the addition induced by the enumeration of the characters is modulo the alphabet size (i.e. that the group is cyclic). Generalized superimposing sending thus implicitly carries a cyclic structure which is not used in the proof; hence the proof is as general as one could wish:

1. In order that a station can send no character, there must exist a unit element.

2. In order that the keys pairwise neutralize each other after the superimposition, there must exist an inverse element for every character.

3. In order that the local and especially the global superimposition can be organized freely, the addition should be associative.

4. In order that a key-distribution between arbitrary pairs of member-stations is possible, the addition must be commutative. Otherwise, there would be a linear order – defined by the global order of superimposition – on the characters sent by the member-stations, and thereby also on the member-stations. Because key-characters would not commute with use-characters, every member-station could then only exchange keys with its one or two neighbor stations (neighbors with respect to this linear order).

Also, the proof is a bit more general than the definition of generalized superimposing sending – though there is no usable application for this larger generality. This is the case because the main theorem on finite Abelian groups (compare [Waer 67, page 9]) says that every finite Abelian group can be written as a direct product of cyclic groups. But the direct product, i.e. the componentwise execution of the addition in the particular cyclic groups, corresponds exactly to the collateral, i.e. serial or parallel, operation of several generalized superimposing-sending methods.

claim: If member-stations which are connected with respect to the secretly distributed keys superimpose their messages and the respective keys and only output the result, then an aggressor who observes all the results gets to know only the sum of all messages. The latter means that for the aggressor, for all probability variables Z, the following holds: p(Z | all results) = p(Z | sum of all sent messages).

annotation: Depending on his previous knowledge, the aggressor may thereby also know the single messages which make up the sum, as well as – if single messages together with the aggressor's previous knowledge allow conclusions about the sender – the senders of these messages. But by knowing all results, i.e. the outputs of the single member-stations, the aggressor does not receive any additional information exceeding the knowledge of the sum. He thus does not need to observe the outputs of the single member-stations, as that does not give him any additional information about the sender of a particular message. According to the definition in 5.1.2, the senders of messages thus remain anonymous with respect to the aggressor and with respect to the event of sending.

proof: Without loss of generality, the proof is restricted in the following to situations where m member-stations are minimally5 connected with respect to the distributed keys. Let this mean that for any two member-stations T0, T1 there exists exactly one (possibly empty) sequence of member-stations T2 . . . Tn such that for 1 ≤ i ≤ n the following holds: Ti has exchanged a key with T(i+1) mod (n+1). If there exists more than one such sequence, then the aggressor gets to know more keys, which only makes him stronger: he can subtract these keys from the outputs of the affected member-stations, whereby he obtains exactly the “outputs” which would be generated by completely omitting these keys.

The proof itself is by complete induction on the number m of member-stations. Let a message-combination be the assignment of a particular message to every member-station, and let a key-combination be the assignment of values to all keys secretly exchanged between member-stations and unknown to the aggressor. We prove for every m that for every message-combination which gives the sum of all sent messages, there exists exactly one key-combination such that message-combination and key-combination are compatible with the outputs of all member-stations. Because the keys, and thereby also the key-combinations, are generated at random and according to an equipartition, the observation of the outputs does not deliver to the aggressor any information exceeding the sum of all messages (according to Shannon's definitions [Shan 48, Sha1 49]).

5 This means that the member-stations are connected with respect to the distributed keys, but no key can be removed without harming connectedness, i.e. after removing one key, at least two member-stations are no longer connected. For the graph consisting of the member-stations as nodes and the keys as edges, this means: the graph is acyclic.

Let Si→j be a key which was generated by Ti and communicated secretly to Tj, let Ni be the message sent by Ti, and let Ai be its output. According to the description of generalized superimposing sending, the following holds:

Ai = Ni + Σj Si→j − Σj Sj→i

Induction base: m = 1. The claim holds trivially, as the aggressor gets to know exactly the output of the only member-station.

Induction step from m−1 to m: As the member-stations are minimally connected with respect to the distributed keys, there is always a member-station which is connected to only one other member-station by an exchanged key. Let the former without loss of generality be called Tm, let the latter be called Ti with 1 ≤ i ≤ m−1, and let the key be called Si→m (compare fig. 5.11). Ti creates Ni and Ai = Ni + · · · + Si→m; Tm creates Nm and Am = Nm − Si→m. The aggressor observes the outputs A1, A2, . . . , Am.

Let N′ = (N′1, N′2, . . . , N′m) be an arbitrary message-combination whose sum is equal to the sum of all sent messages, i.e. N′1 + N′2 + · · · + N′m = N1 + N2 + · · · + Nm (= A1 + A2 + · · · + Am).

It remains to show that for the message-combination N′ there exists exactly one fitting key-combination S′. Because of Am = Nm − Si→m, the fitting key exchanged between Ti and Tm has the value S′i→m = N′m − Am. The rest of the fitting key-combination S′ is determined by the induction hypothesis, where the value Ai − S′i→m is used as the output of Ti. The induction hypothesis is applicable, as the remaining m−1 member-stations are again minimally connected with respect to the remaining keys.

Thereby it is proven that generalized (and thus of course also binary) superimposing sending creates perfect information-theoretic anonymity of the sender inside the group, provided the group is connected by keys which are randomly selected and distributed in a manner guaranteeing perfect information-theoretic secrecy.
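The claim can also be checked mechanically for small cases. The following Python sketch (illustrative only, not part of the proof) enumerates, for three minimally connected stations over Z_5, all key-combinations consistent with an observed output vector and confirms that every message-combination with the right sum is explained by exactly one key-combination.

    from itertools import product

    G = 5   # alphabet size; stations T1, T2, T3; keys S12 (T1->T2), S23 (T2->T3)

    def outputs(msgs, s12, s23):
        n1, n2, n3 = msgs
        return ((n1 + s12) % G, (n2 - s12 + s23) % G, (n3 - s23) % G)

    # Fix real messages and keys, record the observed outputs and their sum.
    real_msgs, real_keys = (3, 0, 4), (2, 1)
    observed = outputs(real_msgs, *real_keys)
    total = sum(real_msgs) % G

    # Every message-combination with the observed sum is explained by exactly
    # one key-combination -- so the outputs reveal nothing beyond the sum.
    for msgs in product(range(G), repeat=3):
        if sum(msgs) % G == total:
            fitting = [k for k in product(range(G), repeat=2)
                       if outputs(msgs, *k) == observed]
            assert len(fitting) == 1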

Also, messages cannot be concatenated in any way by the sending, as long as we ignore the influence of the multi-access method, compare 5.4.5.4.1 and 5.4.9. Superimposing sending thus also achieves perfect information-theoretic impossibility of concatenating sending-events. If a message suffers a superimposition-collision, it must indeed be encoded anew, and thereby differently (end-to-end), before it can be sent again.


[Figure 5.11: Illustration of the induction step – Tm is connected by the key Si→m only to Ti; the remaining m−1 stations T1, . . . , Tm−1 form the minimally connected rest. Ai = Ni + · · · + Si→m, Am = Nm − Si→m.]

Otherwise an aggressor could deduce, for instance – if the messages are sufficiently long – that messages whose sum was already sent previously were probably not sent by the same member-station, as this member-station would otherwise waste transmission bandwidth of the DC-network by creating superimposition-collisions.

David Chaum's proof for binary superimposing sending and the proof given here for generalized superimposing sending hold not only for observing, but also for coordinated changing attacks – in the proof given here, for instance, the behavior of the aggressor-stations is not even mentioned, so it can be arbitrary. (Behavior of aggressor-stations which is not allowed by the protocol is only an attack on the service-performance, i.e. availability, which is treated in 5.4.7.)

Nevertheless, both proofs only consider the anonymity of the sender. If the sending of a message N depends on the receipt of previous messages Ni, then a changing attack on the anonymity of the receiver of one of the messages Ni (compare 5.4.1) can – even with perfect protection of the sending – identify the sender of N. As already mentioned in 5.4.1, this problem can be perfectly solved, even inside the information-theoretic model world, by appropriately including the distributed messages in the key-generation [Waid 90],[WaPf1 89].

5.4.5.3 Receiver-anonymity achieved by the snap-snatch-distribution

The aim of the snap-snatch-distribution is to stop the message-transmission as soon as two non-attacking participants receive different characters as the global superimposition-result of a round of the DC-network.6

6 According to the organization of 5.4 with respect to the protection aims “protection of the receiver”, “protection of the sender” and “protection of the communication-relation”, this does not belong to “protection of the sender” but to “protection of the receiver”, i.e. in 5.4.1.1. But this method would not be understandable there, as it is based on the sender-anonymity method “superimposing sending”.


If a non-attacking participant Ti notices such an inconsistency, stopping the message-transmission is easy: Ti simply interferes with the superimposing sending in the following rounds by choosing its output-character randomly and according to an equipartition. Thus the global superimposition-result of these rounds is independent of the use-characters.

In 5.4.5.3.1, the obvious but inefficient implementation of this idea is discussed, using a comparing-protocol.

Snap-snatch-key-generation methods are described in 5.4.5.3.2 and 5.4.5.3.3. They generate the keys for the superimposing sending depending on the previously received global superimposition-results and ensure that two participants which have received inconsistent superimposition-results will afterwards use completely independent keys, and thus that they stop the message-transmission.

5.4.5.3.1 Comparing the global superimposition-results

In order to discover inconsistencies of the distribution, the participants can explicitly compare the received global superimposition-results by using an additional protocol: after each round of the superimposing sending, every member-station Ti sends its global superimposition-result Ii (input) to all other member-stations Tj with j > i. If a non-attacking member-station Tj receives a global superimposition-result which is not equal to Ij, or if it receives nothing from another member-station Ti with i < j, it interferes with the superimposing sending in all the following rounds.

Such testing phases are well known from Byzantine agreement protocols.

In order to make these tests reliable, the communication between Ti and Tj must be protected with a perfect authentication-code [GiMS 74],[Sim3 88], i.e. with a code which allows the successful faking of a message with probability at most 1/√|F|, where F is the key-space, compare 3.3. An additional message and an additional key are hence necessary for every test.

The number of tests can be chosen depending on the assumed strength of the aggressor: let G∗ be an undirected graph with the member-stations as nodes. Two member-stations Ti and Tj are directly connected in G∗ iff Ti and Tj compare their global superimposition-results. In analogy to the superimposing sending, the following Lemma 1 holds:

Lemma 1 Let A be the subset of the member-stations which is controlled by the aggressor, and let G∗ \ (T × A) be connected.

If two non-attacking member-stations Ti and Tj receive two different global superimposition-results Ii and Ij, then there is a pair of non-attacking member-stations Ti′ and Tj′ which are directly connected in G∗ and which also receive different global superimposition-results.

Proof: Because of the connectedness of G∗ \ (T × A), there is a path (Ti = Tk1, . . . , Tkm = Tj) with Tkz ∉ A and (Tkz, Tkz+1) ∈ G∗ \ (T × A). As Ii ≠ Ij by assumption, there exists an index z with Ikz ≠ Ikz+1. Choose (i′, j′) = (kz, kz+1).


Obviously, the connectedness of G∗ \ (T × A) is a necessary requirement.

The protocol needs |G∗| additional messages in every round, which is usually of size O(n2). If G = G∗ and if we assume that one key from F is necessary for the authentication of each testing-message, the number of keys which must be exchanged in a secrecy-guaranteeing manner is doubled in comparison with normal superimposing sending.

If we deal with a physical distribution network7, the number of test-messages can be reduced to O(n) by using a digital signature-system instead of an authentication-code [DiHe 76],[GoMR 88]. The resulting scheme is of course at most complexity-theoretically secure.

5.4.5.3.2 Deterministic snap-snatch-key-generation

A more efficient realization of the snap-snatch-distribution is achieved by combining the two tasks, to detect inconsistencies and to stop the superimposing sending: if the current keys Kij and Kji, which are used for the superimposing sending, depend completely on the previous global superimposition-results received by Ti and Tj, then an inconsistency between Ii and Ij automatically stops the superimposing sending, because the global superimposition-result is then random and no longer the sum of all use-characters.

Let δ^t_ij := K^t_ij − K^t_ji and ε^t_ij := I^t_i − I^t_j for all i, j, t. (Here t is not an exponent but a superscript index.) A snap-snatch-key-generation scheme for superimposing sending should guarantee the following for all member-stations Ti and Tj directly connected in G:

SupSen Superimposing sending: If for all rounds s = 1, . . . , t−1 the equation I^s_i = I^s_j holds, then the keys K^t_ij and K^t_ji for round t are equal and randomly chosen from F. More formally (let x ∈R M mean that the element x is selected from the set M randomly – according to an equipartition – and independently of all others):

[∀s ∈ {1, . . . , t−1} : ε^s_ij = 0] ⇒ K^t_ij ∈R F and δ^t_ij = 0.

Then the superimposing sending works as usual.

SnaSna Snap-snatch: If there is an index s < t with I^s_i ≠ I^s_j, then the keys K^t_ij and K^t_ji for round t are chosen independently and randomly from F. More formally:

[∃s ∈ {1, . . . , t−1} : ε^s_ij ≠ 0] ⇒ K^t_ij ∈R F and δ^t_ij ∈R F.

The superimposing sending is stopped by every such pair of keys, i.e. the result of the global superimposition is independent of the sent use-characters. Because of the connectedness of G \ (T × A), this realizes the snap-snatch property from Lemma 1 (with G = G∗).

7 In which the consistency of the distribution is not guaranteed; otherwise we would not need to bother.


In the rest of this section, an arbitrary but fixed pair of keys (Kij, Kji) with Ti ∉ A and Tj ∉ A is considered. The indices i, j are therefore often omitted.

The strongest aggressor is assumed: he can directly observe the values of K^t_ij and K^t_ji for every round t, and he can supply Ti and Tj with arbitrary global superimposition-results I^t_i and I^t_j. The member-stations Ti and Tj are assumed not to be synchronous, so that the aggressor can wait for K^{t+1}_ij before he supplies I^t_j to Tj.

Let (F, +, ·) be a finite field, and let a_1, a_2, . . . , a_tmax and b_1, b_2, . . . , b_{tmax−1} be two sequences whose elements are randomly selected from F and were exchanged between Ti and Tj in a secrecy- and authenticity-guaranteeing manner. For t = 1, . . . , tmax let the following be defined:

(1) K^t_ij := a_t + Σ_{k=1}^{t−1} b_{t−k} · I^k_i

(2) K^t_ji := a_t + Σ_{k=1}^{t−1} b_{t−k} · I^k_j

Lemma 2 The key-generation scheme defined by equations (1) and (2) fulfills the two above requirements SupSen and SnaSna.

Proof: As a_t ∈R F and as Σ := K^t_ij − a_t does not depend on a_t, we have K^t_ij ∈R F. Let ε^s_ij = 0 for all s < t. Then obviously δ^t_ij = 0 and the requirement SupSen is fulfilled. Now let s be the first round with ε^s_ij ≠ 0. For the sake of simplicity, let ε^u := ε^u_ij and δ^u := δ^u_ij. The differences δ^u are computed according to the following system of linear equations:

δ^u = 0 for u = 1, . . . , s

( δ^{s+1} )   ( ε^s      0        . . .  0        0   )   ( b_1       )
( δ^{s+2} )   ( ε^{s+1}  ε^s      . . .  0        0   )   ( b_2       )
( . . .   ) = ( . . .    . . .    . . .  . . .  . . . ) · ( . . .     )
( δ^{t−1} )   ( ε^{t−2}  ε^{t−3}  . . .  ε^s      0   )   ( b_{t−s−1} )
( δ^t     )   ( ε^{t−1}  ε^{t−2}  . . .  ε^{s+1}  ε^s )   ( b_{t−s}   )

As ε^s ≠ 0, the matrix is regular and defines a bijective mapping. As b_1, . . . , b_{t−s} ∈R F, the δ^{s+1}, . . . , δ^t are equally and independently distributed in F. The independence of all K^1_ij, . . . , K^t_ij and δ^{s+1}, . . . , δ^t follows from the independence of all a_1, . . . , a_t and δ^{s+1}, . . . , δ^t.
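A minimal Python sketch of the key-generation scheme of equations (1) and (2), computed over a prime field GF(P) with an illustrative prime; all names and parameters are assumptions for illustration.

    import secrets

    P = 2**31 - 1       # illustrative prime; the field F = GF(P)
    T_MAX = 16

    # Shared sequences a_t and b_t, exchanged secretly and authentically.
    a = [secrets.randbelow(P) for _ in range(T_MAX)]
    b = [secrets.randbelow(P) for _ in range(T_MAX - 1)]

    def key(t, received):
        """Key for round t (1-based), computed from the global
        superimposition-results received so far:
        K^t = a_t + sum_{k=1}^{t-1} b_{t-k} * I^k   (mod P)."""
        s = a[t - 1]
        for k in range(1, t):
            s = (s + b[t - k - 1] * received[k - 1]) % P
        return s

    # If T_i and T_j received the same results, their keys agree (SupSen) ...
    hist = [secrets.randbelow(P) for _ in range(4)]
    assert key(5, hist) == key(5, hist)
    # ... after one inconsistent result, the keys diverge (SnaSna);
    # this holds except with probability 1/P.
    hist_j = hist[:2] + [(hist[2] + 1) % P] + hist[3:]
    assert key(5, hist) != key(5, hist_j)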

The additional costs of this key-generation scheme are the following:

• The altogether 2 · tmax − 1 key-characters a_t, b_t (instead of only tmax for normal superimposing sending), which need to be exchanged in a secrecy- and authenticity-guaranteeing manner between every pair Ti, Tj of member-stations directly connected in G.

• Storage of all tmax − 1 received global superimposition-characters.

• The t − 1 additions and multiplications (in the field) needed to compute the key for round t.


From the last point it follows that the scheme needs on average tmax/2 additions and multiplications per key. Thus the scheme is of little practical usability.

If there is no additional communication between Ti and Tj about their current state, then the scheme is optimal concerning the number of exchanged key-characters and concerning the additionally needed memory. This is shown in [WaPf 89, page 13f].

The scheme for deterministic snap-snatch-key-generation becomes usable if the sums in equations (1) and (2) do not each start at 1, but, for a fixed s, at t−s. Here, s needs to be chosen in such a way that every member-station sends an own use-character – one which cannot be guessed by the aggressor with sufficiently high probability – after at most s−1 other sent use-characters. If the member-station receives its own use-character as the global superimposition-result (which would not happen had the superimposing sending been disturbed by inconsistent distribution in one of the last s rounds), then there was – with corresponding probability – no inconsistent distribution in the previous s rounds. As the aggressor does not get to know anything about the b_u if the distribution is consistent, they can each be reused after s rounds. Thereby not only the number of field operations but also the number of necessary key-characters is reduced. Thus one can obtain a tolerably practical scheme by choosing s adequately.

{It is described in [LuPW 91, 5.2] how the complexity of the deterministic snap-snatch-key-generation can be approximately halved.}

5.4.5.3.3 Probabilistic snap-snatch-key-generation

A considerably more efficient realization of the snap-snatch-key-generation is obtained if one weakens the requirement SnaSna: inconsistent distribution need not interfere forever with probability 1; it may instead interfere only for the next r rounds (e.g. r = 10^100) with probability p (e.g. p = 1 − 10^−10). This should suffice for practical purposes.

The details of this method can be found in [WaPf 89]. For each round, two additions and two multiplications in the field – which indeed needs to be chosen large – are necessary. Information on how to replace one of the two expensive multiplications by a cheap addition can be found in [LuPW 91, 2.1]; the complexity of the probabilistic snap-snatch-key-generation can thereby be approximately halved.

If every member-station again sends an own use-character after at most s−1 characters (see the previous section), s key-characters b_u can be used cyclically.

5.4.5.4 Superimposing receiving

The bandwidth of a DC-network can be used considerably better if the member-stations exploit the fact that whoever knows the global superimposition-result of n simultaneously superimposed sent messages, as well as n−1 of the single messages, can compute the missing message by superimposing the n−1 single messages onto it. This superimposing receiving can take place globally, i.e. all member-stations superimpose directly and thus receive the same characters, or pairwise, i.e. exactly two member-stations superimpose:


In global superimposing receiving, all member-stations store the (at first) unusable message after a superimposition-collision, so that of n collided messages only n−1 need to be sent again: the last message is gained by subtraction (modulo alphabet size) of the n−1 messages from the (at first) unusable message.

If the member-stations receive by global superimposing, then a member-station which occasionally superimposes several messages simultaneously and then sends every single message except one again does not waste any bandwidth of the DC-network. This is therefore possible without problems and can be generally known. Thereby the above conclusion – that colliding messages do not come from the same member-station – becomes invalid, so that superimposing receiving leaves sending-events information-theoretically unlinkable despite the enhanced efficiency, provided the probabilities for the simultaneous superimposition of several messages are chosen appropriately. In addition to the better utilization of the bandwidth of the DC-network, global superimposing receiving also saves the renewed and thus changed (end-to-end) encoding of collided messages.

Pairwise superimposing receiving is possible if two (possibly completely anonymous) member-stations have agreed – for instance by using an anonymous reservation scheme, compare 5.4.5.4.3 – with all other member-stations participating in the superimposing sending on the fact that, and when, both member-stations have the exclusive sending privilege: although both member-stations send simultaneously, both can receive the data sent by the respective other member-station, namely by superimposing the data they sent themselves with the global superimposition-result (figure 5.12). This fact can be used for two purposes:

If both member-stations each send a message for the other member-station, the bandwidth of the DC-network is utilized twice as well as with the access methods described in [Cha3 85],[Chau 88],[Pfi1 85]. For example, a point-to-point duplex channel can be hosted within the bandwidth of a simplex distribution channel.

If one of the two member-stations sends a completely randomly generated key, it can receive a message from the other station in a manner guaranteeing perfect information-theoretic secrecy, even though both member-stations can be mutually completely anonymous, provided the underlying DC-network provides perfect information-theoretic anonymity (against the aggressor) inside some group which contains both member-stations. The latter is an especially efficient implementation of the secrecy protocol (based on sender-anonymity) of Alpern and Schneider [AlSc 83]. This protocol was published even before communication networks with sender-anonymity were invented. With the code developed by Axel Burandt [Bura 88], one can achieve that both partners can check, with information-theoretic security, whether third-party stations interfere with the message transfer; thus perfect information-theoretic integrity is achievable.
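A minimal Python sketch of pairwise superimposing receiving in the binary case (XOR); the slot assignment is abstracted away and all values are illustrative.

    # Pairwise superimposing receiving, binary case: T1 and T2 hold the
    # exclusive sending privilege for one slot and send simultaneously.
    x = 0b00110101           # message sent by T1
    y = 0b01011100           # message sent by T2

    global_result = x ^ y    # global superimposition-result X + Y

    # Each station superimposes its own data onto the global result
    # and thereby receives the other station's data (figure 5.12).
    assert global_result ^ x == y    # T1 recovers Y
    assert global_result ^ y == x    # T2 recovers X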

5.4.5.4.1 Criteria for the preservation of anonymity and un-linkability

In order to preserve the anonymity of the sender and in order not to link communication-events, multi-access methods should


[Figure 5.12: Pairwise superimposing receiving of the user stations T1 and T2 – T1 sends X, T2 sends Y; both receive the global result X+Y and recover the other's data as X+Y−Y = X and X+Y−X = Y. Without superimposed receiving, X and Y would have to be sent in separate slots.]

1. treat (or let operate) all stations in exactly the same way (for instance, stations should not have a unique number as in the bit-map protocol in [Tane 88, page 296] or as in the group-testing protocol in [BMTW 84, page 721]),

2. not create a connection between communication-events (for instance, “if this was sent by station A, then that was sent by station B” or “if this was sent by station A, then that was not sent by station B”, where in practice in both cases we often have A = B),

3. allow every station to use the whole bandwidth if no other station is sending.

1. is necessary and sufficient for perfect preservation of the anonymity of the sender. With the proof for superimposing sending presented in 5.4.5.2, 1. is also sufficient for perfect information-theoretic anonymity of the sender, as long as the sender does not identify himself explicitly by the content of his messages or by linkable messages.

2. is necessary and sufficient for perfect preservation of unlinkability. With the proof for superimposing sending presented in 5.4.5.2, 2. is also sufficient for perfect information-theoretic unlinkability of sending-events, as long as the sending-events are not explicitly linked by the content of the messages or by addresses.

3. is necessary for perfect preservation of unlinkability. If not every station were allowed to use the complete bandwidth, all simultaneously happening sending-events would be linkable in the respect that they do not all come from the same station (if only some stations may not use the complete bandwidth: . . . they do not all come from this station).


[Figure 5.13: Message format of the algorithm for resolution of collisions with averaging and superimposing receiving: a leading 1 in a counter field of ⌊ld(s·m)⌋+1 bits, followed by leading zeroes and the information unit of ⌊ld G⌋+1 bits in a field of ⌊ld(s·m·G)⌋+1 bits.]

5.4.5.4.2 Algorithm for resolution of collisions with averaging and superimposing receiving

The modulo-g addition of the superimposing sending is used as (real) addition up to the value g − 1, in order to compute with this addition the rounded average ⌊Ø⌋ of all information-units involved in a collision. Every station decides, depending on the size of its information-unit compared to ⌊Ø⌋, when it will send again (averaging). In order that the modulo-g addition acts as (real) addition, the sum of all involved information-units must be smaller than g. For s stations, of which each is allowed to send at most m information-units simultaneously, and for a maximal size G of information-units (in the sense of interpreting the binary coding of the information-units as a dual number), this is always the case with certainty if g > s·m·G. As long as not all colliding information-units are equal, there is always at least one unit smaller and one unit greater than ⌊Ø⌋. Then it is never the case that all stations get the same result from the averaging; accordingly, it is never the case that all stations immediately send again or that no station immediately sends again. Hence no bandwidth is wasted.
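The decision rule can be simulated as follows; this Python sketch (with an illustrative retry policy and illustrative names) reproduces the splitting of the collision shown in figure 5.14.

    def resolve_collisions(units, g):
        """Collision resolution with averaging (sketch).

        units: the information-units (numbers < G) involved in a collision;
        g:     modulus with g > s*m*G, so the modulo-g sum is the real sum.
        Returns the units in the order of successful transmission.
        """
        sent, stack = [], [list(units)]
        while stack:
            group = stack.pop()
            if len(group) == 1:
                sent.append(group[0])
                continue
            if len(set(group)) == 1:
                # All collided units are equal ("real" determinism, see the
                # remark below): everyone learns the unit and its multiplicity.
                sent.extend(group)
                continue
            avg = sum(group) // len(group)        # rounded average
            low = [u for u in group if u <= avg]  # these send again at once
            high = [u for u in group if u > avg]  # these wait for the low group
            stack.append(high)
            stack.append(low)
        return sent

    # The five information-units of figure 5.14; first average is 32 // 5 = 6.
    print(resolve_collisions([7, 15, 4, 1, 5], g=2**16))   # [1, 4, 5, 7, 15]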

Figure 5.13 shows a fitting message format for a sending station. For a non-sending station, all bits of the message – including the last bit – are 0. The leading zeroes in figure 5.13 are necessary in order to prevent an overflow in the addition. Figure 5.14 shows a more detailed example with the message format specified in figure 5.13. The fact that additionally the pairwise exchanged keys are superimposed when the multi-access method is used in the DC-network is illustrated neither in figure 5.13 nor in figure 5.14 nor in the following text. Because the leading zeroes are thus generally not transmitted as zeroes, the characters corresponding to them must actually be transmitted.

The number of leading zeroes in the message format can be reduced during the execution of the collision-resolution algorithm if only sufficiently few information-units are sent superimposed at a time. This dynamic reduction on the one hand saves transmission effort, or, for a given transmission rate, allows a higher throughput. On the other hand, one should evaluate carefully for each implementation whether the effort for this dynamic adaptation of the message format and of the cyclic group used for the superimposition pays off.

It is of rather theoretical interest that not only “determinism” with exponentially small probability of failure can be achieved by the comparison of the mean value, but even “real” determinism: if after a superimposition-collision either nothing is transmitted or the same is transmitted again, then all the collided information-units are equal.


[Figure 5.14: Detailed example of the algorithm for resolution of collisions with averaging and superimposing receiving – the five stations T1, . . . , T5 send the information-units 7, 15, 4, 1, 5; the successive collision averages Ø = 6, 3, 4 and 11 split the colliding units until all are transmitted.]


As the number of collided information-units must already be known to everybody before the averaging (for instance by the accumulation, mentioned earlier, of the characters corresponding to the leading 1), everybody can divide the previously saved collision-result by the number of collided information-units and then receive the resulting information-unit the according number of times.

Thus, this multi-access method achieves a throughput of 100% under the usual (idealizing) assumptions.

If a sufficiently large alphabet and the just described comparison of the mean value are used, a real throughput of almost 100% can be achieved. The throughput of the multi-access method for binary encoding is

(number of bits necessary for encoding the information-units / number of all bits to be transmitted) · 100%.

If the encoding of the information-units is redundant, then the resulting loss of throughput is attributed to the source encoding or the binary channel encoding, but it does not affect the multi-access method. This is the case because redundant encoding does not limit the encoding of the information-units in any way.

For field-wise binary encoding according to figure 5.13, the real throughput of the multi-access method is hence at least

(⌊ld G⌋ + 1) / (⌊ld(s·m·G)⌋ + ⌊ld(s·m)⌋ + 2) · 100%,

because exactly ⌊ld x⌋ + 1 bits are needed for the binary encoding of the integers of the interval [1, x]. Here ld (logarithmus dualis) denotes the logarithm to base 2, and ⌊y⌋ denotes the largest integer z with z ≤ y.

Furthermore it is guaranteed that every station can transmit m information-units within s·m times the time needed for the transmission of one information-unit: every station thus has, as long as it has something to send, both a “real” deterministically guaranteed fair share of the bandwidth of the DC-network and an upper bound on the time after which it will at the latest have sent successfully. A single station can, on the other hand, use the bandwidth of the DC-network completely alone if the other stations have nothing to send.

The average delay time can be minimized by always continuing with the subset of lesser cardinality. This can and should be combined with the comparison of the mean values. The average delay time of this multi-access method is investigated in detail in [Marc 88].

If the alphabet size necessary to fulfil the requirement g > s·m·G is too large, then for large s·m a smaller value k can be used instead of s·m in the inequality, for instance k = 10. The mean-value statement is then possibly invalid for collisions of more than k information-units. If k is chosen appropriately, the bandwidth-wasting cases – that the wrong, namely modulo-g computed, mean value is smaller or greater than all involved information-units – occur sufficiently seldom. These cases are detectable by all participants and are resolved by coin flip.


Another possibility for using a smaller alphabet size is the following: execute the mean-value comparison only on parts of the information-units (perhaps on their beginnings), and not on complete information-units. Indeed, the probability that all information-units involved in a collision have respectively equal parts (for example equal beginnings) increases thereby. But if the used parts each have a length of at least 20 bits and if the values are even only approximately distributed according to an equipartition (for instance implicit addresses or end-to-end-encoded parts), then this probability should be sufficiently small, so that the usable performance is only imperceptibly decreased.

The loss of the “real” determinism is the price to pay for the smaller alphabet size in both cases.

The algorithm for resolution of collisions with averaging and superimposing receiving is – if we disregard the (indeed slight, compare 5.4.5.5) effort for the realization of a very large alphabet and the effort for the global superimposing receiving – in every respect the optimal multi-access method for the DC-network, unless pairwise superimposing receiving or conference circuits are possible, compare [Pfit 89, 3.1.2.5 and 3.1.2.6].

Every communication network which is suited for superimposing sending with a large alphabet works so efficiently for so many services with this (and possibly further) multi-access methods that its employment seems convenient at least in the local area, even for purposes which do not need sender-anonymity – key-generation and synchronized superimposition can then of course be omitted.

5.4.5.4.3 Booking scheme (Roberts’ scheme)

As already mentioned in 5.4.5.1, the use of this perfectly anonymity-preserving multi-access method, with which a throughput of nearly 100% can be achieved [Tane 81, page 272], was already suggested in [Cha3 85] with the following implementation in the digital world of binary superimposing sending:

The bandwidth is divided into frames whose length lies between a minimal and a maximal value. In order to simplify the following functional description, let “the next” frame be not the physically next, but the n-th physical successor frame, where n is chosen fixed and minimal such that, before it is sent, all stations have received the superimposition-result of the “previous” frame (also with frames of minimal length in between) early enough to be able to execute the multi-access method. Every frame starts with a data-transmission part of variable length and ends with a booking part of fixed length. The superimposition-result of the booking part of a frame defines the length of the data-transmission part of the next frame. This is achieved by reserving space (more precisely: time) for one information-unit for every signalled booking, in which exclusively the station which made the booking is allowed to send. If no booking is made in a frame, the data-transmission part of the next frame will be empty. David Chaum proposes in [Cha3 85],[Chau 88] to regard the booking part as a bit strip, where every 1 corresponds to a booking. The data-transmission part can hence contain at most as many information-units as the booking part contains bits.


[Figure 5.15: Booking scheme with generalized superimposing sending for the stations T1, . . . , T5 – the stations' booking vectors superimpose to the reservation frame 0 3 1 1 0, which determines who sends in the following message frame.]

To make one (or several) bookings, a station sends one (or several) 1 to arbitrary places in the booking part. If the superimposition-result in one (or several) of these places is equal to 1, it sends in the corresponding places (more precisely: times) in the data-transmission part of the next frame.

Unfortunately it is not guaranteed that there is no collision: if 3, 5, 7, . . . stations have booked the same place, they all assume that only they will send in the corresponding place of the next frame. The probability of this fallacy cannot be reduced to 0 in binary superimposing sending, but it can be considerably reduced by using x bits for every booking, of which y (y < x) are sent as 1 to signal a booking. Though the probability that a collision goes undetected during the booking can be arbitrarily reduced by appropriately choosing x and y, x times as many bits are then used for the booking. To find optimal values of x and y for a given traffic distribution is a worthy task.

Instead, we stress here again that generalized superimposing sending with g > s·m can be used to reduce the probability of collisions during the superimposition of information-units to 0, as explained before the description of the single multi-access methods. This multi-access method, depicted in figure 5.15, is the oldest application of generalized superimposing sending [Pfi1 85, page 40].
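A minimal Python sketch contrasting the binary booking part (where an odd number of simultaneous bookings goes undetected) with the generalized variant over Z_g, g > s·m (where the superimposition-result counts the bookings exactly); all values are illustrative.

    # Booking part with 4 slots; a station books a slot by sending a 1 into it.
    bookings = [            # per-station booking vectors
        [0, 1, 0, 0],       # T1 books slot 1
        [0, 1, 0, 0],       # T2 books slot 1 as well
        [0, 1, 0, 1],       # T3 books slots 1 and 3
    ]

    g = 16                  # g > s*m, so the sums count the bookings exactly
    counts = [sum(col) % g for col in zip(*bookings)]
    print(counts)           # [0, 3, 0, 1]: the triple booking of slot 1 is visible

    binary = [sum(col) % 2 for col in zip(*bookings)]
    print(binary)           # [0, 1, 0, 1]: in GF(2) the triple booking is invisible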

A combination of these booking methods with superimposing receiving seems mostly unnecessary. Only in the case that a booking results in the value 2 in generalized superimposing sending can it be declared – without loss of throughput, but with a reduction of the average waiting time until successful transmission – that both stations send their information-unit in the corresponding place of the next frame, and that only the station which sent the greater information-unit sends in the frame after the next. This is exactly the earlier described method of deterministic resolution of collisions for a collision of two information-units.


5.4.5.5 Optimality, complexity and implementations

For reasons of receiver-anonymity, potentially all member-stations shall get to know the (end-to-end-encrypted) messages (and thus – except for pairwise superimposing receiving – every realistic aggressor can get to know the messages as well). Hence superimposing sending – with exchange of one key between every pair of member-stations, and with renewed encoding of collided messages or, respectively, global superimposing receiving with appropriately frequent simultaneous superimposition of several messages – is the optimal method with respect to sender-anonymity.

However, the use of superimposing sending is very, very expensive, as large quantities of keys must be exchanged in a secrecy-guaranteeing manner: every pair of member-stations which exchanges keys needs a secrecy-guaranteeing channel for this. This channel needs to have as much bandwidth as the communication network offers all the users together for the exchange of their messages.

This complexity of the key-distribution can be reduced by using pseudo-random-number generators, or, in the case of binary superimposing sending, pseudo-random-bit-sequence generators. Then only relatively short keys need to be exchanged secretly. Very long keys, which look randomly generated to an external observer, can then be generated from these short keys.

The method is then no longer information-theoretically secure, but only more or less complexity-theoretically secure; if cryptographically strong pseudo-random-number generators or pseudo-random-bit-sequence generators are used, the method creates perfect complexity-theoretic anonymity of the sender and perfect complexity-theoretic unlinkability of sending-events.
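As an illustration of this reduction, the following Python sketch stretches a short shared seed into a long key stream; it uses SHA-256 in counter mode purely as a stand-in for a cryptographically strong pseudo-random-bit-sequence generator, which is an assumption of this sketch, not the construction discussed in the text.

    import hashlib

    def key_stream(shared_seed: bytes, n_bytes: int) -> bytes:
        """Stretch a short secret seed into n_bytes of key material
        (SHA-256 in counter mode as a stand-in PRG; complexity-theoretic
        security only, in contrast to truly random exchanged keys)."""
        out = bytearray()
        counter = 0
        while len(out) < n_bytes:
            out += hashlib.sha256(shared_seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:n_bytes])

    # A 16-byte exchanged seed replaces kilobytes of exchanged key-characters.
    ks = key_stream(b"16-byte-seed!!!!", 4096)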

Unfortunately, the known fast pseudo-random-number generators and pseudo-random-bit-sequence generators are all easily broken or at least of doubtful security (e.g. feedback shift registers), whereas the pseudo-random-number generators and pseudo-random-bit-sequence generators which have so far been proven cryptographically strong (3.4.3, [VaVa 85],[BlMi 84],[BlBS 86]) are very expensive and very slow.

David Chaum proposes in [Cha3 85] to implement the superimposing sending on a ring. The breaking of fast (and possibly easy-to-break) pseudo-random-number generators or pseudo-random-bit-sequence generators is thereby made more difficult for an aggressor, because the aggressor can only get to know the outputs of successive member-stations in the ring by very expensive physical means; thus we can realistically assume that the aggressor will not get to know these outputs (compare 5.4.4.1).

In this implementation, every bit circulates around the ring once for sending, being successively superimposed, and once for receiving.

This implementation seems quite efficient, as it causes on average only a transmission complexity four times as high as the traditional sending method on a ring, whereas an implementation on a star-shaped network causes, for n member-stations, a transmission complexity n times as high as the traditional sending method on a star-shaped network.

But as the transfer volume on every line, i.e. the required bandwidth, is equal for all the implementations, the implementation on a star-shaped network (or, more generally, a tree-shaped network) can nevertheless be more efficient.


The shorter the delay time, i.e. the time elapsed between a sending attempt and the feedback whether a transmission-collision happened, the better the implementation of a channel. For this method, nodes of star and tree networks can be considerably simpler than traditional switching centers, and they can also be designed in such a way that the sum of the circuit times grows only logarithmically with the number of members. The pure signal runtime grows roughly with the square root of the number of members, whereas in a ring network both always grow linearly with the number of members.

So far, the following aims for the realization of the anonymity-creating (sub-)layer of the DC-network have arisen:

A1 The delay time, i.e. the time which elapses between the sending of a character and the receiving of the sum (modulo alphabet size) of all sent characters, should be as small as possible, as this is convenient for all communication services and necessary for some multi-access methods. The delay time must trivially be smaller than the minimum of the acceptable delay times of all services to be handled.

From this follows, at least for a service-integrating network, the requirement that the delay time of the DC-network must be smaller than the response time which humans perceive as annoying. As mentioned in the "Assumptions and notations with respect to the transfer volume" in [Pfit 89, 3.2.2.4], a reasonable value for the delay time is in my opinion below 0.2 seconds, whereas CCITT allows at most 0.4 seconds.

A2 If the multi-access channel of the DC-network is to be assigned using a booking scheme, then the alphabet size should, at least within the booking channel, be sufficiently large, in particular larger than 2, compare 5.4.5.4.3.

The same holds for the collision-resolution algorithm with averaging described in 5.4.5.4.2, which is for many services the best possible multi-access method.

A large size of the alphabet used for the superimposition is, to a limited degree, also favorable for processing conference circuits as efficiently as possible, compare [Pfit 89, 3.1.2.6].

A3 The member community should be provided with a high bit-rate for payload transmission.

A4 The realization should cause as little expense as possible.

The design decisions relevant for achieving these aims, namely the specification of the alphabet size, the implementation of the modulo adders and of the pseudo-random-number-generators, the choice of a suitable topology for the global superimposition, and the synchronization of the superimposition of information units and keys, are treated in the following, in this order.


specification of the alphabet size (concerning mainly A1, A2, A4): Small expense and small delay time on the one hand, and a large alphabet used for the superimposition on the other hand, are only slightly contradictory aims: for an appropriate choice of the alphabet size, i.e. one fitting the encoding of information units, keys and transmission, parts of an alphabet character can already be output by the member-station before the rest of the alphabet character is determined, for instance by the payload message. The same holds for the global superimposition: if only the beginnings of all characters to be superimposed have arrived so far, the beginning of the superimposition result can already be output. We briefly sketch how to choose such a fitting encoding and how to realize fitting, fast and inexpensive adders respectively subtractors (modulo the size of the alphabet used for the superimposition), compare figure 5.16:

As is common, let the encoding of the information units, of the keys and of the transmission be binary. Then it is very convenient to choose the alphabet size 2^l for a fixed natural number l and to encode the alphabet characters in the usual binary way, the neutral element as 0 and so on. If the binary digits of the alphabet characters are transmitted in ascending significance, then a full adder respectively full subtractor, an AND gate and, for instance, a shift register of length l suffice to execute the superimposition of two bit streams binary-digit-wise modulo 2^l: the shift register contains a 0 at exactly one position. The position where this 0 currently stands is ANDed with the carry of the full adder respectively full subtractor, and the output of the AND gate serves as the carry input of the full adder respectively full subtractor. Thereby it is achieved that the carry of the full adder respectively full subtractor is always 0 at the beginning of the binary-digit-wise superimposition of a character.

As the full adder respectively full subtractor, the AND gate and the shift register work just as fast as adders modulo 2 (XOR gates), the delay time of the just-sketched DC-network working modulo 2^l is exactly as large as that of a DC-network working modulo 2. Only the expense of the adders respectively subtractors grows linearly with l, in other words logarithmically with the size of the alphabet used for the superimposition. As the circuit complexity of the superimposition is, for realistic values of l (e.g. l ≤ 32 should always hold), orders of magnitude smaller than the complexity of generating cryptographically strong pseudo-random bit sequences or number sequences (compare 3), and as the assumption of optimally short delay times is unrealistic for realistic bit-rates for the only multi-access method which requires a DC-network working modulo 2 (described as the last method in [Pfit 89, 3.1.2.3.6]), there is from my point of view no real reason to construct a DC-network working modulo 2. The following example demonstrates that the assumption of optimally short delay times is unrealistic for realistic bit-rates: assume the DC-network has only a bandwidth of 64,000 bit/s and the transmission medium has a signal propagation speed of 200,000 km/s. Then a DC-network with optimally short delay time can, for reasons of signal propagation speed alone, only have a diameter of 3.15 km. Non-optimal line routing etc. allows only a substantially smaller diameter.

[Figure 5.16: Practical encoding of information units, keys, and local and global superposition results during transmission: unique binary encoding]
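The carry-suppression trick can be checked in software. The following Python sketch models the bit-serial addition modulo 2^l just described; the position counter plays the role of the single 0 circulating in the shift register, and the example values of l, information characters and key characters are arbitrary.

```python
def serial_add_mod_2l(a_bits, b_bits, l):
    """Bit-serial addition modulo 2**l as in the sketched hardware:
    a full adder plus an AND gate; the circulating shift register is
    modelled by a counter that zeroes the carry at character borders.
    Bits arrive least significant first; streams are multiples of l."""
    out = []
    carry = 0
    for pos, (a, b) in enumerate(zip(a_bits, b_bits)):
        if pos % l == 0:       # the single 0 in the shift register:
            carry = 0          # suppress the carry between characters
        out.append(a ^ b ^ carry)
        carry = (a & b) | (a & carry) | (b & carry)
    return out

def to_bits(chars, l):
    return [(c >> i) & 1 for c in chars for i in range(l)]

def from_bits(bits, l):
    return [sum(b << i for i, b in enumerate(bits[j:j + l]))
            for j in range(0, len(bits), l)]

l = 4                                   # alphabet size 2**4 = 16
info = [9, 3, 15]                       # information characters
key  = [8, 14, 5]                       # key characters
summed = from_bits(serial_add_mod_2l(to_bits(info, l), to_bits(key, l), l), l)
assert summed == [(i + k) % 2**l for i, k in zip(info, key)]
```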

A combined DC-network which works modulo 2 in the largest part of its bandwidth and modulo 2^l in a small part used for booking also seems not very useful, as a far-reaching decision about the ratio of the two bandwidths, which would probably increase the design complexity of the higher layers, would have to be made in an early phase of the construction of the DC-network.

implementation of the modulo adders (concerning mainly A1, A4): We have already shown in the specification of the alphabet size that modulo adders can be realized for all possible alphabet sizes with small expense, in such a way that their delay time is the technology-dependent minimal gate delay. Thus every modulo adder can process the whole bit stream, so we do not need to consider parallel operation of several modulo adders.

Conversely, if a member-station is connected to several DC-networks, it does not seem worthwhile to share existing modulo adders among several DC-networks by multiplexing, because multiplexers do not cause considerably less expense than the described modulo adders. Also, it will be demonstrated in detail in 5.4.5.6 that abandoning common parts (as far as possible) makes the simultaneous failure of several DC-networks less likely, and is thus desirable.

implementation of the pseudo-random-number-generators (concerning mainly A3, A4): As the generation of cryptographically strong pseudo-random bit sequences or number sequences is nowadays needed for every DC-network with appreciable transmission rate (compare 3), but is considerably slower than the superimposition and the transmission, several pseudo-random-number-generators must, if necessary, be operated in parallel, and their outputs must be interleaved by a multiplexer into a bit stream which is faster by a factor of the number of pseudo-random-number-generators.

Neither the design nor the implementation of the pseudo-random-number-generators needs to be uniform within the DC-network. In theory, two members can agree on the design of a pseudo-random-number-generator, obtain two equally fast implementations (otherwise the faster one has to be slowed down), and then need only a common secret starting value as well as, possibly, a publicly known exact point in time for the synchronized start of the superimposition of their keys. The advantage of this completely decentralized choice of pseudo-random-number-generators is that more secure and more efficient pseudo-random-number-generators can be introduced into the DC-network step by step, and that probably a large variety of them will be deployed, so that it "is not so bad" if some of them are broken. The disadvantage is obvious: a market only works well if the consumer can estimate the quality of the goods cheaply (if necessary with the help of an expert). I have already expressed in [Pfit 89, 2.2.2.2] my scepticism towards the cryptographic systems offered on the market today, which are mostly based on "secret" algorithms that were each (if the secrecy succeeded) analyzed only by a few "experts", mostly not known by name. Likewise, I have given in [Pfit 89, 2.2.2.3 (and the annex) and 2.2.2.4] several suggestions and justifications for a standardization which, in the case of the DC-network, allows short-term exchange of keys between arbitrary participants and thereby a more flexible and more purposeful arrangement [Cha3 85], [Chau 88] of the key distribution. Furthermore, some of the discussed reliability problems become easier to solve with standardized implementations.

The expense of cryptographically strong pseudo-random-number-generators, still high today, can make it necessary for DC-networks with high bandwidth to compromise with respect to the cryptographic strength of the pseudo-random-number generation. These compromises can consist in the separate or combined application of the following measures:

• member-stations exchange only very few keys,

• some keys are generated with more efficient (and possibly cryptographically weaker) pseudo-random-number-generators, or

• a part of the bandwidth is superimposed with more weakly generated keys.

With the last measure, a separate DC-network would be created in every part of the bandwidth; the anonymity of the sender would differ between these DC-networks, which could, as discussed in [Pfit 89, 4.2], be used for differently sensitive communication. Stronger pseudo-random-number-generators could be retrofitted in addition to the more efficient (and possibly cryptographically weaker) ones at a later point in time.

[Figure 5.17: The DC network's three topologies: key topology, superposition topology, transmission topology]

choice of a suitable topology for the global superimposition (concerning mainly A1): The delay time of a DC-network consists of the time needed for the transmission and the time needed for the superimposition. Hence the transmission topology and the superimposition topology need to be chosen in a mutually matching way, so that the sum of all delay times, and thus the delay time of the DC-network, is as small as possible. The key topology can be chosen independently of these two topologies, because in an abelian group the order and grouping of the addition is irrelevant. The three distinguishable topologies of the DC-network are shown in figure 5.17.

There are two extreme cases for the superimposition topology (and likewise for the transmission topology):

1. The characters are passed around a ring from member-station to member-station. As the superimposition takes at least one gate delay in each member-station, the delay time caused by the superimposition grows for m member-stations with O(m), i.e. at least linearly in m.

2. The characters are transmitted to a central station (corresponding to a present-day switching center) and superimposed there. As shown in figure 5.18, the delay time caused by the superimposition then grows, for m member-stations and gates with a bounded number of inputs and bounded driver power, with O(log m), i.e. proportionally to the logarithm of m. This small growth is preserved if the superimposition is executed decentrally but still tree-shaped, so that not only a star-shaped but also a tree-shaped topology is suited for the global superimposition.

[Figure 5.18: Superimposition topology with minimum delay for gates with two input channels and a driver power of two, for the example of binary superimposing sending: a tree of XOR gates of depth ld m superposes the outputs of the m participating stations, and a tree of amplifiers of depth ld m duplicates the result back to them; components for pulse (re)generation are left out]

A counter modulo l, realized for instance by a shift register, is necessary for the suppression of the carry at the borders of the characters if the superimposition is modulo 2^l, as described above under "specification of the alphabet size". In a central realization of the tree-shaped superimposition this counter need exist only once, because it can control all full adders, whereas in a decentralized realization it must of course exist at every superimposition point.
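A toy computation illustrates the logarithmic growth. The Python sketch below superimposes station outputs pairwise in a balanced tree of two-input XOR gates; the five example outputs are hypothetical.

```python
def tree_superimpose(outputs):
    """Superimpose station outputs pairwise in a balanced tree.
    With two-input gates the depth, and hence the delay caused by
    the superimposition, grows only logarithmically in the number
    of stations, instead of linearly as on a ring."""
    layer, depth = list(outputs), 0
    while len(layer) > 1:
        layer = [layer[i] ^ layer[i + 1] if i + 1 < len(layer) else layer[i]
                 for i in range(0, len(layer), 2)]
        depth += 1
    return layer[0], depth

result, depth = tree_superimpose([0b1010, 0b0110, 0b0001, 0b1000, 0b0011])
assert result == 0b1010 ^ 0b0110 ^ 0b0001 ^ 0b1000 ^ 0b0011
assert depth == 3    # ceil(ld 5); a ring would need 4 sequential steps
```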

Transmission topologies suited for these superimposition topologies are treated in detail in [Pfit 89, 3.3.3], as is the growth of the sum of all delay times, and thereby of the delay time of the DC-network.

synchronization of the superimposition of information units and keys (concerning mainly A3, A4): As already stressed several times, information units and keys must be superimposed in synchrony. If the synchronization is lost between even a single pair of pairwise exchanged keys, no information units can be transferred successfully on the affected DC-network any more. The actions to be taken then are treated in 5.4.5.6.

If the transmission network operates globally synchronously, as for instance planned for ISDN, then this synchrony of the transmission network can also be used for the synchrony of the superimposition. But the bit-rates of, for instance, the planned ISDN are small compared with those of a DC-network of comparable usable performance; also, the rate of acceptable synchronization errors in the planned ISDN is high compared with a DC-network of comparable reliability. Additional measures thus seem inevitable in the general case, whereas all synchronization problems of the superimposition are trivially solvable in special cases, for instance when a decentralized ring-shaped superimposition topology is implemented on a ring-shaped transmission network with high bit-rate and high reliability.

A conceptually simple method, but one very expensive to realize, is to equip the character streams with time stamps (e.g. sequence numbers), to buffer them before they are globally superimposed, and to employ the usual flow-control mechanisms of computer networks with respect to the buffers.

Methods less expensive to realize can be achieved by exploiting specific superimposition and transmission topologies. If the transmission topology of the distribution channel of the DC-network is hierarchical and the clock of the transmission network is passed from top to bottom in a phase-locked manner, every station involved in the superimposition can derive its sending clock from the clock of the distribution channel. If all stations do this in the same manner, their sending clocks are phase-locked. If the superimposition topology is also hierarchical and the delay times on the corresponding transmission lines are constant, then, by choosing the phase position relative to the (receiving) clock of the distribution channel individually for every station, it can be achieved that all sent character sequences traverse the hierarchical superimposition network synchronously.

5.4.5.6 Fault tolerance of the DC-network

An arbitrary bit-transmission network comprising layer 0 and the lower sub-layer of layer 1 of the ISO OSI reference model can be provided for the DC-network with conventional fault-tolerance measures, without regard to anonymity, unobservability and unlinkability, compare 5.4.9. Thus we will mostly consider the upper sub-layer of layer 1 and layer 2 in the following.

The seriality property of the DC-network is that all keys must have been exchanged faultlessly, that all pseudo-random-number-generators (PRGs) and all modulo adders must work faultlessly, and that the synchronization must be preserved.

In contrast to the MIX-network, in which fault-tolerance measures can always be executed in the background, we can clearly distinguish two phases, later called modes, in the DC-network. Normally, all member-stations transmit information units anonymously. If a permanent error occurs either in the method for anonymous multi-access or in the DC-network itself, then no member-station can transmit information units any more until this error is tolerated by disabling or repairing the defective component(s), unless there are several independent DC-networks (statically created parallel redundancy, which can itself be activated statically or dynamically).

The realization of several independent DC-networks is not considered in detail here, as it poses no special difficulties in the construction.

Klaus Echtle points out that it can be convenient to allow every station to receive on all of the independent DC-networks, but to allow every station to send on only relatively few of them (sender-partitioned DC-network). Thereby it is prevented that an error affecting only a few stations, respectively a modifying aggressor controlling only a few stations, can interfere with DC-networks which would otherwise not be independent with respect to just this error respectively aggressor. Figures 5.19 and 5.20 show a DC-network for ten stations configured for the fault guideline that arbitrary faults of one arbitrary station shall be tolerated. [Nied 87] contains a detailed and precise evaluation, for sender-partitioned DC-networks configured in this way, of the reliability, i.e. the probability that all non-faulty stations can still communicate with each other, and of the anonymity of the sender, i.e. among how many stations a sender is anonymous.

As is already apparent in the example, some multi-errors can be tolerated in this sense even by sender-partitioned DC-networks configured for the guideline of an arbitrary single error, for example arbitrary faulty behavior of stations 1, 2 and 5. Arbitrary faulty behavior of stations 1 and 8, however, cannot be tolerated in this sense, because all stations not connected to DC-network 5, namely stations 2, 3, 5 and 6, can then no longer send undisturbed.

The configuration principle of the example can be extended to the fault guideline that arbitrary faulty behavior of f stations shall be tolerated. A necessary and sufficient condition for this is that the sending possibilities of no set of f stations cover the sending possibilities of any other single station.
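This covering condition can be checked mechanically. The following Python sketch does so for an invented toy configuration; the station names and net assignments are illustrative, not those of figures 5.19 and 5.20.

```python
from itertools import combinations

def tolerates(send_nets: dict[str, set[int]], f: int) -> bool:
    """Check the condition for a sender-partitioned DC-network:
    no f stations together may cover the sending possibilities of
    any other single station, otherwise their arbitrary faulty
    behavior could block that station on every DC-net it can use."""
    for s, nets in send_nets.items():
        others = [t for t in send_nets if t != s]
        for faulty in combinations(others, f):
            covered = set().union(*(send_nets[t] for t in faulty))
            if nets <= covered:          # s could be blocked everywhere
                return False
    return True

# Hypothetical configuration: 4 stations sending on 2 of 4 DC-nets each.
send_nets = {"s1": {1, 2}, "s2": {2, 3}, "s3": {3, 4}, "s4": {4, 1}}
print(tolerates(send_nets, 1))   # True: no single station covers another
print(tolerates(send_nets, 2))   # False: e.g. s2 and s4 together cover s1
```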

The complexity of this fault-tolerance measure is, contrary to the first impression, not large, as several, in the extreme case all, DC-networks can be realized by time multiplexing on one single underlying bit-transmission network (which, at least in the extreme case, must be able to tolerate its own faults). A disadvantage is that the sender anonymity on each of the DC-networks is clearly smaller than on a single DC-network containing all stations (for instance, an aggressor who observes a message on DC-network 1 in figure 5.19 knows, even without cooperation of stations 5, 6, 7, 8, 9 and 10, that none of them sent the message). If a sender-partitioned DC-network configured in this way is used, all member-stations must take care that linkable information units are sent only on the same DC-network, which restricts the load balancing between the DC-networks drastically. Otherwise the sender is identifiable in sender-partitioned DC-networks configured for the guideline that an arbitrary error of one arbitrary station shall be tolerated, and the anonymity of the sender is again drastically restricted in DC-networks configured for a more general fault guideline. Thus a static activation of the redundancy of sender-partitioned DC-networks is trivially not possible in a sensible way.

[Figure 5.19: Sender-partitioned DC-network with 10 stations, configured so that arbitrary faulty behavior of one and only one arbitrary station can be tolerated without fault diagnosis; each station has read-and-write access on a few DC-nets and read-only access on the others]

The advantage of this fault-tolerance measure is that fault diagnosis is not necessary, or at least not necessary for a few errors, and that complete fault coverage is achieved, i.e. there are no errors which cannot be tolerated in this way.

If the fault guideline is exceeded, a fault diagnosis, which can be executed with the methods described in the following, may of course become necessary, so that both fault-tolerance methods are then combined.

[Figure 5.20: Farthermost propagation of an error (or a modifying attack) of station 3: the widest possible spread of one station's error across the DC-nets]

Error detection, localization and correction in a DC-network: Transient as well as permanent errors in the bit-transmission network can, as already mentioned, either be tolerated by separate fault-tolerance measures of its own, or they lead to transient respectively permanent errors in the DC-network. Transient errors in the DC-network are tolerated by end-to-end protocols. Thus only the permanent errors in the DC-network remain (failure of PRGs or modulo adders, loss of consistency or synchrony of the keys, etc.). These errors are easy to remedy once they are detected and localized: the corresponding keys (of defective PRGs) are simply no longer superimposed, adders are replaced, keys are redistributed or resynchronized. Thus the main problem is error detection and localization.

Errors in the upper sub-layer of layer 1 can be detected by having the sender add additional redundancy, checkable by every station, to every encrypted information unit, for instance a linearly generated checksum, e.g. a CRC, at the end of the information unit. This is always feasible, because the redundancy takes up only a fraction of the length of the information units, so the usable transmission performance is hardly reduced. If the checksum is generated linearly, superimposition collisions also yield a valid checksum; if an invalid checksum appears, there is an error in the superimposition.
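The linearity argument can be demonstrated directly. The sketch below, in Python, uses a plain 8-bit CRC with zero initial value and no final XOR (standard CRC variants with a nonzero initial value are only affine, not linear); the polynomial and messages are arbitrary examples.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Plain 8-bit CRC: polynomial division over GF(2), zero initial
    value, no final XOR, hence linear: crc8(a XOR b) = crc8(a) XOR crc8(b)."""
    reg = 0
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return reg

a = b"information unit A"
b = b"information unit B"
collision = bytes(x ^ y for x, y in zip(a, b))   # a superimposition collision
# The collision of two units with valid checksums has a valid checksum too:
assert crc8(collision) == crc8(a) ^ crc8(b)
```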

How errors of the multi-access method are detected depends on the method used, but for all recommended methods detection is possible with high probability.

If an error of the multi-access method is detected, the method is re-initialized, in order to tolerate transient errors of the bit-transmission network or of the upper sub-layer of layer 1. If this fails, or if a permanent error of the upper sub-layer of layer 1 is detected in another way, the network switches from the anonymity-guaranteeing anonymity mode (A-mode) into the fault-tolerance mode (F-mode), in which errors are localized and tolerated.

All stations execute the following fault-localization and -removal protocol [Pfi1 85, page 123]:

Every station saves the current state of its pairwise exchanged keys respectively PRGs (in technical terminology: it creates a recovery point [AnLe 81]) and afterwards executes the following self-diagnosis: it loads random keys pairwise into its key memory respectively PRGs and superimposes them locally with a likewise randomly chosen information unit.

If the result is not the random information unit itself, the station is defective and sends a message signaling its defectiveness over the DC-network. All other stations discard the keys they shared with this station and change (under the assumption of a single error) back to the A-mode.

If the result is the random information unit itself, the station uses the recovery point to restore the previous state of its pairwise exchanged keys respectively PRGs.

There are three likely types of errors at this point of the protocol:

• A station is so defective that it cannot diagnose itself or cannot communicate the result to the other stations, or

• the synchronization of the key (generation and) superimposition was lost between at least two stations, or

• the underlying communication network or the global superimposition is defective.


In order to distinguish these types of errors and to localize them, the number of superimposed key pairs is successively halved (the corresponding key-distribution graph keeps exactly half as many edges), and for instance 100 new, not yet used key characters are superimposed.

If the global superimposition result is 100 times the character corresponding to 0, the error lies with probability 1 − g^(−100) in the other half (provided a stuck-at-zero fault of the last global superimposition device can be ruled out; this can be tested by first letting all stations send 100 random characters, which yields a uniformly distributed, and thus with high probability nonzero, result as long as there is no stuck-at-zero fault).

If the global superimposition result is not 100 times the character corresponding to 0, then there is at least one error in the storage or generation of the keys or in their synchronized superimposition, or the underlying communication network or the global superimposition is defective. Thus the halving continues.

If the key-distribution graph has only one edge left and the global superimposition result is not 100 times the character corresponding to 0, the two stations concerned exchange a new key or PRG starting value, in order to remedy a synchronization error. Afterwards both repeat the test.

If the global superimposition result is again not 100 times the character corresponding to 0, then both stations discard the just-exchanged key respectively PRG starting value and each exchanges a key or PRG starting value with a third respectively fourth station. Both new pairs of stations successively superimpose 100 key characters.

If in both cases the result is not 100 times the character corresponding to 0, then the underlying communication network or the global superimposition is with high probability defective. Examining these two cases further is a standard problem of fault tolerance.

If the result is not 100 times the character corresponding to 0 in just one case, then the station originally suspected to be defective is now assumed to be defective and is taken out of service. This is communicated to all stations, so that they discard the keys they shared with this station and change (assuming a single error) back into the A-mode.

By changing back from the F-mode to the A-mode, the method for anonymous multi-access is re-initialized, in order to cancel effects of errors in layer 1 (physical) on layer 2 (data link). All the steps just described are summarized in figure 5.21.
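The heart of the protocol is a binary search over the edges of the key-distribution graph. Below is a minimal Python sketch under the single-fault assumption; the graph, the faulty edge and the test predicate (which abstracts the superimposition of 100 fresh key characters and the check for an all-zero global result) are all hypothetical.

```python
def localize_faulty_edge(edges, superimposition_ok):
    """Halve the key-distribution graph until the faulty key pair is
    found; superimposition_ok(subset) abstracts the test of superimposing
    e.g. 100 fresh key characters over just these edges and checking for
    an all-zero global result. Only logarithmically many tests in the
    number of edges are needed (single-fault assumption)."""
    candidates = list(edges)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if superimposition_ok(half):          # error is in the other half
            candidates = candidates[len(candidates) // 2 :]
        else:
            candidates = half
    return candidates[0]

# Hypothetical key graph on 4 stations; edge ("B","C") has lost synchrony.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D")]
faulty = ("B", "C")
ok = lambda subset: faulty not in subset
assert localize_faulty_edge(edges, ok) == faulty
```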

Of course there are hundreds of variations of this fault-localization and -removal protocol. Their expedience can be judged as soon as the probabilities of (multi-)errors are known. If, for example, the probability of multi-errors is significant, the halves of the key-distribution graph which are not tested by the protocol just described should also be tested, which at most doubles the expense of the protocol.

[Figure 5.21: Error detection, localization and correction in a DC-network: in the A-mode, anonymous message transfer by superimposing sending; after error recognition, change to the F-mode (sender and receiver not protected) with error localization, exclusion of defective components and correction of the error state of the PRGs, then initialization of the access protocol and return to the A-mode]

As these probabilities are not known, the details of the fault localization and removal are consequently not that interesting; what is interesting is that they are possible with time complexity logarithmic in the number of stations.

It is worth noticing that, despite the fault tolerance, the anonymity is not weakened by "error detection, localization and correction in a DC-network" if either genuinely random keys or cryptographically strong PRGs are used. By introducing the two modes and by using each key character in either the first or the second mode only, the anonymity in the A-mode always stays guaranteed in its original extent.

5.4.6 MIX-networks: Protection of the relations of communication

Instead of concealing the sender and, additionally, the receiver, one can try to conceal their connection. In this way one tries to achieve anonymity of the communication relation.

This idea is realized in the procedure of the MIXes that change the encoding [Chau 81, Cha1 84].

In this procedure, suggested by David Chaum in 1981 for electronic mail, messages are not necessarily sent to the receiver on the shortest way. They are routed over a number of intermediate stations which are as independent as possible with regard to their design, their production [Pfit 86] and their operator [Chau 81, Cha1 84]. Additionally, these intermediate stations have to buffer messages of the same length, to recode them, and to reorder them. These intermediate stations are the so-called MIXes. Each MIX recodes messages of the same length, i.e. it encrypts or decrypts them using systems of secrecy, so that their ways cannot be traced via their length or their encoding (their appearance).

So that this tracing of the route is not possible via temporal or spatial relations either, each MIX has to wait for a number of messages of the same length from sufficiently many different senders8 (or, if necessary, produce such messages itself, or have them produced by participant stations as long as there is nothing useful to send; such meaningless messages are, given a suitable structure of encoding and addressing, ignored by one of the following MIXes or by the participant station of the receiver), it has to buffer these messages, and it has to output them, after the change of encoding, in a different order, i.e. in a different chronological sequence respectively on different channels. A suitable order would be the alphabetical order of the recoded messages. (Such an order fixed in advance is better than a random one, because in this way the covert channel of the "random" order is closed to a possible Trojan horse in the MIX concerned. Since the number of messages to be mixed simultaneously and the timing can be prescribed exactly, and since the MIX can be forbidden to produce messages, the MIX needs no room for maneuver; this means there need be no way for a Trojan horse possibly contained in a MIX to send hidden messages to an unauthorized receiver, cp. §1.2.2 and [PoKl 78, Denn 82].)

Messages of the same length which are mixed together, i.e. buffered (or produced) together, recoded and output sorted (e.g. in alphabetical order) in order to hide temporal and spatial relations, are called a batch [Chau 81].9

In addition to buffering, recoding and reordering messages of the same length, each MIX must take care to mix every message only once; in other words, repetitions of messages must be ignored by the MIX. An input message represents a repetition if and only if it has been processed before and the corresponding output messages could be linked by outsiders. (When this is the case is discussed in more detail below, because it depends on the recoding scheme used.)
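The basic functions named so far (ignore repetitions, buffer a batch, recode, output in a fixed order) fit in a few lines. Here is a minimal Python sketch in which a toy XOR transformation stands in for the real cryptographic recoding, the batch size is chosen arbitrarily, and the check for sufficiently many distinct senders is omitted:

```python
class Mix:
    """Minimal sketch of a MIX's basic functions: ignore repeats,
    buffer a batch, recode, and output in a fixed (sorted) order.
    'recode' stands in for decryption with the MIX's secret key."""
    def __init__(self, recode, batch_size):
        self.recode = recode
        self.batch_size = batch_size
        self.seen = set()        # inputs processed before are replays
        self.buffer = []

    def receive(self, msg: bytes) -> list:
        if msg in self.seen:     # repetitions must be ignored
            return []
        self.seen.add(msg)
        self.buffer.append(msg)
        if len(self.buffer) < self.batch_size:
            return []            # keep buffering until the batch is full
        batch, self.buffer = self.buffer, []
        # Recode and emit sorted: the fixed output order betrays nothing
        # about arrival order and leaves no covert channel in the ordering.
        return sorted(self.recode(m) for m in batch)

toy_recode = lambda m: bytes(b ^ 0x5A for b in m)   # stand-in recoding
mix = Mix(toy_recode, batch_size=3)
outputs = [mix.receive(m) for m in [b"m1", b"m2", b"m1", b"m3"]]
assert outputs[:3] == [[], [], []]        # the replayed b"m1" was ignored
assert len(outputs[3]) == 3               # full batch emitted, sorted
```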

If a message is processed several times within one batch, undesirable correspondences arise via the frequencies of the input and output messages of this batch: an input message which appears n times corresponds to an output message which appears n times, too.

8 The demand "messages from sufficiently many different senders" is not easy to fulfill: 1. In many communication networks, e.g. the Internet, unique identities of the participants are not given, i.e. participants can assume many different identities. MIX variants that are useful even in these cases are described in [Kesd 99]. 2. Even when unique identities are given, proving them to the second and further traversed MIXes is non-trivial, cp. exercise 5-20.

9 A MIX implementation (MIXmaster) that is widespread in the Internet works a little differently: messages are collected in a pool up to a certain number, and whenever a new message arrives and is mixed, it is decided randomly which of the messages in the pool plus the new one is output. If the pool size + 1 is chosen equal to the batch size, the average delay of this implementation is twice as large, and the unlinkability of input and output messages is therefore higher. But since in communication networks it must often be guaranteed that a maximal delay is not exceeded, and since the pool implementation cannot give such a guarantee, the batch implementation seems advantageous to me.


If all input messages of one batch appear with pairwise different frequencies, the change of encoding of this batch does not protect at all.

If a message is processed in different batches, non-empty intersections (and differences) of the corresponding sets of input respectively output messages can be formed; thereby linkability, rather than equality, is tested. Both operations can of course also be applied to results of these operations. For messages lying in an intersection (difference) of cardinality 1, the correspondence between input and output message is clear; with respect to them, this MIX does not contribute to the protection of the relations of communication.
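For illustration, a sketch of such an intersection attack in Python; the batches and sender sets are invented:

```python
def intersection_attack(batches):
    """Sketch of the intersection attack on a message processed in
    several batches: intersect the sender sets of all batches in
    which the target's (linkable) output appeared. If the intersection
    shrinks to cardinality 1, the correspondence is clear and the MIX
    contributed nothing to unlinkability."""
    suspects = set(batches[0])
    for senders in batches[1:]:
        suspects &= senders
    return suspects

# Sender sets of three batches that all contained the target's messages:
batches = [{"s1", "s2", "s3"}, {"s2", "s3", "s4"}, {"s2", "s5"}]
print(intersection_attack(batches))   # {'s2'}: the sender is identified
```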

The basic functions of a MIX which were derived and explained in this section are depicted in figure 5.22.

[Figure 5.22: Basic functions of a MIX: buffer the current input batch until sufficiently many messages from sufficiently many senders have arrived, adding dummy messages if necessary (prevents linkability by the messages' timing); ignore repeated messages (prevents linkability by replayed messages); recode all input messages in the same way and resort them (prevents linkability by means of the messages' appearance)]

5.4.6.1 Fundamental deliberations about possibilities and limits of changing the encoding

The aim of the method of the MIXes that change the encoding is that

• all other senders and receivers of the messages that were mixed together in the batches of the MIXes [Pfi1 85, page 11], or

• all MIXes that are passed through by a message [Chau 81, page 85]

have to cooperate in order to uncover the relation of communication against the will of sender and receiver. As long as neither is the case, from an attacker's point of view each message sent by a sender in a certain time interval should fit each message of corresponding length received by a receiver, unless the attacker himself is the sender or the receiver of the message. The length of the time interval is determined by the maximal delay tolerable for the services: each message can of course only arrive after being sent, but it has to reach the receiver before the maximal tolerable delay has elapsed.

This aim,

• which partitions messages and participants, according to their sending and receiving, solely by time intervals and message lengths (cp. §5.1.2), and

• which guarantees (relative to these classes and the described attacker model)

1. unobservability of the relations of communication for outsiders,

2. (if desired) anonymity for each communication partner, and

3. unlinkability of the relations of communication,

is the maximum that can be achieved:


• If all senders and receivers of the messages that were buffered, recoded and output together by one MIX (i.e. mixed in one batch) cooperate, the MIX can in principle be bypassed with respect to a message sent by someone else. This is particularly serious in practice if an attacker can provide n of the n+1 messages that are mixed together in one batch by many MIXes. From this observation, made in [Pfi1 85, page 11], it follows that if all senders and receivers of messages of the same length in a certain time interval cooperate, they can completely observe one further sender and receiver, in spite of his using arbitrarily many MIXes not controlled by the attacker.

• That the method of the MIXes that change the encoding is of no use for a message if all MIXes passed through by this message cooperate is trivial.

From the aim explained above it follows that the change of encoding of a message (or, as explained later, of message parts) with relevant content never consists of an encryption followed by the corresponding decryption, as long as redundant changes of encoding are omitted and the system of secrecy used has only the properties described in §3.1.1.1: if an encryption is followed by a decryption, the decryption has to be done with the key fitting the encryption, since otherwise the message could not be understood later. This means that the message appears in both MIXes in the same form. Therewith these two MIXes can uncover the relations of communication with or without this encryption and decryption; the encryption and decryption are thus irrelevant to the protection of the relations of communication in terms of the aim described above, the more so as the required cooperation of senders and receivers does not change.

From the aim described above it also follows that all messages of the same length in the considered time interval have to pass through the MIXes at the same time (and therefore in the same order, too): if they do not, then there is a MIX which is not passed by two messages N1 and N2 at the same time. If all MIXes cooperate, they can observe this and distinguish the senders and receivers belonging to N1 and N2, cp. figure 5.23.

If all messages of the same length pass through the MIXes at the same time in the considered time interval, and if each participant station sends and receives at least one message of each possible length in the considered time interval, the aim of the method of the MIXes that change the encoding is achieved even without partitioning the participants.

If, in addition to the case mentioned above, each participant station sends and receives a constant number of messages of each possible length in the considered time interval, perfect complexity-theoretic unobservability of the relations of communication with respect to outsiders is achieved, and, as far as desired, perfect complexity-theoretic anonymity of the communication partners as well as perfect complexity-theoretic unlinkability of the relations of communication (cp. §5.1.2).

[Figure 5.23: Maximum security: passing through the MIXes simultaneously and, because of that, in the same sequence]

The change of encoding done by the MIXes has of course to be constructed such that the receiver is able to understand the message: if the message arrives at the receiver encrypted, he has to know all keys necessary for decryption as well as the right order of their application, or he has to get to know them during the process of decryption. Additionally, each MIX must be able to perform the expected change of encoding, i.e. it has to know the key for encryption respectively decryption, or get to know it during the process of changing the encoding.

A change of encoding that encrypts must be specified for the performing MIX by the receiver of the message; a change of encoding that decrypts, by its sender.

Additionally, the keys used by a MIX must not betray anything about the identity of the sender or receiver of the message. It is probably acceptable that the first MIX of a message knows its sender and, if no distribution is used for the protection of the receiver, that the last MIX knows the receiver. To a middle MIX, however, the key used must betray nothing about sender or receiver, which means that in practice a statically agreed secret key, and therewith an information-theoretically secure system of secrecy, cannot be used. (In §5.4.5.4 a method for key exchange was described that is based on sender anonymity through pairwise superimposing receiving. If the sender anonymity is information-theoretic, the key exchange guarantees secrecy and integrity as well as anonymity, likewise in the information-theoretic model. But since information-theoretic sender anonymity is possible only with huge effort, the mentioned key exchange procedure is certainly possible, but hardly practical.) For "middle" MIXes, public keys, and therewith asymmetric systems of secrecy, must be used.

Thus messages to, and therewith messages from, "middle" MIXes can only be encrypted in a way that is at best complexity-theoretically secure. The anonymity achievable in this way is therefore also only of complexity-theoretic nature.

After these basic deliberations concerning possibilities and limits of changing the encoding, concrete recoding schemes can be derived systematically. At first this is done by generalizing the three recoding schemes described in [Chau 81], where they are presented in a simple form and not discussed in detail with regard to their limits and necessity.

5.4.6.2 Sender anonymity

We look for a recoding scheme which permits the sender to send a message to a receiver while staying completely anonymous towards the receiver and all MIXes except the first, and which conceals the relations of communication as long as not all MIXes passed through by the message cooperate, and not all other senders and receivers of messages in the same time interval cooperate. Accordingly, the sender cannot use any secret key shared with the receiver or with the MIXes towards whom he wants to be anonymous. Thus an asymmetric system of secrecy must be used, of which the sender knows the cipher key.

With the keys suitable for encryption, the sender can either encrypt the message directly (direct recoding scheme for sender anonymity), or encrypt a key which specifies a further change of encoding of the message (indirect recoding scheme for sender anonymity, described below in its general form).

Direct recoding scheme for sender anonymity: The sender of a message encrypts it with the publicly known cipher keys of an asymmetric system of secrecy (respectively, for the first MIX in the sequence of MIXes to be passed, if desired with an agreed secret key of a symmetric system of secrecy), so that it must be decrypted, in the order of the MIXes chosen by him, with their private decryption keys (respectively, if applicable, with the secret key agreed with the first MIX). Thereby the appearance of the message changes on every section of its way, i.e. its way cannot be followed (unless all MIXes it passes through cooperate, or the sender cooperates, or all other senders and receivers of messages in the same time interval, more exactly: in the same batches, cooperate).

Let A_1, ..., A_n be the sequence of addresses and c_1, ..., c_n the sequence of publicly known cipher keys of the MIX sequence MIX_1, ..., MIX_n chosen by the sender, whereby c_1 may also be a secret key of a symmetric system of secrecy. Let A_{n+1} be the address of the receiver, who for simplicity is called MIX_{n+1}, and c_{n+1} his cipher key. Let z_1, ..., z_n be a sequence of random bit strings. If c_1 is a secret key of a symmetric system of secrecy, z_1 may be the empty bit string; likewise if c_1 is a cipher key of an asymmetric system of secrecy that encrypts indeterministically. Starting from the message N which the receiver (MIX_{n+1}) is supposed to receive, the sender creates the messages N_i which will be received by MIX_i:

N_{n+1} = c_{n+1}(N)

N_i = c_i(z_i, A_{i+1}, N_{i+1})     (for i = n, ..., 1)

The sender sends N_1 to MIX_1. After decryption, each MIX obtains the address of the next MIX and the message intended for it.

The additional inclusion of the random bit strings is necessary when deterministic asymmetric systems of secrecy are used, because otherwise an aggressor could not only guess and test short standard messages with the publicly known cipher key, but could even test the whole output of the MIX (entirely without guessing).
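The recursive creation of the N_i can be sketched in a few lines of Python. The encryption function is only a stand-in for a real asymmetric system of secrecy, and all names, keys and random strings are illustrative:

```python
import json

def toy_encrypt(key: str, payload: str) -> str:
    """Stand-in for encryption with cipher key c_i; a real system would
    use an asymmetric system of secrecy (indeterministic, or with the
    random bit strings z_i included as below)."""
    return f"enc[{key}]({payload})"

def build_onion(message, addresses, cipher_keys, randoms):
    """N_{n+1} = c_{n+1}(N); N_i = c_i(z_i, A_{i+1}, N_{i+1}) for i = n..1.
    addresses[0..n] are A_1..A_{n+1} (the last one is the receiver's),
    cipher_keys likewise c_1..c_{n+1}, randoms are z_1..z_n."""
    n = len(addresses) - 1
    N = toy_encrypt(cipher_keys[n], message)          # N_{n+1} = c_{n+1}(N)
    for i in range(n - 1, -1, -1):                    # i = n, ..., 1
        N = toy_encrypt(cipher_keys[i],
                        json.dumps([randoms[i], addresses[i + 1], N]))
    return N                                          # N_1, sent to MIX_1

msg   = "the actual message N"
addrs = ["MIX1", "MIX2", "receiver"]                  # A_1, A_2, A_3
keys  = ["c1", "c2", "c3"]                            # public cipher keys
zs    = ["z1", "z2"]                                  # random bit strings
print(build_onion(msg, addrs, keys, zs))
# Each MIX peels one layer, learns the next address, forwards the rest.
```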


With this direct recoding scheme for sender anonymity, the change of encoding of a message is (at least at all MIXes except the first) completely determined by the respective public key. In order that no correspondences between input and output of these MIXes arise via message frequencies, each MIX processes each distinct input message only once with its secret decryption key, as long as it retains its key pair. If a MIX receives the same message several times during this period for decryption with its secret key, it ignores the repetitions.

The direct recoding scheme for sender anonymity just explained in words is depicted in figure 5.24 for two MIXes. There the addresses are omitted, and the indices serve only to distinguish messages and random bit strings, because under the index convention of the recursive creation scheme a double index would have become necessary.

Figure 5.25 shows a larger example (n = 5). It emphasizes who knows which key, and in which order messages are encrypted, transferred and decrypted.

5.4.6.3 Receiver’s anonymity

Besides the possibility to send a message anonymously with respect to the relation of communication, it is necessary to be able to receive a message anonymously with respect to the relation of communication. This is not achieved by the recoding scheme just described, in the sense that the sender of course knows the message N_{n+1} and is therefore able to identify the receiver in a usual communication network as long as

1. the sender himself, or someone cooperating with him, can tap the relevant line, or he has the cooperation of the operator or of a producer of the communication network underlying the MIX network,

2. the address A_{n+1} is a public or explicit address which reveals a reference to persons, or whose creation scheme (oriented on the topology of the communication network) is generally known respectively can be exploited, or the "location" of this address can be disclosed by a producer of the communication network, or

3. in addition to points 1 and 2, the sender has the cooperation of MIX_n, which is probable if he is able to choose it.

This problem can be solved

• with respect to the first threat, by encrypting the connection between MIX_n and the receiver [Pfi1 85, pages 12, 14],

• with respect to the second threat, by using implicit and private addresses (for example, A_{n+1} could be an implicit private address agreed between MIX_n and the receiver, which MIX_n then replaces with an explicit public address), and

• with respect to all threats, with the procedure described in §5.4.1 for protecting the receiver by implicit addressing and distribution.

[Figure 5.24: MIXes hide the connections between incoming and outgoing messages: MIX_1 receives c_1(z_1, c_2(z_4, N_1)), c_1(z_2, c_2(z_5, N_2)), c_1(z_3, c_2(z_6, N_3)); each MIX ignores repetitions, buffers messages, recodes them using d_j(c_j(z_i, N_i)) = (z_i, N_i), discards the z_i, resorts and outputs the N_i. Here c_j is the encryption key and d_j the decryption key of MIX_j; the z_i are random bit sequences, the N_i are messages.]

[Figure 5.25: Transmission and encryption structure of messages in a MIX network, using a direct scheme of recoding for sender anonymity: sender S encrypts successively with the receiver's key c_E and with c_5, c_4, c_3, c_2, c_1; the message is transferred via MIX_1, ..., MIX_5, which decrypt with d_1, ..., d_5, to the receiver E]

The latter protects even more than the relations of communication. Depending on structure and performance of the underlying communication network, however, the distribution of the "last" messages (i.e. those directed to the receiver) can reduce the performance of the communication network unacceptably. Thus a less costly kind of protection of the receiver with respect to his relation of communication to the sender is sought. This kind of protection need not protect the sender from the receiver, because for that it can be combined with a recoding scheme for sender anonymity: the sender sends his message to some instance X using the recoding scheme for sender anonymity, for example to some MIX, with the request to send it to the receiver using the scheme that protects the receiver from the sender. For the protection of the relation of communication between sender and receiver (more exactly: their mutual anonymity) it is entirely uncritical that the sender can observe the receiving of his message by X and that the receiver can observe the sending of his message by X. Once the instance X, introduced here only as the simplest way to explain the principle, has fulfilled its purpose, it can be removed: the concatenation of the recoding schemes can easily be modified so that the last MIX of the scheme for sender anonymity sends the message (which so far went to X) directly to the first MIX of the scheme for receiver anonymity.

If one thus ignores the protection of the sender from the receiver with respect to the relation of communication, a recoding scheme is sought which allows the receiver to receive a message's content from a sender while staying anonymous towards the sender and all MIXes used except the last, and which conceals the relations of communication as long as not all MIXes passed through by the message, or all other senders and receivers of messages in the same time interval, cooperate. In order to stay anonymous towards the sender, the receiver has to specify the change of encoding for the MIXes passed through, but not in the way the sender does in the recoding scheme for sender anonymity. With respect to the message content created by the sender, the change of encoding must be an encryption, and for this each key to be used has to be delivered to the respective MIX, because the receiver (by what was explained above) cannot use a statically exchanged secret key with the sender or with the MIXes towards whom he wants to be anonymous. This delivery of the keys is done most practically by appending a second part, the so-called return-address part, to the message content generated by the sender. This return-address part is part of an untraceable return address created by the receiver [Chau 81]. The return-address part is decrypted by each MIX with the secret decryption key fitting its publicly known cipher key (if applicable, a secret key of a symmetric system of secrecy agreed between the last MIX and the receiver can be used); it yields not only the return-address part for the next MIX, but also the key this MIX is supposed to use for encrypting the message part created by the sender. Thus it follows:

Recoding schemes for receiver anonymity must be indirect recoding schemes.

Indirect recoding scheme for receiver anonymity: Let A_1, ..., A_m be the sequence of addresses and c_1, ..., c_m the sequence of publicly known cipher keys of the MIX sequence MIX_1, ..., MIX_m chosen by the receiver, whereby c_m may also be a secret key of a symmetric system of secrecy. The message accompanied by the return address will pass these MIXes in ascending order of their indices. Let A_{m+1} be the address of the receiver, who for simplicity is called MIX_{m+1}; likewise, for simplicity, the sender is called MIX_0. The receiver creates an untraceable return address (k_0, A_1, R_1), where k_0 is a key of a symmetric system of secrecy generated just for this purpose (an asymmetric system of secrecy would be possible too, but costlier). MIX_0 is supposed to use this key k_0 to encrypt the message content, so that MIX_1 cannot read it. R_1 is the part of the return address which MIX_0 transmits together with the message content it generated and encrypted with k_0. R_1 is built from a randomly chosen unique name e of the return address according to the following recursive scheme, in which

• R_j denotes the part of the return address received by MIX_j, and

• k_j denotes the key of a symmetric system of secrecy (an asymmetric system of secrecy would be possible too, but costlier) with which MIX_j encrypts the part of the message containing the message content:


R_{m+1} = e

R_j = c_j(k_j, A_{j+1}, R_{j+1})     (for j = m, ..., 1)

These return-address parts R_j and the message content I generated by the sender (possibly already encrypted several times), called the message-content part I_j, constitute the messages N_j. Message N_j is created by MIX_{j-1} and sent to MIX_j; according to the following recursive scheme, the messages are created and sent first by the sender MIX_0 and then, in sequence, by the MIXes MIX_1, ..., MIX_m passed through:

N_1 = R_1, I_1;   I_1 = k_0(I)

N_j = R_j, I_j;   I_j = k_{j-1}(I_{j-1})   (for j = 2, ..., m+1)

Thus the receiver MIX_{m+1} receives N_{m+1} = e, k_m(... k_1(k_0(I)) ...) and is able to decrypt without any problems and to extract the message content I, because he can assign all secret keys k_j (in case of an asymmetric system of secrecy: all decryption keys) in the right order to the unique name e of the return address part.
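To make the recursion concrete, here is a minimal Python sketch of the scheme (our own illustration, not from the original): a toy XOR "cipher" stands in for both c_j and k_j, and c_j(...) is modelled purely structurally, so the sketch shows only the nesting and the flow of keys, not real security. All helper names (make_return_address, send_reply) are hypothetical.

import hashlib, os

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a hash-derived keystream; en- and decryption coincide.
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def make_return_address(addresses, m):
    # Receiver side: R_{m+1} = e; R_j = c_j(k_j, A_{j+1}, R_{j+1}) for j = m, ..., 1.
    # c_j(...) is modelled as a tuple that only MIX_j is assumed to be able to open.
    e = os.urandom(8)                                  # unique name of the return address
    keys = [os.urandom(16) for _ in range(m + 1)]      # k_0, ..., k_m
    R = e                                              # R_{m+1}
    for j in range(m, 0, -1):
        R = ("c%d" % j, keys[j], addresses[j + 1], R)  # stands for c_j(k_j, A_{j+1}, R_{j+1})
    return keys, (keys[0], addresses[1], R)            # anonymous return address (k_0, A_1, R_1)

def send_reply(return_address, content, m):
    # Sender (MIX_0) and MIX_1, ..., MIX_m: I_1 = k_0(I); I_{j+1} = k_j(I_j).
    k0, A1, R = return_address
    I = xor_crypt(k0, content)                         # I_1 = k_0(I)
    for j in range(1, m + 1):
        _, kj, next_addr, R = R                        # MIX_j learns k_j and A_{j+1}; routing omitted
        I = xor_crypt(kj, I)                           # encrypting change of encoding
    return R, I                                        # receiver gets e and k_m(... k_1(k_0(I)) ...)

m = 3
addresses = {j: "MIX%d" % j for j in range(1, m + 1)}
addresses[m + 1] = "receiver"
keys, ra = make_return_address(addresses, m)
e, cipher = send_reply(ra, b"hello", m)
for k in reversed(keys):                               # receiver, knowing e, strips k_m, ..., k_0
    cipher = xor_crypt(k, cipher)
assert cipher == b"hello"

Note that the receiver can perform the final decryption only because the unique name e lets him look up the keys k_0, ..., k_m he generated for exactly this return address.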

The additional encryption of random bit chains using deterministic systems of secrecy is not necessary, because the message parts R_j, encrypted with the publicly known cipher keys of the MIXes, already contain enough information the attacker does not know. Since the encoding is changed with keys unknown to the attacker, the attacker has no possibility to test.

In this indirect scheme of changing the encoding for receiver anonymity, only the change of encoding of the return address part (at least at all MIXes but the last) is completely determined by the respective publicly known cipher key. In order that no correspondences between in- and output of these MIXes can arise via repeated messages, each MIX processes only different return address parts (as long as it keeps its key pair). From this it follows:

Each anonymous return address part can only be used once.

A repetition of a message content part I_j is uncritical as long as a different key is always used for changing the encoding. This is the case if and only if new keys k_j are always used for creating anonymous return addresses, because each anonymous return address can only be used once. If a receiver does not pay attention to this while creating anonymous return addresses, he only hurts himself. If an old key k_j is used in creating a return address, MIX_j cannot hurt the generator of k_j more than by betraying the linkage between the messages which the original change of encoding with k_j was supposed to conceal. Figure 5.29 shows this scheme graphically.


5.4.6.4 Mutual anonymity

If this scheme of changing the encoding for receiver anonymity is used alone, the relations of communication are completely concealed from non-involved parties (just as when the scheme of changing the encoding for sender anonymity is used alone). But because the receiver knows the return address part R_1 here, he is able to observe the sender as long as

1. the receiver himself, or somebody who cooperates with him, is able to listen in on the corresponding line, or the receiver cooperates with a carrier or a producer of the communication network underlying the MIXes, or

2. the receiver has the cooperation of MIX_1 (which is very probable if he is able to select it), and the communication network underlying the MIXes assigns public or explicit sender addresses to the messages sent over it (and discloses them, as is planned for ISDN) which reveal a reference to persons, or whose scheme of creation (mostly oriented by the topology of the communication network) is either publicly known resp. can easily be derived, or the "place" associated with an address can be learned from the carrier or a producer of the communication network, or

3. in addition to the 1st point, the receiver cooperates with MIX_1, which is – as already mentioned – very probable if he is able to select it.

Regarding the first threat, this problem can be solved by link encryption of the connections [Pfi1 85, page 12, 14]; regarding the second threat, by a suitable design of the protocol in the communication network underlying the MIX network; and regarding all other threats, by the procedure for protection of the sender described in §5.4.4. The latter protects even more than the relations of communication. Depending on structure and performance of the underlying communication network, these methods can reduce the performance of the communication network unacceptably, which means that then – as far as the application requires mutual anonymity – only the already described combination of a scheme of changing the encoding for sender anonymity with one for receiver anonymity remains [Pfi1 85, page 14f]. (This idea is probably contained in [Chau 81, page 85] as well, but the formulation there is wrong in its generality.) This mutual anonymity must of course already be available in the first message, which is achieved by the indirection step described before, using an (actually necessary) additional instance X (resp. at least functionality that can be provided by an instance that is already available), here a directory of addresses containing anonymous return addresses. Because each anonymous return address can be used only once, the use of printed directories is very cumbersome. An electronic directory which puts out each address only once, in contrast, is no problem.

5.4.6.5 Changing the encoding without changing the length of messages

Having derived the simplest possible schemes of changing the encoding for guaranteeing anonymity of sender and receiver, and having described their fields of application, we now point out another property which they have in common: the length of the message changes in both schemes of changing the encoding – messages are getting shorter. This is no problem if all messages pass through the same MIXes in the same order, which is necessary anyway to achieve the maximal possible anonymity (as shown in §5.4.6.1), cp. the upper part of figure 5.26. Considerations of performance and costs could, however, prevent that all messages get their encoding changed in all MIXes. In such MIX networks the decreasing length of messages could provide interesting information for attackers. It is easy to see that in the lower part of figure 5.26 messages do not cross within the MIXes, whereas in the upper part this cannot be told.

[Figure 5.26: Decreasing message length while changing the encoding – upper part: equal MIX order, shrinking message lengths are no problem; lower part: different MIX orders, shrinking message lengths are problematic]

Therefore it is desirable to use a scheme of changing the encoding that keeps the length of the messages, so that nobody (except their original generators) is able to tell from looking at a message how often its encoding has been changed or will be changed.


Additionally, all messages should have the same length – as was shown at the beginning of this section – and should not be distinguishable by the applied scheme of changing the encoding. Thus a universal scheme of changing the encoding while keeping the message length is sought. As was said in the derivation of the scheme of changing the encoding for receiver anonymity, this must inevitably be an indirect scheme of changing the encoding.

Indirect scheme of changing the encoding while keeping a message's length:

In addition to the notations introduced in the section about schemes of changing the encoding for receiver anonymity, square brackets denote block limits. Starting with the sender (also called MIX_0), the MIXes MIX_1, ..., MIX_m shall be passed through in sequence and finally the receiver (also called MIX_{m+1}) shall be reached. The receiver does not need to know that he is the receiver when receiving the first message; he processes it just as all previous MIXes have done. Each message N_j consists of b blocks of the same length, which is attuned to the applied asymmetric system of secrecy; it is created by MIX_{j-1}. The first m+2−j blocks of N_j contain the (return) address part R_j, the last b−m−1 blocks contain the message content. Each MIX_j decrypts the first block of the message N_j using its secret decryption key d_j. As result of the decryption it finds a key k_j of a symmetric system of secrecy (an asymmetric system of secrecy would be possible, too, but costlier) that is attuned to the block length, to be used for changing the encoding of the remaining part of the message. It also finds the address A_{j+1} of the next MIX (or of the receiver). The receiver finds his own address as the address of the next MIX and thereby recognizes that he is the receiver of the message. This first block of N_j is not needed by the following MIXes (resp. the receiver) and is therefore thrown away by MIX_j. MIX_j changes the encoding of the remaining b−1 blocks using the key k_j and inserts a block with random content Z_j in front of the first block with message content, in order not to change the message length (in [Chau 81, page 87] the encoding of this block is changed, too, using the key k_j, which is redundant because of the random content).

For this to work, the (return) address part R_1 used by the sender (containing a randomly chosen unique name e) is created according to the following scheme:

R_{m+1} = [e]

R_j = [c_j(k_j, A_{j+1})], k_j(R_{j+1})   (for j = m, ..., 1)
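A small sketch of this header construction (again with structural placeholders for c_j and k_j; the block arithmetic is the point, all names are ours):

import os

BLOCK = 16  # toy block length; attuned to the asymmetric system in the real scheme

def c_enc(j, payload):
    # Stands for one block c_j(...): asymmetric encryption under MIX_j's cipher key.
    return ("c%d" % j, payload)

def k_enc(key, blocks):
    # Stands for k_j(...): symmetric recoding of a list of blocks, block count preserved.
    return [("k", key, blk) for blk in blocks]

def build_R1(m, keys, addresses):
    # R_{m+1} = [e];  R_j = [c_j(k_j, A_{j+1})], k_j(R_{j+1})  for j = m, ..., 1.
    R = [os.urandom(BLOCK)]                        # R_{m+1} = [e], one block
    for j in range(m, 0, -1):
        R = [c_enc(j, (keys[j], addresses[j + 1]))] + k_enc(keys[j], R)
    return R

m = 3
keys = {j: os.urandom(BLOCK) for j in range(1, m + 1)}
addresses = {j: "MIX%d" % j for j in range(2, m + 1)}
addresses[m + 1] = "receiver"
assert len(build_R1(m, keys, addresses)) == m + 1  # R_1 occupies m+2-1 = m+1 blocks

Each recursion step adds exactly one block, which is why R_j occupies m+2−j blocks.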

The creation scheme of the messages N_j, and especially the keeping of the message length in this recursive scheme of changing the encoding, is illustrated in figure 5.27. Because MIX_j does not need to know its index j, it does not need to differentiate between blocks of the (return) address part and blocks containing random content. Referring to figure 5.27 this means that MIX_j merely knows that block m+1 of the message N_{j+1} is a block containing random content (it is gray-colored in the figure).

Because all MIXes are supposed to get to know m (in order to know where to insert their block with random content), the MIXes know both m as the upper bound for the number of MIXes that are passed through and b−m−1 as the upper bound for the length (given in blocks) of the message content.

[Figure 5.27: Indirect length preserving recoding scheme – message N_j consists of the (return) address part R_j (blocks 1 to m+2−j, with block 2 onwards holding k_j(R_{j+1})), blocks with random contents, and blocks containing the message; MIX_j decrypts block 1 using d_j, recodes the remaining blocks using k_j, and inserts its block with random content Z_j at position m+1 of N_{j+1}]

From the creation scheme of the return address part one can gather that all blocks of the (return) address part (apart from the first block of each N_j) are supposed to be decrypted with k_j. Whether the last b−m−1 blocks containing the message content are to be en- or decrypted with k_j depends on whether the change of encoding executed by MIX_j serves sender or receiver anonymity. If the change of encoding using k_j is supposed to serve sender anonymity, MIX_j has to decrypt, because otherwise the receiver would have to get to know the key k_j – in which case the change of encoding as protection against him would be useless – or he would not be able to understand the message content. Accordingly, MIX_j has to encrypt if the change of encoding is supposed to serve receiver anonymity. Instead of adding additional bits to the above-mentioned recursive scheme of creating the (return) address parts (indicating whether the MIXes are supposed to en- or decrypt the blocks containing message content), this information is regarded as part of the key k_j.

So that MIX_j does not need to know its index j, all blocks containing random content received by it shall be treated in the same way as all blocks (apart from the first) of the (return) address part R_j: they shall be decrypted. This is formally expressed by combining the (return) address part R_j and the blocks containing random content into the message header H_j. In the following, [H_j]_{x,y} stands for the blocks x through y (both inclusive) of H_j. If x is greater than y, [H_j]_{x,y} does not stand for any block.

H_1 = R_1 = [c_1(k_1, A_2)], k_1(R_2)

H_j = k_{j-1}^{-1}([H_{j-1}]_{2,m+1}), [Z_{j-1}]
    = [c_j(k_j, A_{j+1})], k_j(R_{j+1}), k_{j-1}^{-1}([H_{j-1}]_{m+4-j,m+1}), [Z_{j-1}]   (for j = 2, ..., m+1)

If the indirect scheme of changing the encoding while keeping the message length is used for sender anonymity, the sender has to generate the keys k_1, k_2, ..., k_m and has to know the cipher key c_{m+1} and the address A_{m+1} of the receiver. So that the message can be understood by the receiver, the sender has to encrypt it with the keys generated by him before dispatching it. Thus the message N_1 is created by the sender according to the following scheme, based on the message content I = [I_1], ..., [I_i], which is divided into i = b−m−1 blocks (if the message is too short, it has to be padded to the fitting length), and the message header H_1:

N_1 = H_1, I_1;   I_1 = k_1(k_2(... k_m(c_{m+1}(I)) ...))
   = k_1(k_2(... k_m(c_{m+1}([I_1])) ...)), ..., k_1(k_2(... k_m(c_{m+1}([I_i])) ...))

N_j = H_j, I_j;   I_j = k_{j-1}^{-1}(I_{j-1})   (for j = 2, ..., m+1)

The message N_j consists of:

1. the (return) address part R_j, consisting of m+2−j blocks,

2. j−1 blocks of random content, which may have been "decrypted" several times, and

3. the message content part I_j, which may have been "decrypted" several times and which was generated and encrypted with the keys c_{m+1}, k_m, ..., k_1 by the sender.

From this message, MIX_j creates the message N_{j+1} according to the recursive scheme explained above. Figure 5.28 exemplifies the use of the scheme of changing the encoding without changing the message length for sender anonymity and m = 5; this representation is already known from figure 5.25.

If the scheme of changing the encoding without changing the message length is supposed to be used for receiver anonymity, the receiver generates the keys k_0, k_1, ..., k_m and the anonymous return address (k_0, A_1, R_1), which is disclosed to the sender. The sender creates the message N_1 from (k_0, A_1, R_1), and each MIX_j creates the message N_{j+1} from N_j, according to the following recursive scheme:

N_1 = H_1, I_1;   I_1 = k_0(I) = k_0([I_1]), ..., k_0([I_i])

N_j = H_j, I_j;   I_j = k_{j-1}(I_{j-1})   (for j = 2, ..., m+1)
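The processing step of a single MIX in the length-preserving scheme can be sketched as follows (placeholder primitives, hypothetical names; d_j and the recoding function are passed in as callables):

import os

BLOCK = 16  # toy block length

def mix_step(Nj, d_j, k_recode, m):
    # One step N_j -> N_{j+1}: decrypt block 1 with d_j, recode the other b-1
    # blocks with k_j (en- or decryption, as signalled inside k_j), and insert
    # the random block Z_j at position m+1 so the length stays at b blocks.
    first, rest = Nj[0], Nj[1:]
    k_j, A_next = d_j(first)                        # block 1 yields k_j and A_{j+1}
    recoded = [k_recode(k_j, blk) for blk in rest]  # b-1 blocks, length unchanged
    N_next = recoded[:m] + [os.urandom(BLOCK)] + recoded[m:]
    assert len(N_next) == len(Nj)                   # message length is preserved
    return A_next, N_next

# toy demo with b = 8 blocks and m = 5: d_j and k_recode are trivial stand-ins
demo_N = [b"first block"] + [os.urandom(BLOCK) for _ in range(7)]
addr, out = mix_step(demo_N, lambda blk: (b"k_j", "MIX_next"), lambda k, blk: blk, m=5)
assert addr == "MIX_next" and len(out) == 8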


[Figure 5.28: Indirect recoding scheme for sender anonymity (m = 5) – encryption by the sender S using c_5, k_5, c_4, k_4, ..., c_1, k_1, transfer, and decryption at MIX_1, ..., MIX_5 using d_1, k_1, ..., d_5, k_5, ending at the receiver E]

After receipt of N_{m+1} the receiver recognizes the return address by the unique name e and is now able to decrypt using the known and appropriate keys k_m, k_{m-1}, ..., k_0, and thus he obtains the plain text.

This is shown in figure 5.29 for the example m = 5, using the graphical representation of figure 5.25. It is not shown in detail how the receiver R conveys the return address to the sender S. Numbers represented in outline specify the order of the steps.

If, with this scheme of changing the encoding without changing the message length, sender and receiver are supposed to remain anonymous towards each other in a checkable way, the blocks of the (return) address part from m down to a certain index g (1 < g ≤ m) have to be created by the receiver, and the blocks for the remaining g−1 MIXes by the sender. Thus the sender receives R_g and, from these m+2−g blocks, creates the return address R_1, which consists of m+1 blocks. He encrypts the message content using the key k_S (which was supplied as part of the return address) and with the g−1 keys that he has generated. After receipt of the message content, which was encrypted several times, the receiver has to decrypt with the keys k_g, ..., k_m, k_S.

Figure 5.30 illustrates this for m = 5 and g = 4, using the graphical notation of figure 5.25. As in figure 5.29, the numbers represented in outline specify the order of the steps.

In order to save transmission effort and time, one MIX can render the service of both the last MIX of the part guaranteeing sender anonymity and the first MIX of the part guaranteeing receiver anonymity (in the example shown in figure 5.30, the functions of MIX_3 and MIX_4). This is useful if and only if sender and receiver have no special wishes regarding trust in single MIXes.

If messages between mutual partners are supposed to carry return addresses, it is apparent that in this case not all i blocks containing message content are available for the actual message content: m+2−g blocks are needed for the return address – and this in each


[Figure 5.29: Indirect recoding scheme for receiver anonymity (m = 5) – encryption, observable and unobservable transfer, and decryption of message head and message content using the keys k_S, k_1, ..., k_5 at S, MIX_1, ..., MIX_5 and E; outlined numbers specify the order of the steps]


message (even if g can be chosen differently for each message), because – exactly as in the indirect scheme of changing the encoding for receiver anonymity – each MIX (as long as it keeps its key pair) may process only different address parts in the indirect scheme of changing the encoding without changing the message length.

The same holds for changing the encoding of blocks which belong to the same message and whose encoding is changed with the same key: if the system of secrecy used is a block cipher, no two blocks whose encoding is to be changed may be identical. The same holds if a self-synchronizing stream cipher is used, cp. §3. Only if a synchronous stream cipher is used does the MIX not need to take precautions in this respect. This holds correspondingly for every indirect scheme of changing the encoding. It is emphasized here particularly because we are talking about "blocks". This active attack by repetition of message parts whose encoding is to be changed indirectly was overlooked in [Chau 81, Pfi1 85, page 14, 27].

Furthermore it is emphasized that the changes of encoding of the (return) address part which are specified by the keys c_2 to c_{m-1} can only be computationally secure. The anonymity of the relations of communication achieved by the MIXes MIX_2 to MIX_{m-1} can only be computationally secure, too. The changes of encoding specified by c_1 resp. c_m can be information-theoretically secure, by exchanging secret keys between sender resp. receiver and MIX_1 resp. MIX_m. As long as sender or receiver can choose the order of the MIXes freely, they should exchange the secret key with the MIX they trust most. As was substantiated before, however, the free choice of the MIX order precludes achieving the maximal protection goal of this procedure of MIXes changing the encoding.

If the symmetric system of secrecy has, in addition to the property k^{-1}(k(x)) = x postulated in §3 for all keys k and plain texts x, the property k(k^{-1}(x)) = x, i.e. if not only en- then decryption but also de- then encryption are inverse to each other, then it is not necessary to decide between the use for sender anonymity and receiver anonymity in the indirect scheme of changing the encoding without changing the message length, as will be explained in the following. Additionally it is then possible to withhold from the MIXes the knowledge of:

• m, as the upper bound of the number of MIXes to be passed through, as well as

• b−m−1, as the upper bound of the length (in blocks) of the message content.

Instead of at position m+1, each MIX then attaches its block with random content as the last block. The receiver will then receive the message content in the blocks from block 2 onwards instead of in the last blocks, cp. figure 5.31.

If it is arbitrarily determined that each MIX encrypts all blocks of the message, one obtains the only scheme of changing the encoding without changing the message length that is specified in [Chau 81]. As mentioned before, a redundant encryption of the


[Figure 5.30: Indirect recoding scheme for sender and receiver anonymity (m = 5, g = 4) – keys k_1, ..., k_3 for anonymity of the sender, keys k_4, k_5, k_S for anonymity of the receiver; encryption, observable and unobservable transfer, and decryption of message head and message content; outlined numbers specify the order of the steps]


[Figure 5.31: Indirect length preserving recoding scheme for special symmetric concelation systems – as figure 5.27, but each MIX_j appends its block with random content Z_j as the last block b; decryption using d_j, recoding using k_j]

attached block with random content is avoided:

R_{m+1} = [e]

R_j = [c_j(k_j, A_{j+1})], k_j^{-1}(R_{j+1})   (for j = m, ..., 1)

N_1 = R_1, I_1;   I_1 = k_0(I) = k_0([I_1]), ..., k_0([I_i])

N_j = R_j, I_j;   I_j = k_{j-1}(I_{j-1}), [Z_{j-1}]   (for j = 2, ..., m+1)

If the message content I is plain text, the receiver is only able to understand it if he knows the keys k_0, k_1, ..., k_m; thus (k_0, A_1, R_1) is, in other words, an anonymous return address, and therewith the scheme of changing the encoding for receiver anonymity is used.

If the scheme of changing the encoding for sender anonymity is supposed to be used, the sender generates the keys k_1, k_2, ..., k_m and has to know the cipher key c_{m+1} and the address A_{m+1}. So that the message content can be understood by the receiver, the sender has to decrypt it with the keys generated by himself before he sends the message. Of the formulas above, only the one for I_1 has to be modified:

I_1 = k_1^{-1}(k_2^{-1}(... k_m^{-1}(c_{m+1}(I)) ...))
   = k_1^{-1}(k_2^{-1}(... k_m^{-1}(c_{m+1}([I_1])) ...)), ..., k_1^{-1}(k_2^{-1}(... k_m^{-1}(c_{m+1}([I_i])) ...))
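The property k(k^{-1}(x)) = x holds, for instance, for any XOR-based stream cipher, where en- and decryption are the very same operation; the following toy check (our own illustration, not from the original) makes this concrete:

import hashlib

def k_apply(key: bytes, x: bytes) -> bytes:
    # XOR with a hash-derived keystream: k and k^{-1} are the same map,
    # so k(k^{-1}(x)) = k^{-1}(k(x)) = x holds trivially.
    stream, ctr = b"", 0
    while len(stream) < len(x):
        stream += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(x, stream))

key, x = b"k_j", b"some block content"
assert k_apply(key, k_apply(key, x)) == x

With such a system each MIX can simply apply k_j to every block, without knowing whether the layer it removes resp. adds serves sender or receiver anonymity.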


5.4.6.6 Efficient avoidance of changing the encoding repeatedly

As can be gathered from the examples just mentioned, in all schemes of changing the encoding the MIX stations have to take care not to execute one specific change of encoding more than once (i.e. same message part whose encoding is to be changed, same system of secrecy, and same key specifying the change of encoding). Otherwise an attacker could repeatedly send different messages containing the same message part. He could then observe which output message part is repeated in the corresponding MIX outputs with corresponding frequency. With this knowledge the attacker would be able to bridge this MIX also with respect to the message that contained the message part originally.

An idea one could have – which is unfortunately false – in order to solve this problem (which may cause a lot of memory and message-comparison effort) consists of denying the attacker access to messages (and therewith to message parts) between MIXes resp. between sender and first MIX as well as between last MIX and receiver, by link encryption of each connection. The mistake in this idea is that this kind of attack then becomes impossible only for parties not involved in changing the encoding of a message, but not for participants, especially the MIXes. The first and the last MIX could jointly disclose the relations of communication, by the first MIX repeatedly sending a message part of a message through all MIXes lying between them (which would be completely useless in this case).

This leaves four possibilities to reduce the memory and message-comparison effort. As described in [Chau 81, page 85],

1. the publicly known cipher keys of the MIXes can be exchanged for new keys more often, or

2. the parts whose change of encoding is specified by the public cipher key of the respective MIX can contain a time "stamp" which definitely determines^{10} at which time their only acceptable change of encoding will take place.

The latter would reduce the effort almost to zero, but both possibilities are inapplicable to resp. in anonymous return addresses if the sender is supposed to choose freely when he wants to reply.

Therefore the possibility mentioned in [Pfi1 85, page 74f] is important:

3. Instead of saving the images of message parts whose encoding the MIXes are supposed to change at their full, variable length (by the definition of asymmetric systems of secrecy they must be at least 100 bits long, and for security reasons should be even longer than that), they can be saved with a constant, shorter length, e.g. 50 bits, using abbreviating hash functions.

This hash function is merely supposed to map the preimage space uniformly onto the image space – it is perfectly acceptable that two different preimages which are mapped to the same image can be found (with justifiable effort).

^{10} It can be useful, too, to determine with this time stamp when the message's encoding is changed at the latest. This makes a trade-off possible between keeping the already-mixed-list small and the flexible use of, e.g., return addresses.


Before a MIX applies its secret decryption key to a message part, it computes the corresponding image using the hash function and checks whether it is already recorded in its already-mixed-list. If yes, it ignores the message; if no, the image is registered in the already-mixed-list and the decryption is executed. The disadvantage of this procedure is that with a very small probability two different message parts are randomly mapped to the same image, and the one arriving later is then, by mistake, not processed. In this case the sender is supposed to retry with a modified message, e.g. by changing the key or the random bit chain. For being able to do so, fault-tolerance measures are required. But these are needed even more urgently and frequently anyway (in short: in any case) and are described in detail in [Pfit 89, §5.3].
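A minimal sketch of such an already-mixed-list in Python (the truncation length and all names are our own choices for illustration):

import hashlib

class AlreadyMixedList:
    # Remembers short hash images of message parts already processed. Truncating
    # to a constant length (here 6 bytes, i.e. 48 bits) trades a tiny probability
    # of wrongly rejecting a fresh message for a constant memory cost per entry.
    def __init__(self, digest_len: int = 6):
        self.digest_len = digest_len
        self.seen = set()

    def accept(self, message_part: bytes) -> bool:
        image = hashlib.sha256(message_part).digest()[:self.digest_len]
        if image in self.seen:
            return False       # repetition (or a rare collision): ignore the message
        self.seen.add(image)
        return True            # new image: register it, then apply the decryption key

aml = AlreadyMixedList()
assert aml.accept(b"ciphertext part") is True
assert aml.accept(b"ciphertext part") is False   # the repeated part is rejected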

Another, very similar possibility is chosen in a MIX implementation (MIXMaster) which is prevalent in the Internet:

4. a certain part of each message is saved (and always compared). This can be a part of the message whose encoding is yet to be changed or one whose encoding has already been changed. In the MIXMaster implementation the random bit chains are chosen as this part (cp. §5.4.6.2).

In contrast to the third possibility, the fourth saves the effort of applying the hash function, but for memory efficiency it presupposes that a message part with high randomness (in technical terminology: entropy) is chosen. Random bit chains have this property, of course. Thus the MIXMaster implementation is designed very well.

By means of the procedure of anonymous demands described in the following, the 1st and 2nd possibility for reducing the memory and message-comparison effort of the MIXes can be applied for the protection of the receiver, too: Instead of an anonymous return address (with long validity), only a classification number is disclosed to the partner. The partner sends his answer, addressed with the classification number and encrypted end-to-end, to a non-anonymous instance X, which buffers it until an anonymous demand takes place. Once in a while the participant station of the receiver sends, using a scheme for sender anonymity, an anonymous return address to X, whereupon the participant station receives the answer within a determined batch, as far as it has already arrived. Although obviously more messages have to be sent when using anonymous demands, the advantage that

a) it is known exactly beforehand who will perform which change of encoding in which batch, and

b) therefore both the first and the second possibility are applicable,

is supposed to reduce the memory and message-comparison effort so much that the overall effort decreases nevertheless.

The procedure of anonymous demands is relevant where distribution for the protection of the receiver is not applicable because of narrowband subscriber lines, cp. §5.5.
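The role of the non-anonymous instance X might be sketched like this (a rough illustration under our own naming; transport, end-to-end encryption and the sender-anonymity scheme are abstracted away):

from collections import defaultdict

class InstanceX:
    # Buffers end-to-end encrypted answers under their classification number and
    # releases them only upon an anonymous demand carrying a one-time return address.
    def __init__(self):
        self.buffer = defaultdict(list)

    def deposit(self, classification_no: str, encrypted_answer: bytes):
        self.buffer[classification_no].append(encrypted_answer)

    def demand(self, classification_no: str, one_time_return_address):
        answers = self.buffer.pop(classification_no, [])
        return one_time_return_address, answers   # handed over to the MIX network

x = InstanceX()
x.deposit("id-1", b"end-to-end ciphertext")
ra, msgs = x.demand("id-1", "fresh anonymous return address")  # polled once in a while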


5.4.6.7 Short preview

If everything said before is minded, the worst consequence of a changing attack (or a fault) by a MIX is the loss of a message, but not the loss of the anonymity of the relations of communication. A loss of a message is attestable and provable by public access to the messages sent by the MIXes or – effective in case of link encryption of the connections, too – by messages signed with a MIX-specific signing key, cp. §5.4.7. Furthermore, the probability of losing a message due to the breakdown of a MIX (one failed MIX within the MIX sequence of a message is enough to lose it) can be decreased by appropriate fault-tolerance measures, as is shown in the already announced §5.4.6.10.

Because of the time delay that arises from waiting for messages, from checking for repetitions and re-sorting messages, as well as from the time needed for decryption in an asymmetric system of secrecy, the scheme of changing the encoding described so far is inapplicable for applications requiring short transmission times.

If this scheme is modified, as described for the first time in [Pfi1 85, page 25–29], one can switch anonymous channels that are sufficient, e.g., for the real-time requirements of telephone traffic; they will be described more precisely in §5.4.6.9. For this purpose a special message is transmitted, using the procedure described above, in order to establish the channel. This message delivers a key of a faster symmetric system of secrecy to each chosen MIX. These MIXes will then use this key for recoding the traffic on this channel. Changing the encoding in channels is useful if and only if at least two channels of the same capacity are established and released simultaneously by the same MIXes. If not only the relations of communication but also the sending and receiving of participant stations are supposed to be protected by changing the encoding, each participant station must be a MIX or must send a lot of meaningless messages. This was described in §5.4.3 and will also be described for the boundary conditions of narrowband subscriber lines in §5.5.1.

5.4.6.8 Necessary properties of the asymmetric system of secrecy and breaking the direct RSA implementation

In conclusion it is pointed out explicitly that the systems of secrecy used by the (middle) MIXes have to withstand what essentially amounts to a chosen-ciphertext attack (the only alleviation being that the MIXes do not put out the random bit chains). Since key pairs of the MIXes cannot be exchanged after each batch as soon as anonymous return addresses are used, the asymmetric system of secrecy employed even has to withstand an adaptive chosen-ciphertext attack.

Recall from §3 that so far no one-step system of secrecy exists that provably withstands such an attack merely under the assumption of the difficulty of a well-established standard problem and that would therewith be a canonical candidate for implementing the (middle) MIXes. Because there are multi-step asymmetric systems of secrecy that achieve this provably, one can gain


provably secure MIX implementations from them. But these are very costly because of the multi-step interaction between the sender (resp. whoever specifies the change of encoding) and each MIX passed through: if the middle MIXes MIX_1 to MIX_n are used, the exchange of the key pairs between sender and MIX_i, i > 1, has to take place through each MIX_j, 1 ≤ j < i [Bott 89, §3.3.1]. This squanders the main advantage of the MIXes over procedures for protecting traffic and interest data, viz. the comparatively small bandwidth needed between participant station and communication network. Therefore only implementations with one-step asymmetric systems of secrecy are explicitly treated in the rest of this book.

It was shown in §3.6.3 that RSA cannot resist a non-adaptive chosen-ciphertext attack without an appropriate redundancy predicate checked by the decrypter. It will be shown in the following that the use of RSA is absolutely insecure if no additional redundancy predicate is used and the random bit chains are contiguous (i.e. as described in [Chau 81, page 84]). This is shown, in order to keep the notation simple, for the direct scheme of changing the encoding of figure 5.24.

The attacker observes a message c(z, N) at the input of the MIX. He chooses a suitable factor f and submits the message c(z, N) · c(f) to the MIX, to be mixed in the same resp. a later batch. Since the perfect unlinkability of c(z, N) and c(z, N) · c(f) for arbitrarily chosen factors f was shown in §3.6, "practically" perfect unlinkability applies if merely "many" factors are possible, i.e. the MIX has no chance to recognize the active attack. It remains to show that the attacker receives something from the output messages of the corresponding batch that enables him to find out N. In the general notation this can be represented intuitively and approximately as follows: The MIX internally creates d(c(z, N) · c(f)) = (z, N) · f. If the attacker is able to choose f so that N · f arises at the position of the output message – in the general notation, (z, N) · f = (?, N · f) – his attack is successful: he just has to check which two messages of the respective batch(es) satisfy the equation X = Y · f. What arises at the position of the random bit chain is irrelevant, because it is neither checked nor output by the MIX.

Why this attack succeeds cannot be explained without using additional details of RSA. The details necessary for comprehension are that en- and decryption take place by exponentiation in a residue class ring. The modulus n = p · q characterizing the residue class ring, where p and q are prime numbers of at least 350 bit length each (for security reasons), forms together with the exponent used for encryption the publicly known cipher key. Random bit chains of length b and messages of length B fit into each block of this block cipher as long as 2^b · 2^B < n. In the description of the implementation by David Chaum, b is much smaller than B (as it is in every efficient implementation). If the random bit chains are placed at the more significant bit positions, then (z, N) = z · 2^B + N and (z, N) · f = z · 2^B · f + N · f.

From the congruence (z, N) · f ≡ z′ · 2^B + N′, which defines^{11} the values z′ and N′,

^{11} One assumes that on both sides of the defining congruence the smallest non-negative representatives of the residue class are taken. That means that in reality the congruence is an equation which, together with the restrictions resulting from the lengths of z′ and N′, determines the values of z′ and N′.


[Figure 5.32: Breaking the direct RSA implementation of MIXes – the attacker observes c(z, N) (with |z| = b and |N| = B), chooses a factor f and submits c(z, N) · c(f) mod n; the MIX decrypts with d and outputs N resp. N · f; the attacker multiplies candidate outputs by f and compares]

it follows that z · 2^B · f + N · f ≡ z′ · 2^B + N′, hence 2^B · (z · f − z′) ≡ N′ − N · f, and hence z · f − z′ ≡ (N′ − N · f) · (2^B)^{−1}. If the attacker chooses f ≤ 2^b, then −2^b < z · f − z′ < 2^{2b}. The attacker now inserts all output messages of the corresponding batch(es) for N and N′ into the formula z · f − z′ ≡ (N′ − N · f) · (2^B)^{−1} and checks whether z · f − z′ lies in this interval. Because b is much smaller than B, with high probability this is the case for only one pair of output messages. The first message N of this pair is the message with respect to which the MIX is supposed to be bridged by the active attack. The second message N′ is the output message corresponding to the input message created by the attacker.
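The attack can be demonstrated end to end with toy numbers (our own illustration; all parameters are far below realistic sizes, and the interval test is the one derived above):

# Multiplicative attack on the direct RSA implementation of a MIX (toy sizes).
import random

p, q = 1009, 1013                  # in reality p, q should be at least 350 bits each
n, e = p * q, 5
d = pow(e, -1, (p - 1) * (q - 1))

b, B = 4, 8                        # |z| = b random bits, |N| = B message bits

def mix_output(c):
    # The MIX decrypts and outputs only the message part N, never z.
    return pow(c, d, n) % (2 ** B)

# victim message (z, N) = z*2^B + N entering the MIX
z, N = random.randrange(2 ** b), random.randrange(1, 2 ** B)
c_victim = pow(z * 2 ** B + N, e, n)

# attacker: observe c_victim, choose f <= 2^b, submit c_victim * c(f) mod n
f = random.randrange(2, 2 ** b)
c_attack = (c_victim * pow(f, e, n)) % n

# output batch of the MIX: victim, the attacker's probe, and unrelated traffic
others = [pow(random.randrange(2 ** (b + B)), e, n) for _ in range(3)]
batch = [mix_output(c) for c in [c_victim, c_attack] + others]

inv2B = pow(2 ** B, -1, n)
for N1 in batch:                   # candidate for the victim's output N
    for N2 in batch:               # candidate for the probe's output N'
        t = ((N2 - N1 * f) * inv2B) % n
        if t > n // 2:
            t -= n                 # signed representative of z*f - z'
        if N1 != N2 and -(2 ** b) < t < 2 ** (2 * b):
            print("linked:", N1, "->", N2, "| true N was", N)

With these toy parameters 2^{2b+B} < n, so no modular reduction disturbs the correct pair and the test singles it out with overwhelming probability.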

If two or more pairs fulfill this condition, the attacker repeats his attack with another factor. In his second attempt, and in all following attempts, he performs the pairwise insertion only with those messages (of the first batch) that were not excluded by the previous tests. Thus the attack on the relations of communication of a message N can be executed with an effort quadratic in the batch size of N: in each attempt the attacker has to encrypt with RSA only once (as well as multiply the result by the observed input message and reduce modulo n), and to perform, for each message, one addition, two multiplications, two reductions modulo n, and two comparisons. It is described in [PfPf 90, page 376] how the effort of this attack can be reduced to O(batchsize · ld(batchsize)).

The assumption that b is much smaller than B is not necessary. A corresponding attack is already possible when B is so large that a choice of f ≤ 2^{B−1} results in unlinkability of (z, N) and (z, N) · f for the MIX to be bridged. The resulting interval −2^b < z · f − z′ < 2^{b+B−1} is already enough for the attacker to eliminate, in each step of his attack, a portion of the still-possible messages of the original batch. The effort for this attack increases by just a logarithmic factor.

If an adaptive attack is not possible, e.g. because the MIX changes its key pair after each batch, the attacker can obtain the same information as with his iterative attack by choosing several factors at once, creating correspondingly more messages, and passing



all these messages, together with the message with which he wants to bridge the MIX, in for mixing in the same batch.

It has not yet been examined whether the effort of this attack can be decreased further using the attacks on RSA published recently.

Everything shown here in examples using the direct scheme of changing the encoding for sender anonymity is also valid for indirect schemes, using whole message parts instead of (return) address parts. This is possible in all schemes of changing the encoding specified in [Chau 81] (cp. [PfPf 90, §3.2]). But it can be avoided by using appropriate schemes of changing the encoding, i.e. ones in which no directly but only indirectly recoded parts are observable. For this, the (symmetric) system of secrecy used for changing the encoding should work completely differently from RSA.

If the random bit chains are positioned at the less significant bit positions, an analogous and equally successful attack exists [PfPf 90, §3.2].

An analogous attack fails if, instead of one contiguous random bit chain, many single random bits are "interspersed" into the message at not too large distances and are sorted out by the MIX. A weaker attack is still possible, however, if either the random bit chains are shorter than the encrypted messages or a larger block of message bits is not interrupted by random bits [PfPf 90, §3.2]. Here a block of about 10 bits already counts as a larger block.

This weaker attack can easily be impeded by mixing the interspersed random bit chain and the encoded message, namely by encryption with a completely different (symmetric) system of secrecy (e.g. DES with a publicly known key), before encrypting with RSA.

An alternative approach is that the plain texts have to fulfill an appropriate redundancy predicate, cp. §3.6.3.4. If the plain text z, N decrypted by a MIX does not fulfill the redundancy predicate, the MIX will not put out any part of the message. To impede extensive trying – if one is not sure of the cryptographic strength of the redundancy predicate – the senders of incorrectly created messages can be identified by collaboration of all MIXes: each MIX_i specifies, for the message that is traced back, the explicitly encoded random bit chain z_i if a deterministic asymmetric system of secrecy is used (resp., if an indeterministic asymmetric system of secrecy is used, the random bit chain that was used; if MIX_i is not able to do this, the indeterministic asymmetric system of secrecy is inapplicable). The senders can be asked for explanation provided all transmissions of messages are public or all messages are authenticated by the actual sender. In all back-tracings of messages it has to be minded that when anonymous return addresses are used, only the generator is held responsible. The user of the return address must be able to stay anonymous, since otherwise communication partners could be identified by sending incorrectly created return addresses.

5.4.6.9 Faster transmission by using MIX-channels

The MIX network described so far is mostly suited for the unobservable transmission of single messages. But in principle the MIX network is also applicable for channels.


In the following it is shown how the delay caused by the MIX operations can be reduced for channels, and how the distribution of all messages (i.e. channels) can be avoided. The resulting MIX channels are not yet applicable under the constraints of narrowband ISDN, but they are the basis for the fully applicable time-slice channels (cp. §5.5.1).

In a sense, hybrid encryption realizes simplex channels:

The main part of each message N_i is encrypted only with a fast symmetric stream cipher. Since practical stream ciphers have en- and decryption speeds in the range of Mbit/s and allow en- and decryption of each bit immediately, the transmission of this main part is not delayed.

The beginning of each message N_i, i > 1, is delayed distinctly: The first block of the message is encrypted asymmetrically. For decryption, receipt of the whole first block must be awaited, and the actual decryption is comparatively slow if asymmetric systems of secrecy are used. The effort for re-sorting and for the repetition test of message beginnings has to be added.

Therefore the slow channel establishment is separated from the actual transmission of a message [Pfi1 85, §2.1.2] and realized by independent signaling messages.

Hereby the available bandwidth of the subscriber line is partitioned in both directions: a signaling part for establishing channels and a data part for the actual transmission of messages. The type of partitioning is not relevant for the following procedure; in this section the static partition into one signaling channel and several data channels used in ISDN is adopted.

In order to establish the channel, a MIX input message N_1, called channel establishing message, is sent over the signaling channel; it delivers the keys k_i, i > 1, needed for the symmetric part of the hybrid encryption to the MIXes. It is thus not distributed by M_m. For uniformity, k_1 is the abbreviation for the key k_{1A} agreed between the sender A and M_1.

Each MIX memorizes the assignment of the channel establishing messages from the input batch to the output batch. Each channel establishing message is assigned to an input channel of M_i according to its position in the input batch of M_i. The assignment of the channel establishing messages from the input batch to the output batch at the same time defines the switching from input channels of M_i to output channels of M_i, e.g. an assignment of input to output frequencies when frequency multiplexing is used. Therewith, by mixing the channel establishing message, a channel is defined through all MIXes, over which the actual message can now be transmitted.

Distributing all channels to all subscriber lines makes little sense, because it has to be assumed that each network termination is able to receive only a few channels – with ISDN, two. The problem of the receiver being unobservable with respect to the sender is therefore solved in another way, similar to the anonymous return addresses in [Chau 81].

At first the problem is eluded by equating sender and receiver: a channel can be used as a sending channel or as a receiving channel.


A sending channel is the precise analogue of hybrid encryption: the sender establishes the channel, continuously encrypts the information stream N as

k_1(k_2(... k_m(N) ...))

and transfers it to M_1. M_1 continuously decrypts using k_1 and transfers the stream, in the same way as the corresponding channel establishing message, to M_2. All MIXes act in the same way, so M_m finally outputs the plain text stream N.

A receiving channel is a sending channel used "backwards": The receiver establishes the channel (with the same type of channel establishing message). M_m receives from the sender the information stream N, which is not specifically encrypted for the MIXes (but of course end-to-end encrypted), encrypts it using the key k_m and leads k_m(N) "back" to M_{m-1}. The other MIXes do the same, so M_1 outputs the encrypted stream k_1(... k_m(N) ...). Since the receiver knows all keys k_i, he is able to recover N.
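Both channel types can be sketched with a toy XOR stream (our own illustration; a real system would use a cryptographic stream cipher, cp. §5.5.2):

import hashlib

def stream(key: bytes, length: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def xor(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ s for a, s in zip(data, stream(key, len(data))))

keys = [b"k1", b"k2", b"k3"]       # k_1, ..., k_m from the channel establishing message
N = b"information stream"

# sending channel: the sender sends k_1(k_2(k_3(N))); each M_i strips its layer k_i
c = N
for k in reversed(keys):
    c = xor(k, c)                  # sender-side layering
for k in keys:
    c = xor(k, c)                  # M_1, M_2, M_3 each decrypt continuously
assert c == N                      # M_m outputs the plain text stream

# receiving channel: M_m, ..., M_1 each add a layer; the receiver strips all of them
c = N
for k in reversed(keys):
    c = xor(k, c)                  # M_3, then M_2, then M_1 encrypt "backwards"
for k in keys:
    c = xor(k, c)                  # the receiver knows all k_i and recovers N
assert c == N

(With XOR the layers commute, which keeps the toy short; the per-hop structure is the same with a real stream cipher.)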

Sending and receiving channels are completely symmetrical regarding their unobservability: Just as the sender is able to use his sending channel without being identified by the receiver, the receiver is able to use his receiving channel without being identified by the sender. Conversely, the receiver of a sending channel is observable by the sender, and the sender of a receiving channel is observable by the receiver.

In order to combine both advantages and exclude both disadvantages, the actual MIX channels are created as links of sending and receiving channels (figures 5.33, 5.34 and 5.35): The sender establishes a sending channel ending at M_m, and the receiver establishes a receiving channel starting at M_m. M_m diverts the information stream arriving on the sending channel to the receiving channel.

The channels to be linked are specified by a common channel flag which M_m receives consistently in both channel establishing messages. It must of course be known to both partners, and since one can hardly expect statically agreed relations established outside the network, the flag must be chosen by the calling partner R and sent to the called partner G within a channel request message (for logical reasons together with a key for end-to-end encryption and the information whether R wishes to send or to receive). In figure 5.33, R was arbitrarily assumed as sender and G as receiver of the MIX channel. It is possible, of course, to combine the channel request message and the channel establishing message of the caller into a single signaling message.
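M_m's linking step might look like the following minimal sketch (a table keyed by the channel flag s_RG; the class and all names are hypothetical):

class LastMix:
    # M_m links a sending channel and a receiving channel whose channel
    # establishing messages carry the same channel flag s_RG.
    def __init__(self):
        self.pending = {}          # s_RG -> (direction, channel id)
        self.links = {}            # sending channel id -> receiving channel id

    def establish(self, s_RG, direction, channel_id):
        other = self.pending.pop(s_RG, None)
        if other is None:
            self.pending[s_RG] = (direction, channel_id)
        elif {other[0], direction} == {"send", "recv"}:
            send_ch = channel_id if direction == "send" else other[1]
            recv_ch = channel_id if direction == "recv" else other[1]
            self.links[send_ch] = recv_ch   # divert the arriving stream

mm = LastMix()
mm.establish("s_RG", "send", 7)    # from R's sending channel establishing message
mm.establish("s_RG", "recv", 12)   # from G's receiving channel establishing message
assert mm.links == {7: 12}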

Channel request messages and sending and receiving channel establishing messages represent the three types of signaling messages of the procedure. All signaling messages are transmitted using the basic scheme explained in §5.4.6.2, except for the channel request messages, which are distributed.

Since the size of the input batches decisively influences the effort of the MIXes, it is useful to mix the messages of the three different types separately. The unobservability is not limited by this.

[Figure 5.33: Processing of the channel request message sent by R – channel flag s_RG, key k_RG, channel direction R→G, sent via M_1, ..., M_m to G]

[Figure 5.34: Processing of channel establishing messages – receiver channel establishing message by G: every M_i knows/gets key k′_i, M_m gets channel flag s_RG; sender channel establishing message by R: every M_i knows/gets key k_i, M_m gets channel flag s_RG]

[Figure 5.35: Connecting channels – the channel from R to G is connected by M_m on the basis of a common s_RG]

To minimize the search for repetitions, either the time stamps of the MIX input messages have to identify the batch as well as the message type, or the MIXes have to use different keys for changing the encoding of the three different message types. Specifying message types causes less synchronization effort for the network termination, but on the other hand a strictly synchronous operation is presupposed anyway and the signaling channel represents the main bottleneck of this procedure; hence the second possibility is assumed in the following.

As in the basic scheme, the problem of observing the sender of a sending channel is solved by having the network terminations maintain the same number of sending channels (if necessary pseudo sending channels) at any time.

Without distribution, the receiving on a receiving channel is now observable. Therefore each network termination has to maintain the same number of receiving channels (if necessary pseudo receiving channels) at any time, too.

The equal length of all channels is guaranteed by the MIXes: if nothing is transmitted on a sending or receiving channel, each MIX M_i bridges this by sending meaningless data.

Thereby it is particularly avoided that the sender identifies the receiver by transmitting no information on the sending channel, thereby causing the receiver to receive nothing at all. That is the reason why, in contrast to [Pfi1 85], the receiver establishes the receiving channel himself.

It is also prevented that pseudo receiving channels attract attention through missing data.

A duplex channel is established by two simplex channels running in opposite directions. Both channels could be established by the caller with the same channel establishing message. This would save transmission capacity, of course, but in the following it would prove obstructive; therefore from now on two completely separate MIX channels are assumed, though a common channel request message suffices.

In §5.4.1.3 it was described in general how a changing attack by a sender on the corresponding receiver can be prevented. In [PfPW1 89, §2.2.3] this is done for the situation specifically discussed here.


Since the changing attack by a receiver on the corresponding sender cannot be prevented, as described in [PfPW1 89, §2.2.4], and is also possible for MIX channels, one could argue that this measure is completely senseless for duplex channels, because the attacker is able to attack his partner as sender as well as receiver.

For practical reasons the prevention of the attack on the receiver is useful nevertheless, because that attack would be much easier than the attack on the sender: instead of actively disturbing the sending, as described in [PfPW1 89, §2.2.4.2], the sender merely has to be able to observe all receiving channels entirely. Moreover, this attack on the receiver succeeds immediately, while the attack on the sender represents just a test which has to be repeated logarithmically often in the number of senders in order to find the sender.

Since the usually costliest tasks of a usual MIX, viz. sorting of messages and detecting repetitions, are dropped for MIX channels – just a simple change of encoding and a switching of input to output channels takes place – the effort of the MIXes for managing the MIX channels is negligible. Changing the encoding of the data using the symmetric stream cipher hardly carries weight, because it can take place at a much higher data rate than the transmission (cp. §5.5.2).

This procedure could not yet be used under the constraints of narrowband ISDN. Besides the consideration that it could not be executed homogeneously for all network terminations at once but only within certain anonymity groups^{12}, there is another fatal problem. As for the messages in §5.4.6.1, for MIX channels it also holds that:

• Each network termination has to contribute the same number of channel request messages and sending and receiving channel establishing messages to each channel request input batch of M_1, and maintain a correspondingly equal number of sending and receiving channels.

• Messages of one batch have to be of the same length, which here means especially that channels established at the same time have to be released at the same time.

Above all this would mean that each network termination has to wait with the release of a channel (even if it is a pseudo channel) until the longest channel established at the same time has ended. Since only two 64-kbit/s channels are available to a participant with a basic access in narrowband ISDN, this would be unreasonable.

This limitation is eliminated by the procedure described in §5.5.1.

5.4.6.10 Fault–tolerance in MIX–nets

Error correction methods other than end-to-end time limits with repeated transmission upon exceeding the time limits are necessary in MIX networks without distribution of the "last" information unit (cp. §5.4.6.3), because this error correction method does not work satisfactorily there.

^{12} "Group" is meant colloquially here, i.e. a "set of instances that are connected by a common property". In particular, "group" is not to be understood in the mathematical sense of "group" (closed associative operation, existence of a neutral element and inverse elements). In a mathematical sense one would have to speak of "anonymity sets" instead of "anonymity groups".


There are no useful time limits for electronic mail, for example. Furthermore, when anonymous return addresses are used, the original generator would have to generate a new return address as replacement for an address that became unusable by the loss of a MIX – the user of the return address cannot do that (cp. §5.4.6.3). This problem can be avoided by the procedure of anonymous demands instead of "normal" anonymous return addresses (cp. §5.4.6.6).

For these well-motivated fault-tolerance procedures, and for procedures to be derived in the following subsections, a common graphical notation is used, which can be seen in figure 5.25. The transmission of a message (as an example of a unit independent from others) from the sender S over 5 MIXes to the receiver E, the keys applied and who knows them, as well as the order of encryption, transmission and decryption are represented. The simplest scheme, the direct scheme of changing the encoding, is depicted as in figures 5.24 and 5.25.

Because no information units are depicted, figure 5.25 can be understood as a representation of the (abstract) scheme of encryption, transmission and decryption. Equally, figure 5.25 can be seen as a representation of the indirect scheme of changing the encoding – for sender as well as receiver anonymity – showing only the message-independent keys and the en- and decryptions, because no encryption structures are shown precisely.

In figure 5.25 the serial character can be seen better than in figure 5.24: if only one MIX on the way between S and E fails, the recoding and the transmission of the message cannot proceed.

As explained in §5.4.9 and apparent from figure 5.40, the MIX network is based on a communication network whose layers 0 (medium), 1 (physical) and 2 (data link), as well as the lower part of layer 3 (network), can be realized regardless of the demands of anonymity and unlinkability. As explained already, this implies that these layers of the communication network can use conventional fault-tolerance measures without endangering the anonymity and unobservability of network participants. Thus transient as well as permanent faults within these layers can be tolerated by them.

MIXes themselves have to be, and can be, built in such a way that faults (or changing attacks) cannot cause a MIX to put out its decryption key, to mix units of information more than once, or to put them out in a wrong order. It is acceptable, however, for a MIX to forget its key and discontinue its service completely (fail-stop service [ScSc 83]) in case of a fatal error (or a changing physical attack). In order to have to tolerate faults externally as rarely as possible, the MIX can be built internally fault-tolerant, as long as the conditions mentioned above are kept in case of faults or changing attacks.

Transient errors in the MIXes that are externally observable are recognized by end-to-end protocols between sender S and receiver E and are tolerated by repeated sending. (In order to increase performance, an additional intermediate step with a MIX-to-MIX protocol is possible.) Thereby only the externally observable permanent errors in MIXes remain. These errors can be recognized resp. localized by the usual


techniques, e.g. the loss of a MIX by time limits, resp. the diagnosis of which MIX has failed by the absence of a life signal that must be emitted cyclically (a message with date and signature, an “I’m alive” message). Permanent loss of MIXes requires appropriate methods to remedy the errors. There are three possibilities in principle:

The first one, described in [Pfi1 85, §4.4.6.10.1.1], requires no coordination of the MIXes. But it represents an end-to-end protocol, which is, as mentioned, adverse to unlinkability and therewith to anonymity and unobservability.

In contrast, coordination between MIXes is necessary in the other two possibilities, and therefore an error correction that does not work end-to-end becomes possible, because in these cases the MIXes are able to substitute other MIXes. How this is supposed to happen and what must be minded is treated in [Pfi1 85, §5.4.6.10.2].

Specifics of switched channels are treated in [Pfit 89, §5.3.4] for all three possibilities; there a quantitative rating of all three possibilities is given as well.
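As an illustration of the failure diagnosis just mentioned, the following minimal sketch (in Python, like all code sketches in this section) derives a set of failed MIXes from the arrival times of cyclic life signals. The function name, the tolerance factor and the use of a monotonic clock are illustrative assumptions of this sketch, not prescribed by the text.

import time

def failed_mixes(last_alive, period, tolerance=3, now=None):
    # last_alive: MIX identifier -> time of the last "I'm alive" message;
    # checking the date and signature of the message is omitted here,
    # although the text requires the life signal to be signed and dated
    if now is None:
        now = time.monotonic()
    # a MIX is diagnosed as failed if its life signal stayed away for
    # several periods (the time limit mentioned in the text)
    return {mix for mix, t in last_alive.items() if now - t > tolerance * period}

# example: MIX "B" has been silent for five periods and is diagnosed
print(failed_mixes({"A": 100.0, "B": 80.0}, period=5.0, now=105.0))  # {'B'}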

5.4.6.10.1 Different MIX–sequences

In the first possibility, the sender tries to repeat a transmission on another way (figure 5.36), i.e. over a disjoint MIX sequence (also known as dynamically activated redundancy, [EcGM 83], a bit shorter also in [BeEG 86]). This solution is easy, but slow (as is dynamically activated redundancy in general), because the end-to-end protocol has to detect the error first in order to start a second transmission, possibly without knowing which MIX has failed.

In a MIX network without distribution of the “last” information units and without anonymous calling up, anonymous return addresses with a long validity are necessary to guarantee receiver anonymity, according to §5.4.6.2. With anonymous return addresses of long validity, the error tolerance procedure mentioned above is very complex if there is no temporal relation between messages and responses, as is the case with electronic mail, for example. The receiver E is not able to try another way of transmission if the received return address fails (at least one MIX in its sequence is defective). This also applies if two or more return addresses with long validity are always exchanged (statically generated redundancy) and one MIX is defective in each sequence. The original sender S (more exactly: his participant station) would have to keep sending new return addresses to E (more exactly: to his participant station) as long as he does not receive a response from E. This would be completely superfluous in most cases, because E may simply not yet have had the time to answer. Alternatively, S can inform himself about which MIXes have failed in the meantime 13. Then he would send new return addresses to E only if all return addresses with long validity sent to E have failed. In the latter case it is also possible that, after an unsuccessful attempt by E to use them, the return addresses are permanently unusable for E.

This first possibility of error tolerance boils down to a two-phase concept which includes all stations: in one phase information units are transmitted anonymously, in

13 The same behavior of all participant stations and a consistent distribution of information about which MIXes have failed is important here, too. Otherwise attacks on the anonymity of the owner of the return address would be possible. Analogous attacks on the anonymity of the receiver can take place, according to the attack described in §5.4.1.1, second item.


the other phase errors are tolerated, e.g. in case of a failed MIX by sending new anonymous return addresses which do not need the failed MIX on the way from sender to receiver (also known as dynamically generated, dynamically activated redundancy). Since, for reasons of mutual anonymity of sender and receiver, all addresses must be anonymous return addresses (§5.4.6.4) in a MIX network without distribution of the last information units to all stations and without anonymous calling up, this new distribution of addresses necessarily requires an indirection step over non-anonymous places for the distribution of addresses (e.g. the already mentioned address directories with anonymous return addresses, §5.4.6.4): after a MIX has failed, all stations are informed. Each station then sends a list of its anonymous addresses that became unusable, an anonymous replacement address, and contingently a second anonymous address, to the non-anonymous places for the distribution of addresses, using a scheme for sender anonymity over the MIXes that are still working.

In a MIX network with distribution of the “last” information units to all stations, anonymous return addresses are superfluous, because the receiver is completely protected by the distribution and the use of normal implicit addresses. Thus in such a MIX network there is no need to exchange new addresses when a MIX fails. Of course a list of all failed MIXes should be distributed to all stations, in order to guarantee that those stations won’t use these MIXes in the scheme for sender anonymity.

This is also valid for the procedure of anonymous calling up.

A variation of the procedure of end-to-end protocols with statically or dynamically activated redundancy, which would save time in case of an error, consists of executing each anonymous information transfer in parallel over several disjoint MIX sequences (statically or dynamically generated, statically activated redundancy).

A disadvantage of any end-to-end error correction is that statistical attacks using the sending rates of information units are possible, as already mentioned at the beginning of this chapter. These attacks could cause a linking of traffic events. This is especially significant in MIX networks without distribution of the “last” information units and with individual choice of the end-to-end time limits (more exactly: of the intervals of the time limits, within which a reaction takes place at a random point in time), resp. with individual choice of the number of information transfers that are executed in parallel.

If statistical attacks using the sending rates of information units can be ignored, the following protection criterion applies:

Protection criterion: Different MIX sequences are as secure as the original schemes (cp. §5.4.6), as long as at least one MIX in a used MIX sequence is not controlled by an attacker. Here it is unimportant whether the non-controlled MIX mixes correctly or (e.g. because it has failed) does not mix at all.

5.4.6.10.2 Substitution of MIXs

To facilitate an error correction that does not work end-to-end, and thereby to avoid all disadvantages of the first possibility, in the second and third possibility the MIXes are put in a position to substitute other MIXes.

277

5 Security in Communication Networks

Figure 5.36: Two additional alternative ways over disjoint MIX sequences (sender S, MIX1 to MIX15 forming three disjoint sequences of five MIXes each, receiver E)

This is advantageous in particular for availability, but causes the coordination problem which will be described in the following subparagraph.

The possibility of substituting MIXes has to be created statically (statically generated redundancy) – with dynamically generated redundancy there would be no need to substitute MIXes: a completely different MIX sequence could be chosen, as described in §5.4.6.10.1.

In general, statically generated redundancy can be activated either statically or dynamically.

A static activation regarding the substitution of a MIX is thereby nothing other than a distributed, internally fault-tolerant implementation of a MIX (cp. §5.4.6.10): the function of sending the input to all distributed parts of a MIX is delegated to the preceding MIX (resp. to an underlying communication network). The function of collecting the outputs of all distributed parts and creating the final result is delegated to the subsequent MIX. Coordination of all distributed parts of a MIX cannot be delegated (cp. §5.4.6.10). Coordination has to ensure that all parts that have not failed work on the same input information units in each batch. Otherwise a changing attacker could cause differences between the input and output information units of single parts of the MIX and use them for bridging the distributedly implemented MIX regarding the protection of the communication relation. This succeeds if intersections or differences of the batches have cardinality 1, cp. §5.4.6.

A static activation regarding the substitution of several MIXes in sequence is problematic regarding the protection of communication relations, because

• the repeatedly mentioned coordination problem exists for the MIXes of the sequence at each stage,

• an information unit can only be protected, regarding communication relations,


together with those information units that pass through the MIX sequence at the same time, i.e. serially through the same MIXes and in parallel through equivalent sequences.

A static activation regarding the substitution of several MIXes in sequence is only possible if static cascades of MIXes are given (cp. [Pfit 89, §4.2.3.1]).

Because both possibilities of statically generated, statically activated redundancy seem very costly on the one hand, and on the other hand are uninteresting regarding conceptual leeway, only statically generated, dynamically activated redundancy is treated in §5.4.6.10.2.2 and §5.4.6.10.2.3.

5.4.6.10.2.1 The coordination problem

In the MIX network as described in §5.4.6, each MIX was responsible for mixing information units at most once per key pair, as long as it owns it. Otherwise an attacker could bridge a MIX regarding an information unit the MIX mixes twice, because this information unit will, with high probability, be the only information unit that appears twice in the corresponding outputs of the MIX. If MIXes are put in the position to substitute another MIX, this responsibility is transferred to the team formed by that MIX and all MIXes which could substitute it. Thereby it is important that this responsibility is honored even if an arbitrary subset of the team fails and the communication between the team members that are still working breaks down. In particular, information units that have already been mixed by failed or currently unreachable team members must not be mixed by a team member again – although an attacker will try to bring about exactly this case.

This task can be solved by a simple protocol between the team members. However, the effort of this protocol is considerable, because of the additional delay of messages caused by its execution, the additional communication between the MIXes, and the additional memory in the MIXes.

Inefficient coordination protocol [Pfi1 85, page 74]: Before a MIX mixes information units, it sends all its input information units to all team members that have not failed. The MIX mixes an information unit only after having received a confirmation from all members that they have not already mixed this information unit and will not mix it. A failed MIX receives from the other members all information units that have been mixed or are going to be mixed, before it starts mixing again.

To be able to execute this protocol, each MIX of the team has to save all information units which have been mixed by itself or by the other members. To keep the required memory bearable, the four methods described in §5.4.6.6 can be used for reducing memory and searching effort (exchanging the publicly known cipher keys of the MIXes more often, time stamps, an abbreviated hash function, saving and comparing only a certain part of a message).
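The following toy sketch illustrates the inefficient coordination protocol together with the abbreviated hash function as one of the memory-reducing methods. The class and helper names are invented for illustration, and the message passing is simulated by direct method calls instead of real (authenticated) communication.

import hashlib

def short_id(unit, length=8):
    # abbreviated hash function: only a prefix is stored and compared
    return hashlib.sha256(unit).digest()[:length]

class TeamMix:
    def __init__(self, name):
        self.name = name
        self.seen = set()   # ids of units mixed by itself or by team members

    def confirm(self, uid):
        # a member confirms only if it has not mixed this unit; by
        # confirming, it also promises never to mix the unit itself
        if uid in self.seen:
            return False
        self.seen.add(uid)
        return True

    def try_mix(self, unit, team):
        # before mixing, send the input unit to all working team members
        uid = short_id(unit)
        if uid in self.seen:
            return False
        if not all(member.confirm(uid) for member in team):
            return False
        self.seen.add(uid)   # mix only after every member has confirmed
        return True

# a unit already mixed by A is refused by every other team member
a, b, c = TeamMix("A"), TeamMix("B"), TeamMix("C")
assert a.try_mix(b"unit-1", [b, c])
assert not b.try_mix(b"unit-1", [a, c])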


If the method of the abbreviated hash function, or the method of saving and comparing only certain parts of messages, is applied first by the requester (and not by the answerers) in the coordination protocol described above, it reduces not only the effort for saving and searching but also the effort for communication.

Other methods for reducing the communication effort and the delay of the proper message are specific to the second and the third possibility and will therefore be described in §5.4.6.10.2.2 and §5.4.6.10.2.3.

The inefficient coordination protocol mentioned above requires that the MIXes know for sure which MIXes of the team have failed and which have not. Otherwise an attacker could try to isolate two groups from each other and to persuade each group that the MIXes of the other group have failed. If he succeeds, a successful attack would be easy. To avoid this even if the MIXes do not know for sure which MIXes have failed resp. have not failed, the coordination protocol above can be extended by the following rule:

Mixing takes place only if an absolute majority of the team is working (and able to communicate).

This can be guaranteed by the usual techniques of authentication, cp. §3. Here – as in the efficient coordination protocols described in the following – the requesting MIX is counted as active resp. positive when the absolute majority is ascertained.

5.4.6.10.2.2 MIXs with reserve–MIXs

The second possibility (of the three announced in §5.4.6.10) consists of the secret key of a failed MIX being known to one or more other MIXes which are, e.g., operated by the same organization (figure 5.37). Or this key can be reconstructed (by using a threshold scheme, cp. §3.9.6) by several MIXes that are operated by organizations that trust each other [Pfi1 85].
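Such a reconstruction of a MIX key by mutually trusting MIXes can be illustrated with Shamir's threshold scheme over GF(p); the concrete prime, the parameters t and n, and the function names are example choices of this sketch, not taken from the text.

import secrets

P = 2**127 - 1   # a prime defining GF(P); the key must be smaller than P

def make_shares(secret, t, n):
    # random polynomial of degree t-1 with constant term = secret
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = (num * -xm) % P
                den = (den * (xj - xm)) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

key = 1234567890
shares = make_shares(key, t=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of 5 reserve MIXes suffice

With fewer than t shares the attacker learns nothing about the key, which is exactly what makes the scheme attractive for mutually distrusting operators.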

Both methods facilitate a MIX-to-MIX protocol without changing the address scheme or the decryption scheme: if one or more MIXes have failed, the sender or the preceding MIX transmits the information unit to a member of the appropriate team. For this, information about failed MIXes or about the team structure is needed. Both can either be known globally or be communicated on request – the failure of a MIX, e.g., by a missing answer (receipt). We do not expand here on the likewise important information about the reachability of MIXes, which can be limited by the loss of parts of the underlying communication network.

Expressed in the terminology of fault tolerance, this is statically generated, dynamically activated parallel redundancy.

As announced, it is possible to limit the effort of the coordination protocol for the avoidance of multiple mixing of the same information unit:

Efficient coordination protocol for MIXes with reserve MIXes [Pfi1 85, page 75f]: Any additional delay of mixing can be avoided if each MIX agrees with its reserve MIXes that they will mix with its key only once it has failed.


Figure 5.37: MIXi can be substituted alternatively by MIXi′ or MIXi″ (i = 1, 2, 3, 4, 5); a coordination protocol runs within each team {MIXi, MIXi′, MIXi″} on the way from sender S to receiver E

Then it suffices that, at the latest when it puts out the output information units, the MIX sends its input information units to distinctly more than the absolute majority of the team members. If it stops mixing when it does not receive authenticated receipts from a majority of team members within a certain time interval, an attacker is at most able to bridge it by isolation regarding information units for which it has not yet received a receipt. Reserve MIXes whose failure can be proven (e.g. by authenticated communication with their communication processor) do not need to be counted when the (absolute) majority is ascertained. If a MIX has failed, the reserve MIXes can determine a substitute among themselves (and e.g. qualify it for the reconstruction of the MIX key if a threshold scheme, cp. §3.9.6, was used). The substitute acts in the same way as the MIX would have acted had it not failed, until it fails itself (whereupon the remaining reserve MIXes determine a new substitute) or the substituted MIX is repaired and starts mixing, synchronized with the end of the mixing of its substitute.

In my opinion, the risk that an attacker is able to bridge a MIX by isolation regarding information units for which it has not yet received a receipt is balanced out by far by the avoidance of any additional delay of mixing. Changing attacks that try to use this weakness of the coordination protocol can be detected easily (at least when they occur often) and have to intervene in many places in a synchronized way if information units pass through several MIXes each: to trace the way of an information unit from sender to receiver, the attacker would have to bring the active MIX of each team to failure, or to isolate it, after it has mixed the information unit.


A MIX should choose as reserve MIXes such MIXes as are more than 10 kilometres away from each other, to make sure that an error affects other MIXes as little as possible [Pfi1 85, page 76].

If an organization operates several MIXes that are spatially far away from each other, it seems very useful to let these MIXes work as reserve MIXes for each other.

Protection criterion: If the MIXes are coordinated in a suitable way, then MIXes with reserve MIXes are as secure as the original schemes (cp. §5.4.6), as long as at least one team whose MIX is used in the decryption structure is not controlled by an attacker. A team is not controlled by an attacker at least if he controls no team member; or, in case a threshold scheme is used, if he controls neither the proper MIX nor the substitute, controls fewer MIXes of the team than are needed for the reconstruction of the key (so that he never obtains the key), and does not control an absolute majority of the team (otherwise this majority would be able to establish a substitute without a failure of the currently mixing MIX and let both mix in parallel and uncoordinated).

5.4.6.10.2.3 Leaving out a MIX

The third possibility is to ensure that each MIX receives sufficient information in each message to be able to bridge, i.e. to leave out, a MIX (more generally: up to b MIXes, with an arbitrary natural number b) [Pfi1 85, page 77f].

In contrast to the second possibility, the third needs changes of the address schemes or decryption schemes to facilitate a MIX-to-MIX protocol. These changed address and decryption schemes are derived in this section.

For further variations, the reader is referred to [Pfit 89]. In [Pfit 89, §5.3.2.3.2], protection criteria characterizing the achievable anonymity of communication relations under these address and decryption schemes are established. In [Pfit 89, §5.2.3.3 and §5.3.2.3.4], proper and efficient coordination protocols for two possible modes of operation are developed: either only failed MIXes, i.e. as few as possible, are left out, or as many as possible. The latter complicates the coordination problem but allows a gain of efficiency in some cases.

To ease understanding, we begin with the first case, in which each MIX is able to leave out the next MIX in a sequence of chosen MIXes. In this case the error tolerance procedure can tolerate the failure of one or even more MIXes that are not adjacent to each other.

To leave a MIX out, its predecessor must be able to create not only the message meant for it but also the message meant for its successor (figure 5.38).

If each MIX receives the messages for its successor and for that successor's successor separately, the length of the message grows exponentially with the number of MIXes used. To avoid this, the sender of a message chooses (as in all schemes of changing the encoding, cp. §5.4.6) a different key for each MIX (e.g. of an efficient symmetric system of secrecy), with


Figure 5.38: One MIX each can be left out; the coordination protocols in the picture are executed within groups of minimal extent (sender S, MIX1 to MIX5 with keys c1, …, c5 and k1, …, k5, receiver E; the order of encryption, transfer and decryption is shown)

which this MIX decrypts the message meant for its successor. Each MIX receives this key and the key meant for its successor encrypted with its publicly known cipher key. The rest of the message it receives separately, encrypted with the non-publicly known keys. The addresses of the following two MIXes can either be prefixed unencrypted to each message, or be encrypted together with the two mentioned keys under the publicly known cipher keys of the MIX. This is described more formally in the following.

As in §5.4.6, let A1, …, An be the sequence of addresses and c1, …, cn the sequence of publicly known cipher keys of the chosen MIX sequence MIX1, …, MIXn, where c1 can also be a secret key of a symmetric system of secrecy. Let An+1 be the address of the receiver, called MIXn+1 for simplicity, and cn+1 his cipher key. Let k1, …, kn be the chosen sequence of non-publicly known keys (e.g. of a symmetric system of secrecy). The sender creates the messages Ni that MIXi will receive according to the following recursive scheme, starting from the message N that MIXn+1 is supposed to receive:

Nn+1 = cn+1(N)    (Scheme 1)

Nn = cn(kn), kn(An+1, Nn+1)

Ni = ci(ki, Ai+1, ki+1), ki(Ai+1, Ni+1)    (for i = 1, …, n−1)

This scheme is not the only suitable one. For instance, the following scheme fulfills the same purpose:

Nn+1 = cn+1(N)    (Scheme 2)

Nn = cn(kn, An+1), kn(Nn+1)

Ni = ci(ki, Ai+1, ki+1, Ai+2), ki(Ni+1)    (for i = 1, …, n−1)


The sender sends N1 to MIX1.
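A purely symbolic sketch of Scheme 1 may ease understanding. Encryption is modelled as an opaque tagged tuple rather than a real cipher, and the dictionaries A, c and k stand for the address and key sequences defined above; all names are assumptions of this sketch.

def enc(key, plaintext):
    # stands in for encryption with the given (cipher or symmetric) key
    return ("enc", key, plaintext)

def scheme1(A, c, k, N, n):
    # A: addresses A[1..n+1]; c: cipher keys c[1..n+1]; k: symmetric keys k[1..n]
    msgs = {n + 1: enc(c[n + 1], N)}                       # N_{n+1} = c_{n+1}(N)
    msgs[n] = (enc(c[n], k[n]),                            # c_n(k_n)
               enc(k[n], (A[n + 1], msgs[n + 1])))         # k_n(A_{n+1}, N_{n+1})
    for i in range(n - 1, 0, -1):
        msgs[i] = (enc(c[i], (k[i], A[i + 1], k[i + 1])),  # c_i(k_i, A_{i+1}, k_{i+1})
                   enc(k[i], (A[i + 1], msgs[i + 1])))     # k_i(A_{i+1}, N_{i+1})
    return msgs                                            # the sender sends msgs[1]

n = 3
A = {i: f"A{i}" for i in range(1, n + 2)}
c = {i: f"c{i}" for i in range(1, n + 2)}
k = {i: f"k{i}" for i in range(1, n + 1)}
N1 = scheme1(A, c, k, "N", n)[1]

From msgs[i], MIXi obtains ki, Ai+1 and ki+1 from the asymmetric part; with ki+1 it can also open the symmetrically encrypted part of Ni+1, which is exactly what enables it to leave out MIXi+1.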

In both schemes, MIXi is able to compute the messages Ni+1 and Ni+2 as well as the addresses Ai+1 and Ai+2 from Ni.

For some coordination protocols that will be discussed in the following it is important, too, that MIXi is able to check whether its predecessor MIXi−1 has changed any bit of the message Ni (and correspondingly no one else; MIXi−1 learned most about the message and was therefore able to act most purposefully).

As long as the systems of secrecy are not broken 14, an attacker can only change, in a targeted way, those parts of a message for which he knows the applied key. Otherwise the message would simply get lost, or be ignored by a station where it is recognized as incorrect with the help of an error-detecting code. In both cases the attacker would not receive any helpful information. Thus MIXi−1 has the possibilities either to substitute the whole part that was encrypted with ci (without being able to read the content of the original part, because he knows ci but not di), or to manipulate only the part of the text which was encrypted with ki.

The danger here is that he could try to change addresses. This is not critical for the protection of communication relations as long as addresses are only used for the transmission of messages, because it is assumed anyway that the attacker controls all lines. But it can become critical when the messages specify which MIXes have to coordinate with each other to ensure that no message is mixed twice.

In both schemes the attacker would have to change the part encrypted with ci in order to change an address. To do so, he would have to invent a new key ki. But this would be recognized at the decryption of the part encrypted with ki+1, viz. ki+1(Ai+2, Ni+2) in scheme 1 or ki+1(Ni+2) in scheme 2. Therefore these parts have to contain enough redundancy to ensure that MIXi is able to detect this.

If a coordination protocol is used where this check is not possible, Ai+1 can be left out in scheme 1.

The choice between scheme 1 and scheme 2 (i.e. whether Ai+2 is contained in the first part, encrypted with a costly asymmetric system of secrecy, or in the second part, encrypted with a less costly symmetric system of secrecy) depends on the asymmetric system of secrecy used: if the length of the information units encrypted with the asymmetric system has, for security reasons, to be so large (as is the case with RSA, cp. §3.6) that Ai+2 fits in anyway, it is more useful to use scheme 2, where Ai+2 takes the place of what in scheme 1 is filled up with random bit strings. For it is trivially less costly to encrypt Ai+2 with the more costly asymmetric system without additional effort than to en- and decrypt Ai+2 with the less costly symmetric system with additional effort. If Ai+2 contains any redundancy, attention must be paid that the entropy (i.e. the information content that is random and therefore unknown to the attacker) of ki, Ai+1, ki+1, Ai+2 does not become too small. But since the symmetric system of secrecy

14 The systems of secrecy are assumed here to accomplish more than they are required to, in order to keep the considerations simple, cp. §3 and exercise 3–7d). If the systems of secrecy do not accomplish this, they must be expanded. This can happen, e.g., by using a collision-resistant hash function, cp. §3.6.3.4.


must defeat a complete search over the key space (cp. §3), the key length must be too large for such an attempt anyway (which is the case with a length greater than 100 bits); thus ki as well as ki+1 contain enough entropy.

The interested reader can find many more encryption schemes and a quantitative evaluation in [Pfit 89].

5.4.7 Tolerating altering attacks on RING-, DC- and MIX-Networks

In a communication network, every error can be considered an altering attack: errors “by mistake” are a subset of all (physical and logical) altering attacks. So far, all errors have been handled in a manner that keeps anonymity under all circumstances. An attacker may now permanently create errors, e.g. by recrypting messages with wrong keys, by preventing transmission, by sending his own messages although the protocol for anonymous multiple access would not allow it, or simply by superposing invalid keys, in order to achieve a denial of service. This problem of altering attacks on availability can be solved in two ways [MaPf 87, page 403f].

On the one hand, after every error it can be checked whether there was an altering attack (and if so, by whom). Because the methods for that are quite expensive and do uncover the sender and receiver of the message, it is better to act only on suspicion (such as an abnormal error rate). To discover altering attacks, all basic techniques require that stations can uncover their own activities:

• In a MIX network, every MIX needs to disclose all incoming and outgoing information units from a certain point of time on – as far as this is not already done by the communication network. Then everybody can check whether a message sent by him was incorrectly encrypted or decrypted by the MIX, or was not forwarded at all. To prove this to a third party, the contexts of the affected information units need to be uncovered 15. But it is not necessary to reveal the relations of all information units, so this measure causes considerable effort but does not endanger the anonymity of the correctly transmitted information units [Chau 81, page 85].

• In a DC network, every station must reveal all key tokens (but not the initial value of the PRNG, if one is used) and the tokens it sent, for those tokens on which an altering attack is suspected [Cha3 85][Chau 88] (see the sketch after this list).

15 Whoever wants to prove the decryption of an information unit (that is, whoever encrypted it) can do it this way: he announces the random number used in the encryption of the information unit. Then everybody can verify (if a deterministic asymmetric concealment system was used) whether the encrypted information unit combined with the given random number is equal to the MIX’s input. In that case, and if the information unit does not appear as an output of the MIX, the MIX has performed an altering attack – unless it can prove that it has processed the information unit in a previous batch, so that it has justifiably ignored a recurrence. With an indeterministic asymmetric concealment system, the encrypter has to announce all other random information as well. The encryption is then deterministic, too, so that the right output information unit can be deduced without announcing the decryption function.


• In RING and TREE networks, every station must reveal the tokens it sent, for those tokens on which an altering attack is suspected.
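For the DC network item above, the following sketch shows what the revealing of key tokens allows everybody to verify: the published output of a station for the suspected token must equal its message bit superposed (mod 2) with all its revealed key bits. The function names and the bit-level modelling are illustrative assumptions of this sketch.

def local_output(message_bit, key_bits):
    # superposed sending: the station's output is its message bit XORed
    # with one key bit per key partner (all arithmetic is mod 2)
    out = message_bit
    for kbit in key_bits:
        out ^= kbit
    return out

def check_revealed(published_output, revealed_message_bit, revealed_key_bits):
    # anyone can recompute the output from the revealed tokens; a mismatch
    # exposes the station (or one of its key partners) as lying
    return published_output == local_output(revealed_message_bit, revealed_key_bits)

# a station with key bits 1 and 0 and message bit 1 published output 0
assert check_revealed(0, 1, [1, 0])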

In DC networks as well as in RING and TREE networks, it is the job of a globally agreed revealing technique to ensure:

A) that all tokens can be revealed – otherwise one could hamper certain tokens without notice;

B) that no token is revealed that could undermine the sender anonymity of directly sensitive messages, or of messages that can be related to those.

The latter demands the use of a reservation technique as the multiple access technique.

In [Cha3 85][Chau 88], (only) the following combined reservation and revealing technique for the DC network is included:

• All stations reserve exactly one transmission frame in every round. If there are not as many transmission frames reserved as there are stations participating in the DC network, all reservation bits are revealed and it is checked whether every station has sent exactly one reservation bit. Because all stations reserve exactly one transmission frame per round, the uncovering gives no information about the stations that stuck to the reservation scheme.

• Before the reservation, every station sends a possible uncovering claim, encrypted with a block cipher. That claim consists of the bit position of its reservation and the message to be sent. If the station is hampered during sending and wants the interference to be revealed, it publishes the key that encrypted its possible uncovering claim; this makes the possible uncovering claim a real uncovering claim. Because this undermines the sender anonymity of the message, a station will only do so if it has nothing sensitive to send – the message then is a meaningless message or a meaningful trap for the interferer.

David Chaum’s revealing method does not accomplish B) at all, and – depending on the reader’s presumption about the stations’ behavior 16 – A) only in a complexity-theoretic way:

• The attacker can accomplish the revealing of any desired tokens sent before, by publishing possible revealing requests that are not related to his messages. However, if the reservation bits are also revealed, the attacker is exposed.

• If the attacker can break the block cipher used for the revealing method, he can

a) hamper without becoming exposed

16 [Cha3 85][Chau 88, page 72] does not mention whether stations will send possible revealing requests for valid messages too, and whether those potential but never published revealing requests have the right bit positions and messages. As will be seen, at least the first seems useful; besides, it eases the description.


b) reveal picked tokens after sending

depending on the behavior of the reserving stations and the block cipher used.

a) holds if the stations do not always send a possible revealing request, or if wrong bit positions are used for valid messages and the block cipher used (like DES) excludes some mappings from plaintext to ciphertext. The latter is always the case if (2^(block length))! > 2^(key length) (cp. §3).

b) holds true even after the attacker has sent his revealing request if the block cipher allows all possible mappings from plaintext to ciphertext. Otherwise the attacker can only pick the token(s) prior to the sending.

With an unfavorable choice of the stations’ behavior and of the block cipher, both a) and b) hold, the latter even with a subsequent choice by the attacker.

After this it is clear that a) can be avoided by the execution of the combined reservation and revealing method (as described above), as far as the attacker cannot sign meaningful or meaningless messages. In [WaPf 89][WaPf1 89], revealing methods are shown that do not have the weaknesses of b); the claims A) and B) can be provably achieved even in the information-theoretic setting. It should be a goal to expand those results to other classes of multiple access systems (or to find the proof that this is not possible).

Revealing methods for the DC network can be canonically transferred to RING and TREE networks.

It should be pointed out that, to make revealing possible (especially in RING and TREE networks), a considerable additional effort for saving the information units is needed (even if no “errors” occur).

In the rest of this section it is shown how to react to disputes about revealing so that the “guilty” one is penalized.

If the transmissions on all connections are known (i.e. if a broadcast medium is used), the altering attacker can be discovered without doubt in a MIX network (in RING and TREE networks as well, but it is their main feature that nobody knows all transmissions on all connections!). In DC networks it is different, because the keys used by the stations must be known to decide who has sent. If not all transmissions over the connections are publicly known, the same effect can be achieved by every station signing the information units it sends. Even though the signing and the storing of the signatures is quite an effort, it makes it possible to determine what exactly was sent. (A special case occurs if both stations on one connection are attackers; then, of course, it is not possible to trace the communication.)

If in a MIX, RING, TREE or DC network not all transmissions over the connections are known, and if not every information unit is signed, it is possible to identify three groups of stations. The first one neither is accused nor accuses others, while for the second and third group one of these holds. One of them contains the altering attacker; the other one is not guilty but accused by the attacker. It is assumed that there is only one attacker in the network, who may have corrupted several stations, so that contradictions only occur in the attacker’s environment. The attacker only has a chance to remain covered if he accuses only a few other stations. Otherwise the majority of the accused


but innocent stations would stand against him. In the undecidable case that a few stations accuse each other, one continues as usual, but reconfigures the stations so that they will not be able to accuse each other again. That can be achieved as follows:

• In a MIX network, other connections (addresses) are used, i.e. every sender creates addresses in such a way that no message is passed between these two MIXes.

• In a DC network, no keys are exchanged any more between the two affected stations (present keys are removed). The alternative would be that both partners sign their common key before usage, so that each one has the other’s signature. In case of dispute, a third party can decide (after examining the signatures) who is wrong [Cha3 85]. The drawback of directly signing the shared key is that each of the two stations can not only show but prove the key towards a third party (even without the cooperation of the other one), by showing the signature of the other one (one’s own signature is not useful, because one could easily sign another, false key). That is why [Chau 88, p. 73f] proposes not to sign the shared key, but to sign two parts which, summed up, give the shared key. To prove the key towards a third party, both partners are then needed, because for each part the signature of the partner who signed it is required 17.

• In RING and TREE networks, the rings and trees are reconfigured. This is easy if the topology is set up as a virtual ring or tree over a real network. If the network topology is fixed to the physical topology, reconfiguration is pretty hard.

If one participant appears several times in a group that accuses others or is accused itself, it is considered an altering attacker and is therefore excluded from the network. This technique is only practical if relatively few participants are altering attackers; otherwise the altering attackers could cooperate and exclude the correctly communicating stations from the network (though, of course, each exclusion reduces the set of stations among which one can stay unobservable).

5.4.8 Active chaining attacks over limited resources

In this section a universal active chaining attack that uses the limitation of resources is shown, as well as its detection and defence.

The goal of the attack is to determine whether two distinct pseudonyms (i.e. implicit addresses) belong to the same station or not. The attack consists of requesting service from both pseudonyms with such a load that one can conclude from the speed of the service which pseudonyms belong to the same station. Resources are computation power, I/O bandwidth, and memory.

If the

• local computation power is limited or

17 Since the invention of undeniable signatures, this splitting is not necessary any more (cp. §3.1.1.2).


• the bandwidth for sending and receiving anonymously and unlinkably is limited to a proper fraction per station,

then the station of course cannot handle service requests that exceed these limits in real time, or even not at all. This results in three attacks, using one resource each. To conduct such an attack the attacker needs to

• estimate the computation power of the attacked station or

• find out the bandwidth in the communication network (which is easy in a MIX network without distribution of the “last” messages (cp. §5.4.6.3), because it is known to every participant; otherwise he has to estimate it). For estimating, he can violate a fair access protocol (cp. §5.4.5.4.2 for collision resolving algorithms with average values and superposing) in order to narrow down the bandwidth.

• If the attacker knows the bandwidth, a certain number of attacking stations have to cooperate to exceed it, or a few stations violate the fair access protocol.

If a service is provided on time, the result is (assuming the bandwidth is known) that the two pseudonyms do not both belong to one station. If the service is not on time, this could also mean that there are other requests producing load. If the attacker knows one pseudonym of each station (i.e. a publicly assigned address), and if each station provides a service under this address, the attacker chooses the non-anonymous pseudonym as one of the two. In this way, the attacker can allocate every pseudonym to a station; this operation is of linear complexity in the number of stations.
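The allocation just described can be made concrete in a toy model; stations, capacities and pseudonym names are invented for illustration, and timing is reduced to the question of whether a request is still answered within the capacity of a station.

class Station:
    def __init__(self, capacity):
        self.capacity = capacity   # requests it can answer on time per slot
        self.load = 0

    def request(self, n=1):
        self.load += n
        return self.load <= self.capacity   # True = service provided on time

x, y = Station(capacity=10), Station(capacity=10)
owner = {"public_addr_X": x, "pseudonym_P": x, "pseudonym_Q": y}

# the attacker saturates the station behind its publicly assigned address ...
owner["public_addr_X"].request(n=10)
# ... and tests which anonymous pseudonym becomes slow at the same time
assert owner["pseudonym_P"].request() is False   # linkable to station X
assert owner["pseudonym_Q"].request() is True    # belongs to another station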

This active chaining attack via limited resources can, on the one hand, be detected by the attacked station: as long as a station does not reach its limits due to external service requests, it is not being attacked in this way. This situation is far better for the verification of protection than with passive attacks.

On the other hand, the active attack can easily be made more difficult:

• Every participant is free to decide how much computation power of his station he provides for external requests. It can even be distributed (pseudo-)randomly across time.

• In DC, RING and TREE networks, all stations are configured such that every station is able to send with the whole bandwidth, although this disables some optimizations for saving resources [Pfit 89].

• With distribution, all stations are configured such that every station is able to receive with the whole bandwidth, although this disables some optimizations for saving resources [Pfit 89].

With great effort and tremendous limitations of the usable power, the active attack can be completely prevented by a static splitting of the resources: prior to the actual service requests, all pseudonyms from which a service will be requested are published


anonymously and unlinkably. Let n be the number of those pseudonyms. Every station then provides only the n-th part of the minimal computation power (as well as of the minimal bandwidth) available in the communication network.

In a practical deployment, a trade-off between dynamic and static resource splitting will have to be found.

It should be pointed out that attacks on resource limits and their avoidance by static splitting are already well known: if two processes use the same resource (like two OS tasks the same CPU) with fair and dynamic resource splitting, each process may slow down the other one by holding on to the resource. If one of them has access to real time, it may notice the delaying. This modulatable delaying is a hidden channel from the other process to this one (cp. §1.2.2).

5.4.9 Classification within a layer model

To simplify the development, the implementation and the interconnection of communication networks, they are designed as layered systems. Every layer uses the services of the layer below and provides comfortable services to the layer above. The International Standards Organization (ISO) standardized a 7-layer model, the Basic Reference Model for Open Systems Interconnection (OSI). Because all international standardizations refer to ISO/OSI, the following figures show the layers for end-to-end and link encryption in this model.

To prevent switching centers from accessing any unnecessary protocol information, the end-to-end connection has to take place on the deepest layer that directly connects the stations. This is why the transport layer (layer 4) is used for that 18.

Because attackers that can eavesdrop on the connections should not gain any unnecessary protocol information either, link encryption should take place in the deepest layer where digital data is processed. This is why layer 1 is suitable for it. On communication channels that are exclusively assigned to two instances each, one can (cp. §3), by using a stream cipher, conceal without any additional cost whether and which information was transmitted.
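As a sketch of such link encryption: the two instances XOR every frame with a shared keystream, so the link carries a uniformly random-looking bit stream whether or not meaningful data is present. Generating the keystream from SHA-256 in counter mode is an illustrative stand-in of this sketch, not a cipher prescribed by the text.

import hashlib, os

def keystream(key, counter, length):
    # counter-mode keystream; the counter must never repeat under one key
    out = b""
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def link_frame(key, counter, data=None, size=64):
    # an idle link still sends a full frame: encrypted random padding is
    # indistinguishable from encrypted data for an eavesdropper;
    # data, if present, is assumed to be exactly `size` bytes here
    frame = data if data is not None else os.urandom(size)
    ks = keystream(key, counter, size)
    return bytes(a ^ b for a, b in zip(frame, ks))

key = os.urandom(16)
idle = link_frame(key, counter=0)                    # encrypted random padding
real = link_frame(key, counter=2, data=b"x" * 64)    # counter advanced by the
                                                     # two blocks already used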

These layers are named as the possible places for implementing end-to-end and link encryption. Hopefully these arguments will persuade implementors to pick these layers (and not the layers above, or none) as the place for the implementation, even though some implementors hope for a faster and easier implementation elsewhere – and the intelligence services hope for easier eavesdropping.

If the implementation on a certain layer is mistrusted regarding confidentiality and integrity, the data transmitted there should be encrypted by the layer above for guaranteeing concealment and integrity (cp. §3). With that, additional end-to-end or link encryption connections may arise. The latter seems less necessary, because

18 This may not apply to non-canonical communication networks that consist of protocols above layer 4. There one only obtains end-to-gateway and gateway-to-end transport security.


Figure 5.39: Allocation of end-to-end and link encryption in the ISO OSI reference model (participant – exchange – participant; OSI layers 0 medium to 7 application; end-to-end encryption at layer 4, link encryption on each connection at layer 1)

the deeper layers are of lower design complexity than the layers above; besides, remote maintenance for correcting errors is not necessary in the deeper layers.

The basic technique for protecting the receiver uses broadcasting. Broadcasting can be realized by an appropriate routing in layer 3 (network layer), so that layer 4 only has to generate and evaluate implicit addresses. One possible realization of broadcast is flooding [Tane 81], which can be achieved with a very simple path finding strategy: every station transmits every message to all neighbouring stations from which it has not received this message.
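A toy simulation of flooding under the stated rule may illustrate this; the graph, the queue-based scheduling, and the duplicate suppression via a `seen` set are simplifications of what real stations would do with message identifiers.

from collections import deque

def flood(neighbours, origin):
    # neighbours: station -> set of adjacent stations
    seen = {origin}
    queue = deque([(origin, None)])
    while queue:
        station, received_from = queue.popleft()
        # forward to all neighbours except the one the message came from
        for nxt in neighbours[station]:
            if nxt != received_from and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, station))
    return seen   # every reachable station receives the message

net = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
assert flood(net, "A") == {"A", "B", "C", "D"}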

In communication networks especially designed for broadcasting, broadcasting is done on layer 1. At least for some services, layer 1 can pick out the information assigned to the station; in figure 5.40 this is called channel selection.

MIX and DC networks employ special cryptographic mechanisms to create anonymity and unlinkability. (From a global point of view, the term “creating anonymity and unlinkability” is a contradiction in terms, because no information can be withdrawn from the attacker, so that all measures can only preserve anonymity and unlinkability (cp. §5.1.2).) I classify these mechanisms into the highest sublayers of layer 3 resp. layer 1, because

• the MIX network uses the routing of (the deeper sublayers of) layer 3, which however is not an end-to-end measure;

• the DC network uses the transmission of tokens (bits) on layer 1, but is not involved in discovering transmission errors, multiple access, or other services of layer 2 (data link layer) 19.

19 If a DC network is implemented on a medium with multiple access (like today’s typical LAN), it has to be implemented on layer 2 or above.


Figure 5.40: Allocation of basic techniques for securing traffic data and interest data in the ISO OSI reference model (columns: distribution, MIX net, DC net, RING net, TREE net over the OSI layers 0 medium to 7 application; entries include implicit addressing and distribution, channel selection, buffering and recryption, superposition of keys and messages, anonymous multiple access, and digital signal regeneration on ring resp. tree; the sublayers below the anonymity-creating ones are realizable without consideration for anonymity and unlinkability, the layers above have to keep anonymity and unlinkability)

RING and TREE networks – like all communication networks working on the principle of “unobservability of adjacent connections and stations as well as digital signal regeneration” – create anonymity within the network by using a ring- or tree-like medium in layer 0 and digital signal regeneration in layer 1.

From this it follows that broadcasting, as the basic measure for protecting the receiver,

• can use any desired implementations of layers 0, 1 and 2 without efficiency drawbacks, as long as the implementation does not take place in a communication network especially designed for broadcasting, or

• can use only an implementation of layer 0 if the implementation takes place in a communication network especially designed for broadcasting.

The MIX network may use any desired implementations of layers 0, 1, 2 and the deeper sublayers of layer 3 without efficiency drawbacks; the DC network, any desired implementations of layer 0 and the deeper sublayers of layer 1.

All these (sub)layers can be implemented without any quality losses for anonymity, so that they all may be used as a measure for improving performance and reliability.

All layers (within the communication network) above the layers that create anonymity and unlinkability have to preserve anonymity and unlinkability. This means that the upper layers must not exclude or link any (or at least only a few) of the alternatives that the anonymity- and unlinkability-creating layers make possible – seen from the attacker’s perspective 20. Otherwise an attacker could utilize the commonly accessible knowledge about the upper layers to identify sender and receiver, or to relate them to each other. For example, protocols for anonymous multiple access not only have to use the

20 In other words: no (or at least not too many) alternatives which are possible (and distinguishable by the attacker) in the layer creating anonymity and unlinkability may be excluded or linked by higher layers.


bandwidth of the shared channel of the DC, RING or TREE network efficiently, but also have to keep the anonymity of the sender (as well as the unlinkability of sending events).

This has consequences for the standardization of the protocols of these layers, which may not have been recognized (or were simply ignored) by the responsible standardization bodies. If a standard does not contain a subset of functionality that preserves anonymity (or unlinkability), it is useless for a communication network with verifiable privacy. Consequently, such international standards may be inadmissible in countries where the constitution or specific laws mandate privacy, and corresponding national standards may be inadmissible as well. Especially the first option, and the fact that modifying standards may take very long, may seriously hamper the future of international communication. Besides, the question remains whether people from other countries want to use communication networks that are easily observed.

For the reasons explained above, it is necessary that the protocols of the layers above the layers that create anonymity and unlinkability preserve the anonymity of the participants (or even the unlinkability of sending events). Towards the communication partners, however, the service used sets limits to this: consider two people communicating by telephone. They cannot hide from each other that their sentences (or the information units implementing them, be it as circuit-switched continuous bit streams or as packets that occur whenever there is something to listen to; TASI = time assigned speech interpolation [Amst 83][LiFl 83][Rei1 86]) are related to each other. In other words: it is not possible to create more anonymity or unlinkability towards the participants than the service used provides. The demand that all events on a higher layer fully correspond to events on the layers below, without allowing events to be linked, refers exclusively to unobservability by non-participants. If the attacker is either a regular participant himself, or has the cooperation of certain participants, some effort can be saved: one does not need to try to deny a linkage of events or objects that are already linked by the service used.

It should be mentioned that the techniques for protecting the receiver – the MIX, DC, RING or TREE network – can be implemented in higher (sub)layers as well. Distribution and MIXes could be placed on layer 4 (transport layer), which would require including some additional functions like path finding. If superposed sending were implemented on layer 4, the anonymous multiple access would have to be implemented once more on a higher layer. The same holds true for RING and TREE networks, where an additional encryption between the units on layer 4 would be necessary to simulate physical unobservability.

Therefore, distribution and the MIX, DC, RING or TREE network can be considered virtual concepts that can be implemented on almost any layer desired. But an efficient implementation needs to avoid unnecessarily complex layering and the double implementation of functions on different layers (cp. fig. 5.40).

Figure 5.40 also shows how the extension of the concept of “unobservability of adjacent connections and stations as well as digital signal regeneration” by superposed sending can be implemented: on a medium with suitable topology (layer 0) and a (sub)layer 1 with digital signal regeneration, a further layer for the superposition of keys and messages is put on top. Above that runs the anonymous multiple access.


As explained in [DiHe 79, page 420], the implementation of the (sub)layers that create or preserve anonymity and unlinkability (within the communication network) must not introduce features that an attacker can use to discriminate between alternatives that are indistinguishable in the abstract design. Examples of such unusable implementations are:

• modulation of the output of a MIX according to the random bit strings within the messages;

• direct output of the result of a modulo-2 adder in which, regarding the analogue signal characteristics, 1+1 ≠ 0+0 and 1+0 ≠ 0+1;

• pattern-sensitive time jitter in a RING network.

These and other errors, as well as implementations avoiding them, can be found in [Pfit 89, §3.1 and §3.2].

5.4.10 Comparison of strength and effort of RING-, DC- and MIX-Networks

To compare the strength and the necessary effort of the techniques shown, the following chart is helpful. Note that a DC network can resist stronger attacks than a MIX network, but the effort is also significantly greater.

The three different sections of the chart that describe the DC network pair the respective attacker model with the corresponding effort. In the entries with ≥, equality holds exactly for duplex communication. For the MIX network, the practically relevant growth value arises by neglecting the very slow growth of the message length caused by the necessary random numbers.

294

5.4 Basic measures settled inside the communication network

RING network (“unobservability of adjacent connections and stations as well as digital signal regeneration”):
Attacker model: physically limited.
Effort per station and bit: transmission O(n) (precisely: ≥ n/2).

DC network (first variant):
Attacker model: information-theoretic.
Effort per station and bit: polynomial in n, but impractical.

DC network (second variant):
Attacker model: information-theoretic towards confidentiality; complexity-theoretically limited towards service delivery.
Effort per station and bit: transmission O(n) (precisely: ≥ n/2); key exchange O(k · n).

DC network (third variant):
Attacker model: complexity-theoretically limited (cryptographically strong, well examined).
Effort per station and bit: transmission O(n) (precisely: ≥ n/2); key exchange O(k · n).

MIX network:
Attacker model: complexity-theoretically limited; no practical asymmetric concealment systems secure against adaptive active attacks are known.
Effort per station and bit: transmission inside the user’s interface area O(k), practically ≈ 1; transmission inside the whole network O(k²), practically ≈ k.

(n = number of participants; k = connectivity of the key graph of the DC network, resp. number of MIXes)


Figure 5.41: Integration of different security measures into one network (wide-area telephone network with MIXes for some services; star-shaped local exchange areas with twisted-pair copper cable or with optical fibres and superposed sending; RING and TREE nets, if need be with superposed sending; MIX cascades at local exchanges; wideband cable distribution net)

5.5 Upgrading towards an integrated broadband network

The upgrade of this narrowband network towards a broadband network should proceed in line with the population density of the area (cp. fig. 5.41).


If there are already cable conduits in the area, or if it is quite a rural area, and if PRNGs with appropriate speed and security properties are available, it should be less expensive to keep the cable structure; the distribution networks would be realized with superposed sending, even if PRNGs are expensive. If there are neither cable conduits in the area nor PRNGs with appropriate properties, RING networks should be implemented. If none of the conditions above holds and there is an immediate necessity to act, a RING network should be installed for the time being despite the higher expenses; superposed sending can be installed later on.

As time goes by, the share of distribution networks realized with superposed sending can be increased. Until Germany has a full-featured distribution network, MIX cascades should be used for highly sensitive narrowband applications, not only in the local switching centers but also in the main (supra-regional) switching centers.

Especially in large cities, where the local switching center has a high density of ports (mainly used by businesses), the technology will switch from copper cable to fibre optics early. This, the higher density of local switching centers, and the fact that there are also some regional offices in the city, make it possible to run cascades even over these regional offices.

Parallel to the expansion there also has to be an increase in reliability; with that, fault tolerance measures will come more and more into play.

Finally it should be mentioned that it is largely unclear how the costs and efficiency of the network expansion shown here relate to those of the planned one.

5.5.1 Telephone MIXes

The following technique, called telephone MIXes, allows the participants to create unobservable connections (unobservable end-to-end channels). Unlike with MIX cascades, the unobservable connections may exist for an arbitrarily long time and can be shut down by either partner.

Mainly we will only look on the connections between the station interfaces. The connec-tions on top of this is only regarded if it differs from ISDN (144 kBit/s full duplex).

Telephone MIXes don’t require to alter the long distance network and allow a step-by-step extension (so you may use both, analogue and digital local relays).

Smallband applications that can be run over ISDN’s 16 kBit/s D0 channel are onlycovered aside.

In contrast we will work with a hierachally ordered network with local and long distancenetworks from the beginning.

In §5.5.1.1 and §5.5.1.2 the basics of telephone mixes and of time slice channels arecovered. In §5.5.1.3 and §5.5.1.4 the establishing and down shutting of connections islooked at. In §5.5.1.5 we will take care of network accounting, in §5.5.1.6 of connectionsbetween local switching centers with MIX cascades and those without. Finally in §5.5.1.7we will talk about reliability.


5.5.1.1 Basics of time slice channels

As seen, the problems of §5.4.6.9 with MIX channels can be solved by splitting long-lasting connections into short (≈ 1 s) consecutive MIX channels, so-called time slice channels. With every new time slice, every participant has the opportunity to create new connections or to shut down existing ones.

Let every time slice channel have the duration d. The splitting into time slices is done synchronously for all participants of the whole network. As in §5.4.6.9, a time slice duplex channel consists of a sending and a receiving channel, TS channel and TR channel for short. For every time slice the network interface establishes a constant number of TS and TR channels (hence pairs of 64 kbit/s channels).

A real connection is initiated by a connection request message that is sent from interface Q (reQuest) to interface S (reSpond). In the ideal case, where S immediately accepts the request and both stick to the protocol, Q and S connect each other's TS and TR channels starting from the time slice previously agreed upon. The channels are disconnected synchronously when one of the two wants to release the connection (further channel request messages for the single time slice channels are not needed if a proper connection request message is used; cp. §5.5.1.3).

As long as an interface has no real connection, it sends random data to itself by connecting its TS channel to its TR channel. Since meaningful data is end-to-end encrypted, it looks like random data to the outside observer, too. Alternatively, one could leave the time slice channel open (unconnected, as in §5.4.6.9), but then M_m could distinguish it from a real connection. This is also a reason for establishing each part of the duplex channel on its own.
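A minimal sketch of this per-time-slice behaviour (class and method names are ours; the actual channel handling is stubbed out):

    import os

    class Interface:
        """Illustrative sketch: what a network interface does at each time slice."""

        def __init__(self, n_pairs=2):
            self.n_pairs = n_pairs      # constant number of TS/TR channel pairs
            self.partners = []          # currently connected partners

        def on_time_slice(self):
            # The same number of channels is established whether or not we talk,
            # so an outside observer always sees the same picture.
            for pair in range(self.n_pairs):
                if pair < len(self.partners):
                    self.connect(pair, self.partners[pair])
                else:
                    # Idle: loop the TS channel back onto the TR channel and fill
                    # it with random data, which is indistinguishable from
                    # end-to-end encrypted traffic.
                    self.loopback(pair, os.urandom(64_000 // 8))  # ~1 s at 64 kbit/s

        def connect(self, pair, partner): ...
        def loopback(self, pair, data): ...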

The necessary signalling is done almost as in §5.4.6.9, over a separate signalling channel. This channel has to be accommodated in the part of the bandwidth not used for data channels (with ISDN, about 16 kbit/s). This signalling channel will be the main bottleneck of the technique, because the connection request messages as well as the TS and TR channel establishment messages have to be transmitted over it (and, additionally, the connection request messages have to be distributed over it).

So, in contrast to ISDN, the part of the signalling that is necessary after connection setup takes place within the data channel (but only to an extent that keeps the data channel transparent). The signalling for connection setup is kept exclusively on the signalling channel.

Since the users' narrowband connections and the long-distance network should remain untouched by telephone MIXes, this technique can only be used in local areas. Therefore, every local switching center (LSC) is assigned one MIX cascade of m MIXes.

The division of labour between the LSCs and the MIXes is quite simple: the MIXes are exclusively responsible for the mixing functions; all other tasks are done by the LSCs. These tasks can be roughly split into two parts: the first deals with the communication between the participants and the first MIX of the cascade (as well as the distribution task of §5.4.1 (cp. [PfPW1 89, §2.2.3])), the other with the communication between the last MIX and the long-distance net (as well as some transformations, cp. §5.5.1.6).


[Figure: interfaces of caller R and callee G; each is attached via a MIX cascade M_R1, ..., M_Rm resp. M_G1, ..., M_Gm to the local phone exchange (LPhExch) of R resp. of G; the two exchanges are connected by the wide-area net.]

Figure 5.42: Interfaces of caller and callee with their respective local switching centers (divided into two functional parts) and attached MIX cascades

With larger LSCs (> 5,000 participants) and the limited bandwidth of the end users' interfaces, it becomes necessary to separate the participants once and for all into logical anonymity groups, within which all mixing and distribution takes place.

The traffic generated by dummy connections (TS and TR channels connected to each other on the same device) does not matter much, because it only affects the owner's interface. Besides, dummy connections are only used if no other TS or TR connections are up.

Next we will look at the case where Q and S belong to different LSCs. Let M_Q1, ..., M_Qm be the cascade of Q and M_S1, ..., M_Sm that of S (fig. 5.42).

The special case that one interface is connected directly (without MIXes) to the local switching center is looked at in §5.5.1.6.

5.5.1.2 Structure of the time slice channels

Time slice channels only differ from MIX channels (cp. §5.4.6.9) in their predefined duration and their predefined starting time. But the influence of the long-distance net on the structure and the beginning of the time slices has to be taken care of.

If Q and S are connected to different LSCs, M_Qm and M_Sm have to cooperate, jointly playing the role of the last MIX M_m of §5.4.6.9:

A TS channel from Q to S is passed from M_Qm over the LSC of Q, the long-distance net, and the LSC of S to M_Sm. M_Sm then connects it to the corresponding TR channel of S. To accomplish this, Q has to announce the LSC of S in the TS channel establishment message for M_Qm.

Since the connection between Q and S is made up of many time slice channels, it is useful to use the same channel in the long-distance net for all of them. The relation between the different time slice channels of the connection need not be made explicit for M_Qm, M_Sm, and the LSCs; it is sufficient that the channel in the long-distance net is shut down when there is no time slice channel to be transmitted between the LSCs of Q and S.


[Figure: the message carries, for each MIX M_R1, ..., M_Rm, a key of the symmetric cryptosystem and the current time slice; for the last MIX additionally the local exchange area of the receiver, the channel identifier, and the compensation mark.]

Figure 5.43: Data needed by MIXes MRi for TS channel establishment

[Figure: the message carries, for each MIX M_R1, ..., M_Rm, a key of the symmetric cryptosystem and the current time slice; for the last MIX additionally the channel identifier.]

Figure 5.44: Data needed by MIXes MRi for TR channel establishment

As usual, a MIX cascade has to wait with the TR channels of the current time slice until the beginnings of all corresponding TS channels have arrived.

Therefore, M_Qm and M_Sm have to buffer the TS channels arriving quickly from their own LSC until the TS channels from the long-distance net arrive (with a maximum latency of 0.2 seconds in the long-distance net, this amounts to 0.2 s · 64 kbit/s = 12.8 kbit of buffer per local 64 kbit/s simplex channel).

Figures 5.43 and 5.44 show the data the MIXes need for establishing a channel. The keys of the symmetric concealment system are passed via the hybrid encryption; the remaining data is carried in the D_i fields of the channel establishment message. The field "compensation mark" is explained in §5.5.1.5.

5.5.1.3 Establishing an unobservable connection

To establish an unobservable connection, interface Q sends a connection request message over its signalling channel. The message is handled with the scheme of §5.4.6.2: it is mixed by the cascade M_Q1, ..., M_Qm, sent, if necessary, over the long-distance net to the LSC of the receiver, and distributed there to all interfaces.

Since with every new time slice a participant should be able to establish a new connection, the connection request messages have to be mixed once per time slice, too. Let us assume that every interface contributes exactly one, possibly meaningless, message per time slice and channel.


[Figure: caller R and callee G, attached via the cascades M_R1, ..., M_Rm resp. M_Gm, ..., M_G1 to the wide-area phone net; a connection request (t_0, s_RG) is distributed to G; from time slice t_0 on, the TS and TR channels use the identifiers s_RG(1), s_RG(2) (time slice t_0), s_RG(3), s_RG(4) (time slice t_0 + 1), and so on.]

Figure 5.45: Connection establishment in the simplest case. The sending of start messages, i.e. the actual start of the connection, is not shown.

The interface S needs to know the number t_0 of the first time slice, i.e. when the connection is created.

The channel establishment messages of the single time slice channels (here two per time slice) are combined and integrated into the connection request message:

• S gets the fixed local network identifier of Q.

• The channel identifiers of the time slice channels are created with a PRNG; for this, Q sends the initial value s_QS to S. If s_QS(x) denotes the x-th identifier derived from s_QS, then s_QS(2 · t + 1) is used in time slice t_0 + t, t = 0, 1, ..., for the direction Q to S, and s_QS(2 · t + 2) for the direction S to Q.

• Q and S use one single key k_QS for the end-to-end encryption of all time slice channels (which nobody except Q and S can notice). k_QS can be derived from s_QS, too.

Alternatively, one could send a k_QS independent of s_QS, or use different initial values for each direction. But this would only increase the amount of connection request data to be distributed by the LSCs, for nothing.
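A minimal sketch of such a derivation (the text fixes no concrete PRNG; SHA-256 and the labels below are our assumptions):

    import hashlib

    def derive(s_qs, label, x):
        # x-th value derived from the shared initial value s_QS (assumed construction).
        return hashlib.sha256(s_qs + label + x.to_bytes(8, "big")).digest()

    def channel_id(s_qs, t, q_to_s):
        """s_QS(2t+1) for the Q->S channel, s_QS(2t+2) for S->Q, in time slice t0+t."""
        x = 2 * t + 1 if q_to_s else 2 * t + 2
        return derive(s_qs, b"channel-id", x)[:4]   # truncated, e.g. to ~28 bit

    def k_qs(s_qs):
        """The single end-to-end key k_QS, likewise derived from s_QS."""
        return derive(s_qs, b"key", 0)[:16]         # s_sym = 128 bit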

If everything works properly, Q establishes the TS and TR channels as announced, beginning with time slice t_0, with the identifiers derived from s_QS.

In the ideal case, the channels between the interfaces Q and S are connected, with S establishing its own TS and TR channels beginning with t_0.

In contrast to ISDN, where all signalling takes place outside the 64 kbit/s data channel, here it seems useful to use the as yet unused data channel for the further signalling of the connection setup (just as in analogue telephone systems):


First of all, the establishment of the connection by S is signalled to Q by a start message (Q may also send such a message to S if Q needs a connection in that direction; to prevent the MIXes from sending a valid start message into an unconnected TR channel, start messages must have an appropriate length).

Next, the connection between the terminals can be established. This means that the usual telephone services can now be used: making the peer's telephone ring, the peer's telephone responding, and so on. The signalling necessary for the transition from "waiting for partner" to "partner is online" is done via the as yet unused data channel.

If Q and S do not establish their time slice channels synchronously (e.g. if S has no unused data channel), unobservability is not affected, because the gap resulting from a missing TS channel is filled by at least one of the MIXes. TS channels without a corresponding TR channel are ignored by the receiver's last MIX.

So connections can be established completely asynchronously. For this, S must store all incoming connection requests at least for some time.

If S is blocked (all bandwidth is used up by current connections that the user does not want to terminate) or not ready to receive, Q may let its interface wait for S as long as necessary. To avoid S creating a channel to Q's LSC over the long-distance net long after Q gave up calling, the maximum waiting time of Q is limited. If the caller can pick any desired limit and send it with the connection request message to S, the telephone MIXes can be spared the technique of constantly repeating connection request messages over the signalling channel; this saves the load of permanent redialling. With longer timeout limits, the user need not necessarily wait at the terminal, but can wait for a signal from the terminal once the connection is established. If Q does not wait (the connection setup is cancelled) and S tries to establish its part of the channel without the start message from Q, S will cancel the setup as well.

If S waits very long before satisfying the connection request by Q, it has to create all identifiers prior to the current one. Instead of a linear arrangement, a tree-shaped one can be used (cp. [GoGM 86]), so that any identifier s_QS(x) can be created in log(x) steps instead of x steps. If there is enough memory to completely store one branch of the tree, sequential creation needs fewer than two steps on average; in the general case this means the effort is doubled.
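A minimal sketch of this tree-shaped (GGM-style) derivation; SHA-512 as the length-doubling PRNG and the fixed tree depth are our assumptions:

    import hashlib

    def prg(seed):
        # Length-doubling PRNG: one seed in, two child seeds out (assumed realization).
        h = hashlib.sha512(seed).digest()
        return h[:32], h[32:]

    def s_qs(seed, x, depth=32):
        """Reach the x-th leaf by walking the bits of x: O(log x) PRG steps instead
        of x. Caching the current branch makes sequential calls ~2 steps each."""
        node = seed
        for i in reversed(range(depth)):
            left, right = prg(node)
            node = right if (x >> i) & 1 else left
        return node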

If one of the two interfaces waits for the other and both are not connected to the same LSC, the long-distance net would, in the scheme above, be unnecessarily loaded with unused data channels. This can be avoided, at the price of a small reduction of unobservability, by enabling the respective last MIXes (resp. the LSCs) to recognize which time slice channels within the long-distance net belong to the same connection.

For this, in the case of a long-distance call, Q and S use a common identifier for all TS and TR channels of their connection.

The (not too big) disadvantage is that the exact duration of single long-distance calls becomes observable. If there have been only a few connections between two LSCs for a longer period, the same holds for the technique mentioned before: an attacker can assume with high probability that all consecutive time slices belong to exactly one connection.

But the advantage is that the last MIX of the sender's cascade can suppress meaningless data in the long-distance net. For this, the start messages of Q and S are transmitted in such a way that M_Qm and M_Sm can identify them. Only when the start message from S has arrived does M_Sm establish the channel into the long-distance net. The TR channels are filled by M_Qm and M_Sm with meaningless data until the arrival of the proper TS channel.

If the connection is not immediately used for data transmission, e.g. if S's readiness response takes some time, meaningless messages can be suppressed here, too. For this, the signal "connection to participant ready" has to be transmitted in plaintext (like the start message) to the last MIXes M_Qm and M_Sm.

Connection request messages have to be distributed inside the local network. This makes it possible for many participants to flood a local network with connection requests in a coordinated attack, so that eventually connection requests of other participants cannot be delivered any more (denial-of-service attack). This kind of attack is possible in every multi-layer network, ISDN included. Because the attacker(s) do not do anything illegal, the attack cannot be prevented, only detected and its effects minimized.

The latter can be achieved by adding priority flags to the connection requests, allowing each participant only a limited number of requests of higher priority. The distribution of connection requests is done in order of priority. The assignment of priorities to connection requests is done adaptively by every network interface: at the beginning, every connection request gets the lowest priority; if a connection request is not answered, it is repeated with the next higher priority available.

The correctness of the number of higher-priority connection requests per participant is checked by the LSCs by means of one-time "authorization marks". Such a mark is a message, digitally signed by the LSC, which indicates the priority. Every participant gets a certain number of marks for every higher priority for a certain period (week, month); they can be distributed in periods of low network load. If a connection request carries no authorization mark, it is assigned the lowest priority.

But now the LSCs could easily identify a sender by the authorization mark used. To avoid this, the marks are signed with blind signatures [Cha8 85][Chau 89]. These enable the participant to convert the authorization mark exactly once, so that it is still signed by the LSC, but it is no longer apparent to whom it was issued (cp. §3.9.5 and §6.4.1).
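As a minimal sketch of Chaum-style blinding (textbook RSA with toy parameters, insecure and for illustration only; a real system would use full-size RSA): the LSC signs m · r^e mod n without seeing m, and the participant unblinds by dividing out r.

    from math import gcd
    import secrets

    p, q, e = 61, 53, 17            # classic toy RSA parameters
    n = p * q                       # 3233
    d = pow(e, -1, (p - 1) * (q - 1))

    def blind(m):
        """Participant blinds the mark m with a random r coprime to n."""
        while True:
            r = secrets.randbelow(n - 2) + 2
            if gcd(r, n) == 1:
                return (m * pow(r, e, n)) % n, r

    def sign(blinded):
        """The LSC signs without learning m."""
        return pow(blinded, d, n)

    def unblind(blind_sig, r):
        return (blind_sig * pow(r, -1, n)) % n

    m = 42                              # encodes the mark's priority, say
    blinded, r = blind(m)
    sig = unblind(sign(blinded), r)
    assert pow(sig, e, n) == m          # a valid LSC signature on m, yet the LSC
                                        # cannot link it to the blinded copy it signed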

Finally, a connection request message contains for each MIX M_Qi a field D_i with:

• a current time stamp (the number of the current time slice)

and additionally for M_Qm:

• a hint whether the connection request message is meaningless and can be ignored,

• the local network identifier of the receiver,

• the authorization mark.

The components of a connection request message are shown in figure 5.46.


[Figure: the connection request message carries, for each MIX M_R1, ..., M_Rm, a key of the symmetric cryptosystem and the current time slice; for the last MIX additionally a flag "insignificant / significant message", the local exchange area of G, and the authorization mark; for G itself the local exchange area of R, the initial time slice t_0, the initial value s_RG, and a time limit for validity.]

Figure 5.46: Connection request message of Q for S

5.5.1.4 Shutting down an unobservable connection

A connection can be shut down by either partner. Shutting down is done by one partner disconnecting the TS and TR channels from a certain time slice on and "short-circuiting" them or connecting them to other partners.

There are essentially two options for how one partner signals a shutdown request to the other while keeping transparency.

On the one hand, the requesting interface can send a shutdown request across the signalling channel. But since the signalling channel is already heavily loaded, this "natural" solution does not seem advisable.

On the other hand, one could reassign a small part of the signalling channel, precisely 1 bit per time slice, to the data channel. This one bit per time slice would be transmitted over the TS and TR channels; the shutdown request could be signalled by sending a "1" on it.

5.5.1.5 Accounting the network usage

The techniques mentioned in [PfPW 89] for accounting network and service usage without identification can be applied here as well.

For connections within the same local network, accounting of single connections seems technically pointless, because they do not produce any additional load in the network. Besides, those connections are completely unobservable.

If there nevertheless has to be per-usage accounting, it can be done, while keeping total unobservability, by local accounting devices on every participant's network interface, which are read out by the carrier at fixed intervals.

If unobservability towards the carrier is given up (towards everyone else there naturally remains total unobservability), the same accounting techniques as for long-distance nets may be used.

To account for connections using the long-distance net (only these produce real costs), compensation marks can be added to the channel establishment messages for the respective last MIX (just like the authorization marks of §5.5.1.3). If an LSC can recognize which time slices belong to the same long-distance call (cp. §5.5.1.3), it is possible, as today, that a call has to be paid only when both sides answer the phone. It then suffices to use the TS channels for payment, and not every single time slice has to be paid for.

In figure 5.43, the data necessary for accounting was assigned, somewhat arbitrarily, to the TS channel establishment messages. For TS channels that do not have to be routed over the long-distance net, the field for the compensation mark is empty (but of course it exists).

The compensation marks can be bought by the participant with a digital payment system preserving unobservability (cp. §6 and [PWP 90][BuPf 87][BuPf 89]). The necessary message exchange runs over a normal unobservable connection.

If all time slices of a connection over the long-distance net share a common identifier, either the caller (Q) or the callee (S) may pay: the connection request message, sent by the caller to the callee and to M_Qm, can state who is to pay for the connection. M_Qm tells M_Sm who is to pay. After the exchange of the start messages signalling the establishment of the connection between the two interfaces, the last MIX on the paying side accepts only TS channel establishment messages with valid compensation marks; otherwise the connection is shut down.

Instead of the last MIXes, the LSCs of Q and S could take over the accounting if they get the necessary data (connection identifiers, who is to pay, compensation marks) from the MIXes.

5.5.1.6 Connections between LSCs with MIX cascades and conventional LSCs

For connections between LSCs with MIX cascades and conventional LSCs, the last MIX plays the role of a translator.

If only Q's LSC has a MIX cascade, Q tells the MIX M_Qm the number of S (along with the data of the normal scheme: s_QS and the number of the first time slice) in a special connection request message. M_Qm then establishes a conventional connection to S over its LSC and the long-distance net and connects it with the time slice channels of Q.

The calling participant Q is as unobservable as if S had a MIX cascade; the called participant S is as observable as in today's ISDN.

If only S's LSC has a MIX cascade, the main question is addressing. If the calling participant uses a conventional call number, the unobservability of the called participant is lost. That is why implicit addresses should be provided for those connections, too.

Soon we will see that an anonymity group will consist of about 5,000 participants. If every participant uses about 50 different open implicit addresses (cp. §5.4.1.2), about 5,000 · 50 = 250,000 addresses are actually needed; if the space of possible addresses is to grow quadratically with the number of addresses actually needed (to prevent collisions), it comprises 250,000² = 6.25 · 10^10 addresses, so every implicit address consists of 11 (decimal) digits. Additionally, the anonymity group of S has to be selected: the size of a local network will surely never exceed 50,000,000 interfaces, so with anonymity groups of 5,000 participants there are at most 10,000 groups, and 4 digits suffice.

Therefore, the following scheme seems adequate: Q chooses the prefix of the local network of S and establishes a conventional connection to M_Sm. With a further 15 decimal digits (11 + 4, see above) it tells M_Sm the implicit address and the anonymity group of S. M_Sm acts as a proxy and sends the corresponding connection request message to S. If S accepts the connection, M_Sm concatenates the conventional and the protected connection. If S refuses, M_Sm offers to keep the connection request until S calls back (by accepting the connection). This prevents permanent redialling.

In contrast, a hidden implicit address will be about 200 digits long and requires some cryptographic abilities of the caller: he needs a computer to store keys and to calculate addresses, and he must be able to send data from this computer to M_Sm, either over ISDN or with a modem over analogue connections. The transmission is done within the connection between Q and M_Sm, hence not in the signalling channel.

It shall be mentioned that an LSC having a MIX cascade can still serve conventional participant interfaces.

5.5.1.7 Reliability

Regarding reliability, the cascade obviously is a serial system. But here it suffices to make each MIX internally more fault-tolerant; moreover, if a MIX fails, the interfaces connected to the LSC can leave out this MIX in their messages.

There are also fault tolerance techniques involving more than one MIX, cp. §5.4.6.10 and [Pfi1 85][MaPf 87][Pfit 89, §5.3]. But they increase the effort in general, and the problem with anonymous return addresses, which makes these techniques necessary, does not occur here.

Regarding protection from transmission errors there is no difference to ISDN; here, too, a synchronous but unreliable transmission is assumed.

By using a proper stream cipher (pseudo one-time pad, i.e. a block cipher in OFB mode), additional propagation of bit errors on the 64 kbit/s channels is prevented. On the signalling channel, protocols like HDLC are used to secure the transmission against errors.

5.5.2 Effort

Next we will estimate the practical effort and capacity of the telephone MIXes described in the previous section.

It will turn out that, with interfaces limited to 2 · 64 + 16 kbit/s duplex channels and m = 10 MIXes, about 5,000 interfaces can be connected to one MIX cascade.

Two main bottlenecks will show up:

• the transmission of the TS and TR channel establishment messages, the connection request messages, and the receipts from the interfaces to the LSCs, as well as

• the distribution of the connection request messages to the interfaces.

Both must be handled by the 16 kbit/s signalling channel. Since a certain fraction of its bandwidth has to be used for error correction and sometimes also for narrowband services, we will assume an effective duplex bandwidth of b = 12 kbit/s.


Parameter                    symmetric stream cipher    asymmetric block cipher
Key length                   s_sym = 128 bit            irrelevant
Block length                 b_sym = ∞                  b_asym = 660 bit
Length of encryption of N    |N|                        b_asym (if |N| ≤ b_asym)

Figure 5.47: Parameters of the cryptographic systems used

The preparation and processing of the messages by the interfaces, the mixing itself, and the data channels are only minor problems.

In §5.5.2.1, parameters describing the required cryptographic systems, together with typical values, are introduced. In §5.5.2.2 the duration z of a time slice is estimated, and in §5.5.2.3 the number n of interfaces that can be connected to one MIX cascade. In §5.5.2.4 we look at the complexity of the tasks the interfaces and the MIXes have to perform per time slice. Finally, in §5.5.2.5 we estimate the duration of connection establishment.

As a reminder, here is the message scheme (slightly transformed) again:

Suppose a participant wants to send a message N (already end-to-end encrypted) from his interface A to the interface B of another participant, and additionally wants every MIX M_i to receive certain data D_i. Then he consecutively creates m + 1 messages

N_{m+1} := N
N_i := c*_i(D_i, N_{i+1})   for i = m, m−1, ..., 2          (5.1)
N_1 := k_{1A}(D_1, N_2)

and sends N_1 to M_1. N_1 is called the MIX input message.

Every M_i receives the message N_i from M_{i−1}; the transcoding of N_i consists of the decryption with k_{1A} resp. d_i and the removal of D_i.

For M_1 one could use a hybrid cryptosystem instead of the symmetric one. But this would only increase the effort needed to create N_1, and in certain cases it would even reduce security.

5.5.2.1 Parameters for describing the cryptographic systems needed

The parameters mentioned here are oriented towards the best-known systems, the symmetric DES and the asymmetric RSA (cp. §3.7 and §3.6).

It should be pointed out (cp. §5.4.6.8) that the security of these cryptographic systems, and of all MIX implementations based on them, is not even proved relative to supposedly hard problems such as the factoring assumption.

This disadvantage is ignored here only because, on the one hand, it applies to all practical concealment systems and, on the other, DES and RSA are well examined and have not been publicly broken.

The parameters and their assumed values are shown in figure 5.47.


5.5.2.1.1 Symmetric Concealment Systems

For the time being, we assume a secure block cipher with 128-bit keys, like the generalization of DES with alternative key generation (cp. §3.7.7).

For the techniques shown here we need a synchronous stream cipher, so the block cipher is run in OFB mode (cp. §3.8.2.4). Thus there is no expansion due to block padding or the like.

5.5.2.1.2 Asymmetric Concealment Systems

For the asymmetric cryptographic system RSA, a modulus of 660 bits seems sufficient (cp. §3.6).

5.5.2.1.3 Hybrid Encryption

To save bandwidth, the hybrid encryption always takes as much as possible of the plaintext N into the first, asymmetrically encrypted block (denoted N′). Hence, if N′′ is the rest of N:

c*_E(N) := c_E(N)                         if |N| ≤ b_asym
c*_E(N) := c_E(k_SE, N′), k_SE(N′′)       otherwise          (5.2)

Regarding the length of c*_E(N), note that the encryption has to be made indeterministic (cp. §5.4.6.2) to keep unobservability.

When using RSA, the determinism is removed by encoding the message indeterministically. Basically, this is done by adding a certain number a of randomly picked bits.

The resulting length expansion can be put to use by making those a bits the symmetric key k_SE, so a = s_sym.

Unfortunately, the exact implementation of this "expansion" is not that trivial. It was shown that with RSA, encoding by simply prepending random bits as in [Chau 81] results in an insecure MIX network (cp. §5.4.6.8).

But the following expansion is still secure: prior to encryption, the encrypter picks a random string of s_sym bits and concatenates it with the message. The result is run through a permutation π that "removes" the structure of the message as well as possible. The permutation π and its inverse π^{-1} are publicly known and may be defined by a fixed key of a symmetric concealment system. Decoding works the other way round, applying π^{-1} first and then removing the s_sym-bit string.

Taking this expansion into account:

|c*_E(N)| = max{b_asym, |N| + s_sym}          (5.3)
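A minimal sketch of c*_E and its length (the primitives are stubbed as function arguments; the parameter values follow figure 5.47, and the splitting point is our reading of (5.2)):

    S_SYM, B_ASYM = 128, 660  # bits, cp. figure 5.47

    def hybrid_len(n_bits):
        """|c*_E(N)| = max{b_asym, |N| + s_sym} -- equation (5.3)."""
        return max(B_ASYM, n_bits + S_SYM)

    def hybrid_encrypt(c_E, stream, pi, N, k_SE):
        """Structure of c*_E(N) per (5.2). c_E: asymmetric encryption of one block;
        stream: symmetric stream cipher keyed with k_SE; pi: the structure-removing
        permutation applied before the RSA step. The s_sym random bits required for
        indeterministic encoding double as the key k_SE."""
        fits = (B_ASYM - S_SYM) // 8          # plaintext bytes fitting beside k_SE
        n1, n2 = N[:fits], N[fits:]
        out = c_E(pi(k_SE + n1))              # first, asymmetrically encrypted block
        if n2:                                # rest N'' under the stream cipher
            out += stream(k_SE, n2)
        return out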


5.5.2.2 Minimal duration of a time slice

For every time slice and every available 64 kbit/s data channel, every interface has to send three MIX messages: a TS and a TR channel establishment message as well as a connection request message. Their lengths are denoted L_TS, L_TR, and L_C.

The length of a receipt (cp. [PfPW1 89, §2.2.3]) is denoted L_R; if RSA is used as the signature system, L_R = b_asym in general. Since the receipts hardly matter due to their short length, we assume that every interface sends one receipt per time slice.

Therefore, an interface has to be able to send 2 · (L_TS + L_TR + L_C) + L_R bits over its signalling channel during every time slice, so:

z ≥ (2 · (L_TS + L_TR + L_C) + L_R) / b          (5.4)

The lengths L_TS, L_TR, and L_C are estimated next: in §5.5.2.2.1 the length of a MIX message is estimated in general, in §5.5.2.2.2 the specific values are inserted into (5.4).

5.5.2.2.1 Length of a MIX message

By (5.1) of §5.5.2:

|N_1| ≤ |D_1| + |N_2|

resp., for i = 2, ..., m, by (5.3) of §5.5.2.1.3:

|N_i| = max{b_asym, |D_i| + |N_{i+1}| + s_sym}

Since for i < m the message N_{i+1} contains at least one asymmetrically encrypted block, |N_{i+1}| ≥ b_asym holds for i < m, so the maximum is attained by its second argument. Resolving the recursion yields

|N_1| ≤ Σ_{j=1}^{m−1} |D_j| + (m − 2) · s_sym + max{b_asym, |D_m| + |N_{m+1}| + s_sym}          (5.5)

In the following section we will start from equation (5.5).

5.5.2.2.2 Estimation

The structures of the three message types are shown in figures 5.43, 5.44, and 5.46. The fields are assigned the following lengths:

Figures 5.43, 5.44, 5.46:

• Time slice number: 30 bit

• Local network identifier: 22 bit

Figures 5.43, 5.44:

• Compensation mark for paying the long-distance net usage: 670 bit

• Channel identifier: 28 bit

Figure 5.46:

• Discrimination between meaningless/meaningful messages: 1 bit

• Initial value s_QS of the PRNG: 128 bit

• Time limit for validity (i.e., how long Q will wait for S): 20 bit

• Authorization mark denoting the priority of a connection request: 670 bit

For both marks we assume that the priority of the authorization mark resp. the value of the compensation mark is not only specified implicitly by the signature (RSA: 660 bits) but also explicitly (with at least 10 further bits). This spares the checking LSC from trying all possible keys to validate the signature.

For a TS or TR channel establishment message, |D_i| = 30 bits each for i = 1, ..., m − 1 (the key of the symmetric concealment system was already counted as part of the hybrid encryption), and |N_{m+1}| = 0.

For a TS channel establishment message, |D_m| = 750 bits, and by (5.5), |N_m| = 750 bits + s_sym. For a TR channel establishment message, |D_m| = 58 bits, and by (5.5), |N_m| = b_asym.

From (5.5) follows:

L_TS = (m − 1) · (30 bit + s_sym) + 750 bit
L_TR = (m − 1) · (30 bit + s_sym) − s_sym + b_asym          (5.6)

For a connection request message, |D_i| = 30 bits each for i = 1, ..., m − 1, and |D_m| = 723 bits. By (5.5), |N_m| = 723 bits + |N_{m+1}| + s_sym, where |N_{m+1}| = b_asym is sufficient, since the receiver only has to obtain about 200 bits.

Then

L_C = (m − 1) · (30 bit + s_sym) + b_asym + 723 bit          (5.7)

From (5.6) and (5.7):

L_TS + L_TR + L_C = (m − 1) · 90 bit + (3 · m − 4) · s_sym + 2 · b_asym + 1473 bit          (5.8)

Inserting the values for s_sym and b_asym (cp. fig. 5.47) and b = 12 kbit/s into (5.4) yields

z ≥ (474 bit · m + 2191 bit) / (6000 bit/s) + (660 bit) / (12000 bit/s) = 0.079 s · m + 0.42 s          (5.9)

so for m = 10, a value of z = 1.22 s is sufficient.
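A short recomputation of (5.6)-(5.9) (a sketch; the variable names are ours):

    S_SYM, B_ASYM = 128, 660        # bits, cp. figure 5.47
    B = 12_000                      # effective signalling bandwidth in bit/s
    M = 10                          # number of MIXes

    l_ts = (M - 1) * (30 + S_SYM) + 750              # (5.6)
    l_tr = (M - 1) * (30 + S_SYM) - S_SYM + B_ASYM   # (5.6)
    l_c  = (M - 1) * (30 + S_SYM) + B_ASYM + 723     # (5.7)
    l_r  = B_ASYM                                    # receipt = one RSA signature

    z_min = (2 * (l_ts + l_tr + l_c) + l_r) / B      # (5.4)
    print(f"z >= {z_min:.2f} s")   # -> z >= 1.21 s, so z = 1.22 s suffices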


5.5.2.3 Maximum number of interfaces to be served by one MIX cascade

The number n of interfaces that can be served by one MIX cascade is limited by the distribution of the connection request messages: for this distribution, every interface has only one receiving channel of capacity b, over which the connection requests for all n participants have to be delivered.

How strong this limitation is depends on the traffic situation. We simplify by modelling the distribution as an M/D/1 system, where λ is the maximum average rate at which every participant receives connection requests. We assume λ = 1/300 s⁻¹, so at peak load there are, on average, twelve connection requests per participant and hour. (This value for λ is surely sufficient, if not overrated: according to long-distance call statistics for Germany in 1985 [SIEM 87], there are fewer than 3.1 calls per interface and day on average. If one trusts the expectations of Siemens, λ = 1/300 s⁻¹ is too high even for peak load: according to [SIEM 88], the most powerful switch from Siemens can perform 4.8 connection attempts per participant and hour. Besides, with telephone MIXes only the incoming calls count towards λ.)

Every distributed connection request message consists of exactly one asymmetrically encrypted block (cp. §5.5.2.2.2) and therefore has length b_asym, so at most µ := b / b_asym connection request messages per second can be distributed.

If T_q is the average system time, i.e. the time a connection request needs on average to reach an interface, then during peak hours, on average [Klei 75, §5.5, especially (5.74)]:

T_q = (2 · µ − n · λ) / (2 · µ · (µ − n · λ))          (5.10)

Solving for n results in

n = (µ / λ) · (µ · T_q − 1) / (µ · T_q − 0.5)          (5.11)

For λ = 1/300 s⁻¹, T_q ≤ 0.5 s, b = 12 kbit/s, and b_asym = 660 bit, this yields n ≤ 5137.

Especially for connection request messages, the use of open implicit addresses (cp. §5.4.1.2) would be worthwhile: since the called interface S identifies the calling interface Q by its open address a_QS, the connection request only has to contain the number of the desired time slice; all other information can be reused by S. If the caller does not use his own interface Q but calls from another local network, the connection can be routed over Q to the current interface. With an address length of 100 bits, a connection request is no longer than 130 bits. If only such addresses are used, the limit becomes n ≤ 27389, due to µ = b/130 bit (cp. (5.11)).
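A short check of (5.11) with these numbers (a sketch; the names are ours):

    LAMBDA = 1 / 300     # connection requests per participant and second
    T_Q = 0.5            # bound on the average system time, in seconds
    B = 12_000           # distribution bandwidth in bit/s

    def max_participants(msg_bits):
        mu = B / msg_bits                # messages distributable per second
        return int(mu / LAMBDA * (mu * T_Q - 1) / (mu * T_Q - 0.5))   # (5.11)

    print(max_participants(660))   # full connection request block -> 5137
    print(max_participants(130))   # open implicit address variant -> 27389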

The M/D/1 assumption is, as explained, somewhat simplifying, but this hardly matters: the arrivals are almost "stateless", since redialling by repeating the connection request message is not necessary; the service duration is deterministic, apart from some repeats due to errors; and the available bandwidth need not be shared statistically. Besides, as long as T_q ≫ 1/µ, the assumption has very little influence on n, and (5.11) essentially reduces to the trivial requirement n · λ ≤ µ.


5.5.2.4 Computational complexity for the interfaces and MIXes

The computational effort of the interfaces and MIXes mainly consists of cryptographic operations: preparing and recognizing implicit addresses, preparing and transcoding MIX messages, signing and testing receipts, and encrypting and decrypting the data channels.

But since the encryption rates of both concealment systems are much higher than the transmission rates, these operations are not problematic.

The remaining effort for the interfaces, like managing keys, implicit addresses, etc., can be neglected.

The remaining effort for the MIXes mainly consists of buffering and re-sorting messages as well as testing time stamps. This affects 6 input batches per time slice with n messages each, hence fewer than 27,389 messages each (cp. §5.5.2.3). With special hardware this should be possible within 0.01 seconds per MIX. To save additional time, the connection request message may be arranged so that it first establishes a sending channel to M_Qm resp. the LSC, over which the additional data for M_Qm and S is sent; the participant may then decide later whom to really connect to.

In general, care must be taken that messages of different time slices are processed by the MIXes in pipeline fashion; thus the time slices may begin at different moments on different MIXes.

5.5.2.5 Connection startup duration

The startup duration mainly consists of

• waiting for the transmission of the connection request message to the MIX cascade of the calling interface (≤ z),

• the latency of the MIX cascade of the calling interface (m · 0.01 s),

• the latency of the long-distance net (≤ 0.2 s),

• waiting for the distribution to the receiver (T_q on average during peak load),

• waiting for the establishment of the TS channels by the called interface (≤ z, provided the called interface responds immediately),

• the latency of the MIX cascade of the called interface (m · 0.01 s).

Summed up, this makes 2 · (z + m · 0.01 s) + T_q + 0.2 s; with the values from above, 2 · (1.22 s + 0.1 s) + 0.5 s + 0.2 s ≈ 3.34 s. On average this duration will be about half as long.

5.6 Network management

First of all, §5.6.1 discusses who (sets up and) runs which parts of the communication network (and is consequently responsible for the service quality).


After that, §5.6.2 shows how accounting for the usage of the communication network can be done securely and comfortably without undermining anonymity, unobservability, and unlinkability.

The payment for (higher-level) services provided over the communication network can be done with any digital payment system desired (cp. §6).

5.6.1 Operating a Network: Responsibility for Service Quality vs. the Threat of Trojan Horses

Regarding the operation of a network and the responsibility for service quality, there is the following dilemma [Pfi1 85, pg. 129]:

If everybody uses an arbitrary user station (which you probably already guessed, since nothing was said explicitly about this), no organisation is willing (or able) to take responsibility for the service quality. This holds at least if mistakes or insufficient performance of one station can influence the service quality of other user stations that do not even communicate with the faulty station (which is why, beginning with §5.4, we speak of network participants rather than users). Such an "anarchic-liberal" network management may be optimal for the validation (by the users) of anonymity, unobservability, and unlinkability, but it is almost totally unusable for service-integrated communication networks. Of course it can be run on top of any communication network desired, at the price of lower cost efficiency, lower performance, lower reliability, and more distrust by the rest of society, which keeps asking what those participants may have to hide.

On the other hand, if one organisation designs, produces, installs, and runs all network components, especially the participant stations, this organisation may be willing to take responsibility for the service quality. But nobody, especially no participant, can rule out that parts of the organisation install Trojan horses (cp. §1.2.2). Such an organisation would be uncontrollable regarding security and confidentiality.

Since every protection measure implemented within the communication network is realized by the participant stations, it is useful to split the stations, regarding design, production, installation, and operation, into two parts: the terminals and the network interfaces.

The participant terminal comprises those parts of the participant station that have nothing to do with the communication with other stations. Consequently, the terminal can be under the sole control of the participant.

The network interface comprises all parts of the participant station involved in the communication with other stations. Consequently, the carrier is responsible for the network interface.

For example, the CCITT calls our participant terminal Data Terminal Equipment and our network interface Data Circuit-Terminating Equipment (in the context of X.25, cp. [Tane 81][Tane 88]). With ISDN they are called Terminal Equipment and Network Termination [ITG 87][Tane 88].

As long as there is to be a carrier responsible for service quality, the complexity of the interface should be kept as small as possible for better validation of security and confidentiality. Additionally, design, production, installation, and operation can be publicly checked. It would then be acceptable, regarding anonymity, unobservability, and unlinkability, to have only a few or even a single vendor of interfaces whose quality is accepted by the carrier. The vendors have to publish the designs, including all production aids and design criteria. Consumer organisations like "Stiftung Warentest" should continuously look for Trojan horses inside the interfaces as well as inside the participant stations. This is quite complex at the beginning, when design and production aids and all designs have to be checked, but later on the control of the unmodified production is much easier.

Interfaces should consist of only a few integrated digital circuits plus analogue sender and receiver circuits. Consequently, maintenance or repair is quite seldom. Most importantly, there should be no remote maintenance, i.e. no ability for the vendor or carrier to modify the software aboard the interface over the communication network. Today many telephone switches and other devices have this ability, since software failures have to be corrected immediately; as seen in §5, a vendor can thereby install or remove Trojan horses at will. This does not hold if remote maintenance is limited to remote inquiry: with this kind of access no software can be altered over the communication network; only existing diagnosis tools can be executed. The result is that a service technician can order the needed spare part and bring it along. But guaranteeing that remote maintenance is "read-only" is the same old problem of excluding a Trojan horse (cp. §1.2, §5).

In the next subsections we describe which functions the interfaces have to provide for each of the basic techniques for protecting traffic data and interest data. Figure 5.48 gives an overview.

5.6.1.1 End-to-End encryption and distribution

Since faulty end-to-end encryption or decryption, faulty generation or recognition of implicit addresses, or faulty reception of distributed information units only affect the participant's own station and its communication partners, these functions can be put into the participant's terminal.

5.6.1.2 RING- and TREE-Networks

The interfaces of RING and TREE networks have to contain not only the physical interface with digital signal regeneration, but also the access techniques and all fault tolerance techniques needed; for if these functions produce errors (reducing performance or reliability), all other participants of the RING or TREE network are affected.

If there is a Trojan horse of the carrier or the vendor in the interface, it may register the sending of the affected station, which completely overrides, for the sender, the protection given by the transmission topology and the digital signal regeneration. With that, the problem of Trojan horses has been transferred from the LSC to the interfaces. Nevertheless, the situation is much better (cp. §5.6.1), since interfaces are much simpler than LSCs, which makes a validation against Trojan horses much easier. Additionally, such a validation has to be performed much less often, because maintenance is needed less often.


[Figure: a user station, consisting of the user devices (trusted zone of the user: no Trojan horse) and the network connection with all functions needed for the quality of service of others (trusted zone of the network provider: correct realization). Functions shown: point-to-point encryption; implicit addressing; MIX cascades; RING-net transfer and access protocol; superimposed sending with key generation and superimposing access protocol; transfer protocol. Desired: no overlapping of the trusted zones that are needed. Reality: labels "all MIXes", "any desired MIX".]

Figure 5.48: Functions of the interfaces


5.6.1.3 DC-Network

Besides the physical network interface, the interface of the DC network has to contain the PRNG, the access techniques, and all required fault tolerance measures. If these functions cause performance or reliability problems at one participant, all other participants of the network are affected, too. If there is a Trojan horse of the carrier or vendor in the interface, it may register the sending of this station, which would void the protection of superposed sending. The problem of Trojan horses has therefore only been transferred from the switching centers to the interfaces; but just as in §5.6.1.2, the situation has improved.

With a modular structure of the interface corresponding to figure 5.40, the modules representing the physical interface (medium and the lower sublayer of layer 1) are uncritical regarding anonymity, unobservability, and unlinkability according to the attacker model of DC networks: there, anybody may mount an observing attack anyway. Even modifying attacks cannot harm anonymity if appropriate protocols are used; they can only prevent the service. Appropriate protocols for error and attack diagnosis are shown in §5.4.5.6 and §5.4.7 as well as in [WaPf 89][WaPf1 89].

For a large public DC network with a carrier, it is even more necessary than for a local or special-purpose network that the cryptographic strength of the PRNG be proven (or at least validated; cp. §3). Otherwise the PRNG may contain a hidden trapdoor enabling the network carrier or designer to observe the sending of the participant stations; or a pseudo-random number generation that has been a de facto standard in this DC network for years, and is therefore highly expensive to replace, will finally have to be abandoned.

5.6.1.4 Requesting and Overlaying as well as MIX-Networks

If servers or MIXes are run by normal network participants, the interface comprises a big database with adders or (small) switches and a large number of very powerful decryption and encryption devices, as well as all measures for fault tolerance. As explained in §5.4.6.7, one may try to protect by MIXes not only the communication relations but also the sending and receiving of participant stations. Besides the difficulties shown in [Pfit 89, §3.2.2.4], namely that either many meaningless information units or long latencies have to be accepted, such voluminous interfaces can naturally observe the participant stations (though not their communication relations). This approach is not only very expensive and uncomfortable; its benefits are also highly doubtful.

The switching function of the MIXes can be eliminated by forcing a fixed order in which the MIXes are passed (cp. §5.4.6.1). But even apart from that, maintenance and repair would be needed more often than is acceptable for devices kept in private rooms.

The maintenance and repair situation is completely unproblematic as long as (cp. §5.5.1) normal participant stations do not operate as servers or MIXes.

With requesting and overlaying, "errors" of participants who do not run server functionality, or whose servers are not affected by the errors, do not hamper the service quality of uninvolved participants. Hence the measures protecting each single participant can be implemented exclusively in the participant's terminal. These measures are the creation of request vectors, their encryption and transmission to the servers, the reception and decryption of messages from the servers, and the superposition.

Likewise, in a MIX network the service quality of uninvolved participants is not hampered by errors of participants who do not run MIX functions, or whose MIX functions are unaffected, so here too the measures protecting each single participant, namely the multiple encryption and decryption, can be implemented exclusively in the participant's terminal. With the technique of §5.5.1, i.e. meaningless time slice channels and the distribution of connection requests, sending and receiving can be protected much better than with the technique of §5.4.6.7, where every participant station is a MIX.

For a large public MIX network it is much more important than for local or special-purpose networks that the cryptographic strength of the concealment system used be publicly proven, or at least validated (cp. §3 and §5.4.6.8).

5.6.1.5 Combinations as well as heterogeneous networks

If, as seen in §5.4.4, several techniques for protecting traffic and interest data are combined, it is useful (regarding the validation of the protection) not to realize one interface containing all the necessary functions, but a cascade of the interfaces necessary for the individual techniques, corresponding to the layering of §5.4.9. Then a lower-layer interface containing a Trojan horse cannot observe the communication of the layers above.

5.6.1.6 Connected, hierarchical Networks

If the transitions between networks (resp. network levels, and the upper layers needed by every network part from time to time) are realized in such a way that a group of operators can take responsibility for the service quality, the remaining network parts can be designed at will. Carriership and responsibility for service quality can thus be handled almost independently for each network part.

5.6.2 Accounting

Within an open communication network, comfortable and secure accounting with the carrier must be possible. When organizing the accounting, care must be taken that anonymity, unobservability, and unlinkability in the communication network are not undermined by the accounting data.

Basically there are two options: individual accounting per usage (or per subscription), where the paying participant stays anonymous; or flat-rate accounting with a lump-sum payment by every participant, which need not be anonymous, since no interesting accounting data is created.

For individual accounting


• either non-manipulable counters [Pfi1 83, pg. 36f]

• or anonymous digital payment systems (cp. §6)

can be used.

The non-manipulable counters are accommodated in the interfaces. They correspond to today's electricity meters, with the difference that they may be read out remotely (read-only). If the counters are configured so that they are read out at long intervals (once per month), they yield only little person-specific data, so they may even be read out non-anonymously.

The main advantage of this solution is that there is almost no effort for accounting. The big disadvantages are that circumvention of the counter or of the interface, as well as manipulation, have to be detected. But both disadvantages are not that fatal, because (in contrast to traditional money, cp. §6) only one service can be paid with these counters, and for private usage the amounts are quite small. Even today, stamps can be faked much more easily than bank notes.

If anonymous digital payment systems are used, the accounting protocols have to be designed so that nobody can cheat. This is important because anonymity, unobservability, and unlinkability rule out subsequent penal prosecution [WaPf 85][PWP 90][BuPf 90]. For the accounting problem this is easy to achieve: the first of a series of linkable information units gets a digital currency item prepended. The value of this "digital stamp" is credited to the carrier before he continues to transport the linkable information units.

Even with local communication in a DC or TREE network, the carrier can suppress unpaid messages if he has global control resp. controls the root of the tree. In a RING network this is not always possible: of two ring participants A and B, either A can send to B without payment or vice versa. But since transmission frames realizing a duplex channel are as expensive as those realizing a simplex channel, a service requiring a duplex channel cannot be obtained without payment.

The main advantage of the "digital stamp" is that no uncircumventable and non-manipulable counters are needed, and, if a "good" anonymous payment system is used, no person-specific data is created. The disadvantage is the increased communication effort. To keep it small (especially the latency), the carrier should take on the function of the bank of the digital payment system (cp. §6).
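A minimal sketch of the "digital stamp" from the carrier's point of view (token verification is stubbed; any anonymous payment system of §6 would fit):

    def carrier_forward(units, credit_token, forward):
        """units: a series of linkable information units; the first carries the stamp.
        credit_token: credits the carrier and rejects double-spending (stubbed)."""
        first, *rest = units
        if not credit_token(first["stamp"]):
            return                       # unpaid series is suppressed
        forward(first["payload"])        # credited before transport continues
        for unit in rest:                # later units of the series need no stamp
            forward(unit["payload"])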

If a participant demands a verification of the accounting, either the terminal or the network interface (depending on the implementation) creates it for him (and only for him).

With flat-rate lump-sum payments, all problems regarding cheating and almost all effort of the accounting system are avoided.

As soon as enough bandwidth is available, a flat rate covering narrowband communication and the reception of TV will be possible.


5.7 Public Mobile Radio

As a supplement to the discussion of open communication networks over fixed lines, we take a look at the communication between mobile participant stations [Alke 88]. The main focus is on how to solve the arising data protection issues [Pfit 93][ThFe 95][FJKP 95][Fede 99].

The differences to the previous sections are

• that the bandwidth is very limited and will surely remain so, since the electromagnetic spectrum can be occupied only once,

• and that not only traffic data, interest data, and content are person-specific, but also the current location of the mobile station.

As seen in §5.4.4, against technically advanced attackers it is unrealistic to demand that signals sent by different stations be indistinguishable: due to the analogue characteristics of the transmitters this is impossible ...

This is why we assume that a mobile station can always be tracked and identified while it sends^21 (even if engineers designed it with the aim of making this impossible).

In contrast, let us assume that a participant station can be neither identified nor tracked as long as it only receives.

Because radio transmission is easy to eavesdrop on, link encryption between the mobile stations and the first fixed station is needed in parallel to the end-to-end encryption, at least where layers 1 to 3 of the ISO/OSI reference model create person-specific data.

Due to the limited bandwidth and the permanent possibility to identify and track a sending mobile station, the techniques of §5.4.3 and §5.4.5 for protecting the sender are not applicable.

To let all measures for protecting traffic and interest data take place in the fixed part of the communication network, the following technique seems useful (cp. fig. 5.49):

As long as the encoding of the payload data is the same in the mobile and in the fixed participant stations, the technique of transcoding MIXes (cp. §5.4.6) can be employed, provided the mobile stations have enough capacity to perform the cryptographic functions.

If the encoding of the payload data in the mobile stations differs from that in the fixed stations, e.g. to save bandwidth (13 kbit/s vs. 64 kbit/s), the receiver and the communication network could use this for linking. In this case there should be, at least as long as mobile stations are only a small fraction of all participant stations, a link-encrypted channel to the next fixed station; from there on, the usual techniques for protecting traffic and interest data apply. The same holds if the encoding of the payload data is the same for mobile and fixed stations, but the mobile stations do not have enough encryption capacity.

21 This is a matter of effort. In [BMI 89, pg. 297] one can read: "Computer-controlled receiving and processing units enable the signal intelligence services to pick out a certain device from a plurality of devices by its special characteristics."


[Figure: mobile participant T and participant U, connected via MIXes (stations numbered 1-8 in the figure) to the local phone exchange (LPhExch); annotation: "if coding differs in the radio net or encryption capacity is missing".]

Figure 5.49: Connection of wireless users to a MIX network


A station that is only receiving should not have to be locatable by the communication network (even in cell-based systems). If there is a connection request or a long message for this station, a corresponding implicit address should be distributed within the whole network; the station then replies actively and thereby becomes locatable. Since such addresses only need to be a few bytes long, the effort for this protection measure is small. It may even reduce effort, because the administration of the cellular system is simplified.
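A minimal sketch of this paging via implicit addresses (Python; the hash-based address derivation and all names are illustrative assumptions, not the concrete mechanism of any deployed network):

    import hashlib, secrets

    def implicit_address(shared_secret: bytes, epoch: int) -> bytes:
        # An address that only the station knowing shared_secret can recognize.
        return hashlib.sha256(shared_secret + epoch.to_bytes(8, "big")).digest()[:4]

    class MobileStation:
        def __init__(self, shared_secret: bytes):
            self.secret = shared_secret

        def hears(self, broadcast: bytes, epoch: int) -> bool:
            # A purely passive comparison: merely receiving and checking reveals
            # nothing; only the active reply makes the station locatable.
            return broadcast == implicit_address(self.secret, epoch)

    secret = secrets.token_bytes(16)   # agreed upon when the contact was established
    station = MobileStation(secret)
    assert station.hears(implicit_address(secret, epoch=42), epoch=42)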

A measure independent of the previous ones is to keep the user anonymous towards the sending station. This makes sense if sending stations are not firmly assigned to users, as in the GSM network: there every participant gets a chip card (SIM = Subscriber Identity Module) with which he can use any sending station. An example would be a rental car where neither the network nor the car's sending station knows who is actually using it. This technique corresponds to the discussion in §5.3.1 about using public interfaces.

Finally, it shall be mentioned that mobile participant stations should be designed to operate in case of catastrophe without the fixed stations (cp. [Pfit 89, §5 and §6.1]). They should also be able to use other frequencies (AM, FM) for emergency calls.

What has been said about mobile radio communication must be considered when creating a traffic guidance system: distributing information to vehicles is uncritical regarding anonymity, unobservability, and unlinkability; it is very critical if vehicles have to send permanently, as in the project PROMETHEUS [FO 87, Walk 87].

Following the arguments of §5.1, a traffic guidance system should be designed to recognize vehicles as such, but neither their type nor the specific instance; otherwise this kind of traffic data is as unprotected as the traffic data covered in the rest of §5.


Reading recommendations

• Encryption: [SFJK 97], [PSWW 98]

• Protecting traffic and interest data: [Pfit 89]

• MIXes: [FrJP 97], [FJMP 97], [FGJP 98], [JMPP 98]

• Classification into a layer model: [SFJK 97]

• Accounting: [Pede 96], [FrJW 98]

• Public mobile radio: [FeJP 96], [FJKPS 96], [Fede 99]

• Up-to-date literature:

– Computer & Security, Elsevier

– ACM Conference on Computer and Communication Security, IEEE Symposium on Security and Privacy, ESORICS, IFIP/Sec, VIS

• Notes on current research papers and comments on current developments:

– http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher;[email protected]

– Datenschutz und Datensicherheit DuD, Vieweg-Verlag


6 Value Exchange and Payment Systems

Following a short discussion of the degrees of anonymity towards participants, it will be shown how the predictability of legal decisions can be achieved for business processes under the condition of anonymity. Two examples will be discussed in detail: first, how value exchange, i. e. the exchange of goods against money, can be transacted secure against fraud; thereafter, the realization of digital payment systems will be covered.

This chapter is a revised version taken from [PWP 90].

6.1 Anonymity of participants

Obviously it is pointless to demand that business transactions be unobservable to their participants. To prevent the recording of unnecessary personal data, the aim is instead to conceal the identity of a participant from his business partners, i. e. to keep him anonymous.

There are three criteria for the degree of anonymity:

The first criterion is based on the chosen attacker model (see §1.2.5): against which business partners is anonymity needed, and what is the chance of achieving it even if several partners and non-participants pool their information? In particular, this includes the question whether specially designated instances exist which can lift anonymity at the request of one business partner in case of disagreements about the business. Dependence on such instances should be avoided: if a few instances alone can lift anonymity, this gives them too strong a position, while lifting anonymity becomes too unreliable if many instances are needed (the borderline case would require the entirety of users).

Secondly, it can be asked, for a specific attacker, among how many possible acting participants the truly acting participant is hidden. In open systems, all users should be possible acting participants for all actions. In reality this can be limited by the capabilities of the communication network (see §5.4), so that the acting user can hide only among many users, but not among all, if the business partner and the network provider cooperate. It has to be borne in mind that the number of possibly acting users must be big enough that the damage caused by the loss of anonymity of one user cannot be inflicted on all possibly acting users without a bigger disadvantage than advantage for the damaging party [Mt2, 16]. For example, if one person out of 10 is a client of a publisher of material hostile to the constitution, the constitutional loyalty of all 10 persons gets questioned, which could make it difficult for them to get employment in some areas.

Thirdly, it is not enough to pay attention to anonymity for isolated actions; it also has to be taken into account how the attacker under consideration is able to catenate businesses or parts of businesses, see §5.1.2.


[Figure 6.1: Classification of pseudonyms according to personal relation; pseudonyms divide into personal pseudonyms (public personal pseudonym, non-public personal pseudonym, anonymous personal pseudonym) and role pseudonyms (business-relational pseudonym, transaction pseudonym).]

In particular, catenation occurs when business partners know that two actions were carried out by the same person. For reasons of confidentiality, catenation has to be kept at a low level. On the other hand, catenation is sometimes wanted in order to achieve the predictability of legal decisions, especially between transactions of the same business or for repeated communication with one credit institute for the sake of authentication.

A user's level of anonymity is determined not only by the distinguishing marks the partner gets to know automatically, such as the type and date of a business, but first of all by freely chosen marks such as identification numbers or testing keys of digital signatures (§3.1.1.2), so-called pseudonyms.
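As a minimal sketch (Python with the `cryptography` package; Ed25519 is chosen purely for illustration, not prescribed by the script), a pseudonym can simply be the freely generated testing key of a signature key pair:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    signing_key = Ed25519PrivateKey.generate()          # kept by the pseudonym's owner
    pseudonym = signing_key.public_key().public_bytes(  # a freely chosen mark that
        Encoding.Raw, PublicFormat.Raw)                 # reveals nothing about him

    declaration = b"I order item 17 under this pseudonym."
    signature = signing_key.sign(declaration)

    # Anybody can test that the declaration belongs to the pseudonym;
    # only the holder of signing_key could have produced the signature.
    Ed25519PublicKey.from_public_bytes(pseudonym).verify(signature, declaration)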

From a practical point of view, a useful classification according to the strength of the anonymity realized is given in figure 6.1.

A pseudonym is called a personal pseudonym if its owner uses it for several business relations over a longer period; it is thus a name substitute. With respect to the possibilities of catenating a personal pseudonym with the person using it, we distinguish three types.

Considering the time a pseudonym is used for the very first time: for public personal pseudonyms the assignment "person to pseudonym" is, at least in principle, generally known (e. g. telephone numbers); for non-public personal pseudonyms this assignment is known only to a small number of participants (e. g. secret telephone numbers, account numbers held without giving the name); and for anonymous personal pseudonyms this assignment is known only to the owner of the pseudonym (e. g. biometric attributes or even his DNA). With personal pseudonyms an observer persistently receives data about the owner of the pseudonym, and after some time it will be possible to identify him. Therefore each personal pseudonym is potentially a personal label.


This disadvantage can be overcome by using action-related pseudonyms, which are assigned not to a person but only to a person's present role during a transaction. Business-related pseudonyms are action-related pseudonyms used for several transactions (e. g. account numbers used for many postings on an account, or a stage name). Transaction pseudonyms are used for only one transaction, e. g. passphrases for anonymously placed ciphered advertisements. Using action-related pseudonyms, the different parties cannot catenate the information collected about a user's pseudonyms simply on the basis of pseudonym equality, but it remains possible to detect correlations of time or of the amount of money of a transaction. Hence there is still a threat of becoming identified if business-related pseudonyms are frequently used and enough information is collected. From the point of view of confidentiality, transaction pseudonyms should be used whenever possible.

For a detailed view of pseudonyms the following notation is needed: that a person X acts during transaction t in role R (e. g. customer) towards another person in role S (e. g. service provider) is written as p^S_R(X, t). Attributes which are not needed, e. g. the transaction for business-related pseudonyms, can be omitted.

Communication between partners kept anonymous from each other in this way can easily be achieved through a communication network protecting traffic data. Such a network allows messages without sender information to be sent to recipients specified only by arbitrary pseudonyms, without revealing the location or identity of the participants. There is no restriction on the application of cryptographic systems, because the keys of asymmetric systems, for encryption or key exchange, can refer to a pseudonym instead of an identifiable user.

Having recalled in this section that anonymity is necessary and technically realizable to obtain provable data protection in open digital systems, the following sections cover how the wanted predictability of legal decisions can be achieved without giving up anonymity, i. e. during authentication.

6.2 The predictability of legal decisions for business processes protecting anonymity

This section determines what must not be forgotten when designing business procedures so as to guarantee the predictability of legal decisions in spite of anonymity, without taking a closer look at current legislation (for that, see [Rede 84, Clem 85, Kohl 86, Rede 86]).

The discussion follows the order of the steps of a legal transaction, together with a possible settlement of damages. Much of what follows also applies to non-anonymous business processes in open digital systems, because there too business partners neither know each other from the beginning nor have the possibility to identify each other directly; anonymity (though not unobservability) applies.


6.2.1 Declarations of intention

6.2.1.1 Anonymous declaring or receiving of declarations

The possibility of making or receiving declarations anonymously is provided by the communication network protecting traffic data.

If mutual declarations are to be made under the condition that either all participants sign or none, this can be handled as a complete business process for the exchange of signed declarations and finalized like an exchange "goods against money", which is described later in this chapter. If wanted, special contract-signing protocols [BGMR 85, EvGL 85] can be applied, allowing a nearly simultaneous exchange of declarations without using third-party services. However, contract-signing protocols strongly increase the cost of communication.

6.2.1.2 Authentication of declarations

For making a declaration it is often necessary to prove an authorization. Digital signatures are an essential tool for doing this in communication networks.

6.2.1.2.1 Digital signatures

Instead of an autograph signature, so-called digital signatures (§3.1.1.2) ensure for legal acts in open digital systems that declarations are made by specific persons (or groups of persons, etc.) associated with a certain pseudonym.

Basic demands on a digital signature are:

1. Nobody except the owner of the pseudonym should be able to give a pseudonym-specific signature for a document.

2. Everybody is able to test whether a declaration carries a pseudonym-specific signature.

The first demand is relative to the will of the user: certainly it is not possible in digital systems to hinder a user from allowing other users to act under his pseudonym. This corresponds to the long-existing possibility of giving somebody arbitrarily many blank autograph signatures, which at most damages the user himself, because it can be assumed that the given signatures are taken to originate from the owner of the pseudonym.

The simple approach of trusting in the uniqueness of pseudonyms and submitting them together with the declaration, like usual signatures, is not sufficient according to the requirements mentioned above, because everybody who has received a signed declaration from a user can copy the submitted pseudonym onto arbitrarily many declarations. In particular, the sometimes discussed possibility of using a digital copy of the autograph signature for the authentication of declarations belongs to this insufficient approach. For a digital document it is easily possible to separate an autograph signature from the document, even if it is written across the paper-based version of the declaration. (The ideas from [Kohl 86] cannot be applied, because facsimile signatures, as on bank notes, are combined with special paper forms, which cannot be sent in digital systems. Furthermore, given today's copying technology, the printing on such forms is no more secure than autograph signatures; only the structure of the paper itself might make a difference.)


[Figure 6.2: Transfer of a signed message from X to Y; functional view (left): sender X signs document N, producing sA(N); receiver Y tests the signature, tA(N, sA(N)) = ok?; graphical view (right): the document with a signet.]

Therefore, even for the non-anonymous case, a digital pseudonym different from the name is required (a public personal pseudonym), which is one reason to understand this case as a special case of the anonymous one.

A solution is provided by a digital signature system.

Figure 6.2 shows the transfer of a signed message from user X to user Y. X uses the pseudonym pS (S for sender); the left side shows a functional view, the right side the graphical view used in the following, representing a message as a document and a signature as a signet.

For some applications it is useful to demand additional properties of digital signature systems:

The recipient of, e. g., a GMR-signed message can show it around to convince other participants of the authenticity of the message; the signer cannot prevent his signature from being shown around. This shortcoming is remedied by undeniable signatures (§3.1.1.2 and §3.9.4): the supposed signer has to be asked before other participants can convince themselves of his signature, but he is not able to deny signatures he actually gave.

Every signature system (like every asymmetric system of secrecy) can be broken by putting enough effort into it. In common signature systems this risk lies with the signer. In a fail-stop signature system (§3.9.2) the risk can be shifted to the signature's recipient: the supposed signer is able to prove to a third party that a signature is forged.

Another variant needed in what follows is blind signatures, which will be introduced in §6.2.1.2.2.

Digital signature systems should not be confused with so-called digital identification systems.


While in digital signature systems the recipient of a signed message is able to prove the authenticity of a signature to a third party, digital identification systems only provide a way for the recipient to identify the declaring party as the owner of a specific pseudonym at the moment the declaration takes place [WeCa 79, Bras 83, FiSh 87, Simm 88]. An identification system is therefore a symmetric authentication system for just one message.

In the simplest case the declaring partner submits a pseudonym (possibly one with a special structure) together with the message, or encrypts the message using a symmetric cryptographic system in which only the two communication partners share the key. After receiving the message, a third party cannot check the authentication for authenticity. Therefore identification systems are not suitable for authenticating declarations for the purpose of the predictability of legal decisions.
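A minimal sketch of why such a symmetric authentication cannot serve as a signature (Python standard library; the message and all names are illustrative):

    import hashlib, hmac, secrets

    shared_key = secrets.token_bytes(32)   # known to BOTH communication partners

    def identify(message: bytes) -> bytes:
        return hmac.new(shared_key, message, hashlib.sha256).digest()

    tag = identify(b"I am the owner of pseudonym p")
    # The recipient can check the tag; but, holding the same key, he could
    # equally well have computed it himself, so presenting (message, tag) to a
    # third party proves nothing about who authenticated the message.
    assert hmac.compare_digest(tag, identify(b"I am the owner of pseudonym p"))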

6.2.1.2.2 Types of authentication

Depending on where the authority to make a declaration originates, we distinguish between self-authentication and third-party authentication.

Self-authentication means referring to an own declaration made earlier while making a new declaration, e. g. ordering goods with reference to a previously requested binding offer.

The declaring partner wants to show that both declarations were made by the same person. This is a wanted catenation of different declarations and can be achieved by using the same digital pseudonym, and the corresponding signature, for all declarations. In classic systems the same name and autograph signature would have been used in any case, possibly together with an additional identification number to make the catenation of the different declarations of one business process explicit.

Third-party authentication means that the declaring participant derives his authority to declare from a previous declaration made by a third party.

In this case the declaring partner needs a document, such as an attestation or a cash guarantee declaration, to show that the owner of a pseudonym is authorized to make a certain declaration. This document is submitted together with the declaration itself.

The issuer of the document can in turn be authorized by other documents.

Without further measures, third-party authenticators, e. g. credit institutes (given that they have to provide a guarantee for each purchase), would be able to derive unwanted catenations in collaboration with the recipient of the declaration. This can be prevented by using credentials [Chau 84], which make it possible to have an authentication issued for a pseudonym different from the one used for the declaration.

Hence it must be possible to recalculate the authentication for the pseudonym to be used without the cooperation of the issuer. On the one hand, nobody apart from the declaring partner should be able to see a connection between the two pseudonyms; on the other hand, he should not be able to recalculate an authentication for pseudonyms other than his own, e. g. not for a friend's pseudonyms.

The first such system, based on RSA, was suggested in 1985 [Chau 85, ChEv 87]. A specific signing function is assigned to each possible authentication. To issue an authentication for a pseudonym, the pseudonym is signed with the corresponding signing function. This implies that the possible authentications have to be strongly generalized.



If it is sufficient to transform an authentication at most once, the technique allows the pseudonym for which the authentication is to be recalculated to be chosen suitable for signing [Chau 89]. With multiple use of authentications, and the multiple recalculations that come with it, the resulting pseudonyms are not suitable for signing and are therefore only usable in identification systems.

While anonymity is completely protected (information-theoretically secure), the authentication itself is at most as secure as RSA.

The specific signature system from [Chau 89], whose security is likewise unproven, can be used correspondingly.

In [Damg 88] a provably secure system for recalculating authentications was suggested, based on an arbitrary cryptographically secure signature system. Due to its expense it is not usable in practice.

Should RSA be broken while other asymmetric cryptographic and signature systems are still available, a variation of the application of a communication network protecting traffic data suggested in [Chau 81, p. 86] can be used according to the following scheme: if n users come together who want to obtain identical authentications (signatures) for pseudonyms p1, ..., pn, the organization asked to provide these checks that all n users are authorized to get the wanted signature (e. g. that each of them has already made known to the organization, together with the request, an authorization for another pseudonym). Only then does the organization allow every user to send exactly one message containing his pseudonym over the communication network protecting traffic data, which has to be established logically on top of an already existing physical network. This takes place in arbitrary order. The communication network guarantees that nobody can obtain information about the assignment "pseudonym to user". Now the organization signs all published pseudonyms.

Any signature system can be used. While the system for recalculating authentications provides information-theoretic security, hiding a user X among all others who receive the same confirmation as X (as long as he has not used it), the latter scheme only provides him the security that the logically realized communication network protecting traffic data can provide to the n users receiving the confirmation simultaneously.

A different approach to protecting users from unwanted catenation through third-party authentication is based on the assumption that secure and unanalyzable devices exist. Permissions of users could then be added and deleted through communication with other secure devices, and the secure device would confirm registered permissions using system-wide provable signatures.

In contrast to ordinary computers, this application requires the device to work with secret data, e. g. cryptographic keys, which must be concealed from the user. Otherwise the user would be able to change permissions inside the device or even to clone it: the clone would appear unchanged to other users, but could hold additional or different permissions.

Still, the device has to be controllable by the user it serves; hence it should be in his physical domain. This is widely assumed in the literature [Riha 84, Cha9 85, Davi 85, Riha 85].



The existence of devices which fulfill both requirements is questionable, because the user is able to manipulate and observe the device, especially while it is working. Even costly attacks could prove highly profitable, because analyzing one device enables the analyst to build many equivalent ones. Every processing of information goes along with a transport of energy inside the device. To prevent the measurement of these energy transports (e. g. through electromagnetic radiation), the device has to be shielded sufficiently. Because a user could try to analyze his device by destructive measurement, the device has to detect when its safeguards are tampered with from the outside. In this case (and upon malfunctions due to internal errors) the device has to destroy itself, i. e. delete its secret information, comp. §2.1.2.

This leads to a competition between the constructors of secure devices and the developers of measurement technology, which in the long run can probably not be won by either side.

Nevertheless, chip cards are used in everyday life as secure devices. Possible applications are identification cards for using databases or public phones (where the permission is set to n uses and decreased at each use) or for digital payment systems (see §6.4).

In our opinion, devices which have to be secure against their owners should be used as little as possible.

6.2.2 Other actions

It is only worthwhile to make and receive declarations about legal transactions anonymously if all other actions in that context also preserve anonymity. Anonymity is already supplied by the communication network protecting traffic data if the actions in question are realized by sending information via the communication network, e. g. delivering database query results. In an open system it should also be possible to transfer money digitally, e. g. in the form of several consecutive declarations, see §6.4. This already covers the most important actions of legal transactions.

If some of the actions are not done via the communication network (e. g. the delivery of material goods chosen and ordered via the open communication network), anonymity cannot be kept completely, and in part that would not even be desirable. Therefore, the following explanations are limited to declarations and actions within the communication network.

6.2.3 Securing of evidence

Based on the possibilities for authentication explained in §6.2.1, it must now be explored how sufficient evidence of the transmission and reception of a declaration of intention can be secured.

As in §6.2.1, no major new problems occur in comparison to the public case.

The investigation can be divided according to two criteria:


First, the aim of the argumentation can be considered. The aim could be to prove the transmission or the reception of a statement. But it could also be the opposite, i. e. to prove that the transmission or the reception has not taken place.

The second aim can be reached by simply achieving the first one completely: if it cannot be shown that a statement was sent or received, the statement can be regarded as not sent or received. Accordingly, the second aim will not be pursued explicitly here.

As a second criterion it can be considered who is interested in the argumentation: the sender or the receiver. Often the fact that a statement has been transmitted will be advantageous for only one of the two parties, so that only this one is interested in the proof at all and has to collect evidence. Finding a proof is easy for the receiver of the statement if the statement is a document, i. e. it carries a digital signature. Then already the presentation of the statement proves that the owner of the digital pseudonym belonging to the signature made that statement.

In many cases it is not necessary for the sender to collect evidence of the transmission, or even of the reception, of the statement by the recipient, even if the fact that the statement was made is to his advantage.

If the exact time of transmission is not important, it is sufficient to repeat the statement, if necessary in court, as soon as its transmission is doubted. This holds, for example, for the transactions of everyday life taking place in open communication networks. Likewise, the sender does not have to collect evidence that he has transmitted information, be it goods or messages connected with the transfer of digital money. In contrast to the delivery of material goods or paper documents, the supplier still stores the delivered information; and the receiver obtains no additional advantage from repeated deliveries, because follow-up deliveries only represent copies of the original delivery, which the receiver can easily generate himself.

Should it be necessary to prove the reception of a statement by the recipient, there are possibilities similar to those for public statements.

One possibility is to demand a signed receipt, whose reception can be proven easily (see above). If the supplier does not receive the receipt within a defined time limit, it can be enforced in court or provided by a court. For such cases the following alternative is also necessary. This second possibility consists of so-called blackboards in the open communication network, at which potential receivers of statements-against-receipt have to look out for such statements regularly. As a computer can do this job, it is less of a burden than the obligation to empty one's mailbox daily.

The fact that a statement for the user of a pseudonym pE could be found at a blackboard can be proven by witnesses. The statement has to be encrypted with a key known only to its receiver, so that the witnesses do not get to know its content. Assuming there exists a key cE of an asymmetric system of secrecy and a pseudonym pE assigned to it in a checkable way (e. g. through a public trust center, or cE being a pseudonym itself), the declaring partner is able to prove that the statement arrived at the user of the digital pseudonym pE by simply presenting the pseudonym pE and the unencrypted statement (and, for indeterministic encryption, if applicable the random number used, see §3.1.1.1, annotation 2).


At best, a public institution is used as witness, publishing and signing regularly (e. g. daily) a list of all incoming encrypted statements; this does not violate the recipient's anonymity, and the witness is controllable as well.
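A minimal sketch of this proof of delivery (Python, toy textbook ElGamal with illustrative parameters and no padding, purely to show the principle): the sender remembers the randomness used for encryption and can later let anybody re-encrypt and compare with what the witnesses saw.

    import secrets

    P = 2**127 - 1            # a Mersenne prime as toy modulus, illustrative only
    G = 3                     # assumed generator, illustrative only

    def encrypt(c_E: int, m: int, r: int) -> tuple[int, int]:
        """Textbook ElGamal under the recipient's key c_E: (G^r, m * c_E^r) mod P."""
        return pow(G, r, P), (m * pow(c_E, r, P)) % P

    x = secrets.randbelow(P - 2) + 1     # recipient's secret key
    c_E = pow(G, x, P)                   # public key checkably assigned to pE

    r = secrets.randbelow(P - 2) + 1     # kept by the sender as later evidence
    posted = encrypt(c_E, 123456, r)     # ciphertext seen by the witnesses

    # Later proof: presenting (c_E, statement, r) lets anyone re-encrypt and compare.
    assert encrypt(c_E, 123456, r) == posted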

6.2.4 Legal investigations

If the two engaged parties find themselves in a situation in which one of them doubts the legality of the state of affairs, the substance of these doubts has to be investigated further.

The user who initiates the investigation does not have to reveal his identity. For the moment it is sufficient to establish whether, for example, the owner of a certain pseudonym has really been deceived. It should be observed that not everybody can be forced to take part in the proceedings, as the identities of all persons possibly involved must not be made public in advance in any case (that could lead to abuse).

Accordingly, all necessary evidence should be held by users whose participation is guaranteed. This includes the party who initiated the proceedings and all public persons involved, for example notaries public, but also other anonymous persons in case they could suffer damage from non-participation. An example: an anonymous database is sued because it received money but did not provide satisfactory information. This database is forced to present the transmitted response, as otherwise it would be assumed that nothing had been sent, and the database would be sentenced. (Compare §6.2.5 to see that this is a real threat. Additionally, note that the summons can be sent to the database securely under its pseudonym within the communication network protecting traffic data, compare §6.2.3.)

6.2.5 Settlement of damages

If it has been determined that an illegal state has occurred, a legal state has to be re-established, if necessary analogously to today's compulsory execution.

In general the sentence is a fine for a user. The user's property should be big enough, the connection between this property and the established claims should be clear, and access to the property, or to the specific part of it on which the claims are made, should be possible.

Whether it can be ensured in advance that a user has sufficient property does not depend on anonymity, but on considerations of principle concerning the respective legal transactions; at present it cannot always be guaranteed.

The fulfillment of the remaining requirements is the main difference between anonymous and public systems:

If anonymity is not wanted, the connection can be established through the name (and other data for unambiguous identification). This means that the administration of the property and the finding of evidence can be organized completely independently of each other, as all property is associated with a name and all evidence refers to people known by name.


This independence is lost if anonymity has to be protected. That does not mean that all possibilities to settle damages are gone, but only that business transactions become more complicated. (Already when collecting evidence, one always has to look for a connection to the property which might have to be accessed.)

For that reason the examination of the parts of a business transaction carried out in this chapter does not automatically lead to standardized protocols for whole transactions. It only provides strategies that still need to be combined skillfully.

The rules for individual cases mainly depend on who is obliged to do what, and for how long, based on the respective legal transaction.

If the settlement of damages becomes necessary because of, e. g., a large credit, the precondition for guaranteeing that the property can be accessed will always be the transfer of a loan security in the form of material goods, e. g. real estate. This can also be done anonymously by using an authorization mechanism (comp. §6.2.1.2.2): for example, a land registry office could certify an appropriate confirmation of the value of the property and note the property down as mortgaged. However, business transactions of this size are rather unusual for open systems.

A lot more typical are business transactions which are purchases of goods, especially information, of low value. Here somebody merely commits himself to pay a certain small amount of money within a short period of time after the delivery of the goods. The purchase of goods for a small amount of money will be examined in a separate section (§6.3), because of its importance for open digital communication networks and in order to have room for the presentation of complete business transactions.

The performance of services, like the administration of an electronic mailbox, can (with only two exceptions) be regarded as a succession of many small purchases. For example, the administration of a mailbox could be accounted for whenever the user wants to empty it; the transfer of the content would then correspond to the delivery of goods.

The two above-mentioned exceptions are services that have to be used in order to be able to make purchases via the open system at all: the transfer of messages through the communication network protecting traffic data, and the provision and administration of a (preferably anonymous) digital payment system.

The communication network could be paid for by a flat rate or by adding money ("digital stamps") to each message, comp. §5.6.2, to mention two examples. In the first case, users who do not pay could be excluded from the communication network; in the second case, unpaid messages would not be delivered.

As the provider of the communication network is not anonymous, high-handed damaging of users would have to be pursued by law outside the system, as is done today.

The services of the anonymous digital payment system can be treated likewise: the banks involved can retain their fees directly, and the customers can take the banks to court if necessary (this can be done through the communication network to protect the customer's anonymity).



6.3 Fraud-secure value exchange

From the point of view of legal security, cash purchases in the classical sense can be performed with full anonymity. The business partners being at the same location ensures that either both goods and money or neither change possession, and therefore (apart from subsequent reclamations) there is no need for a settlement of damages.

Such simultaneity is not given in communication networks. Here situations arise in which one partner has an advantage over the other: breaking the connection in a suitable state creates outstanding claims which may have to be enforced.

The following two sections describe two concepts for preserving the possibility of enforcing such claims. We favor the second one.

6.3.1 A third party guarantees that somebody can be made public

The first concept guarantees the possibility of making debtors public in case they do not pay their debts. This leads back to the non-anonymous case and allows access to all the debtor's possessions.

Technically this is achieved using a special kind of third-party authentication: the participants show their identity to a non-anonymous third party they all regard as trustworthy. This instance certifies that it is able to identify the owner of a specific pseudonym should the occasion arise (non-public personal pseudonym, comp. §6.1).

The authentication can be based on one third party [Herd 85] or on a sequence of third parties [Chau 81]. The principle is shown in figure 6.3 for one third party per user (but possibly different ones for different users). Let X and Y be the users and A and B the non-anonymous instances capable of identifying X and Y, and let

• pG(X, g) be the pseudonym used by X during business g with Y,

• pG′(Y, g) the pseudonym used by Y during the business (but in a different role than X),

• pA and pB the public personal pseudonyms of A and B.

X is identifiable by A because X has his pseudonym pG(X, g) signed by A before it becomes usable; analogously, Y needs to have his pseudonym pG′(Y, g) signed by B. To guarantee that A always identifies the right person, X has to send the message "The pseudonym pG(X, g) belongs to X", signed by himself, to A before receiving his certificate from A (this signature could be his autograph signature or, more securely, a digital signature belonging to a special personal pseudonym).
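A minimal sketch of this certification step (Python with the `cryptography` package, Ed25519 standing in for an arbitrary signature system; the message formats and all names are assumptions):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def pub(key) -> bytes:                         # raw testing key, used as pseudonym
        return key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    identity_key_X = Ed25519PrivateKey.generate()  # special personal pseudonym of X
    business_key_X = Ed25519PrivateKey.generate()  # pG(X, g), fresh for business g
    key_A = Ed25519PrivateKey.generate()           # trustworthy third party A

    # X binds the business pseudonym to himself, signed with his personal pseudonym;
    # A files this away so that it can identify X should the occasion arise.
    binding = b"The pseudonym " + pub(business_key_X) + b" belongs to X"
    binding_signature = identity_key_X.sign(binding)

    # Only then does A certify the business pseudonym; Y checks A's signature
    # before accepting declarations signed with pG(X, g).
    certificate = key_A.sign(b"identifiable on demand: " + pub(business_key_X))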


[Figure 6.3: Authenticated anonymous declarations between business partners who could be made public; user X identifies himself to the trustworthy third party A (pseudonym pA), who confirms "I know the owner of pG(X, g)"; analogously user Y to B (pB) for pG′(Y, g); the confirmed documents for pG(X, g) and pG′(Y, g) are then exchanged between X and Y.]


If the third party is trusted not only regarding authentication, but also regarding the predictability of legal decisions, it is sufficient to use an identification system (comp. §6.2.1.2.1); e. g. X could identify himself to A by his fingerprint. The same holds analogously for Y towards B.

Once X and Y have exchanged their signed pseudonyms, they know that, should fraud arise during a legal act, A is able to identify X, and B is able to do so for Y.

Later it could happen that X complains to B about Y not continuing the communication even though he is obliged to do so. For this, X hands over to B the messages from Y which are signed with pG′(Y, g), proving the previous existence of the communication process. Thereupon B asks Y to proceed with the communication, with the difference that now all messages from Y to X are sent through B. Should Y refuse to proceed, B publishes Y's identity and the investigation can continue as in the non-anonymous case.

If X mistakenly accuses Y of not having sent all messages, the worst that can happen is that Y has to send the remaining messages a second time. This time they are forwarded by B, hence X cannot pretend that he did not receive them.

The same applies if Y complains about X to A.

Advantages (+) and disadvantages (-) of this solution:

- Who controls A and B so that they do not publish the identities of X and Y without authorization? A and B have to be completely trustworthy in terms of confidentiality, but not in terms of security against fraud, because they are not able to change the assignment "person to pseudonym".

- X and Y are able to cause damage they cannot compensate. E. g. X could order and use a service provided by Y even though he does not possess enough to cover his debts. Even by publishing the identity of X, the damage done to Y cannot be compensated.

- Considering the cost of communication, this solution suggests personal pseudonyms for pG(X, g) and pG′(Y, g). As shown in §6.1, this can lead to a gradual revealing of the partners' identities.

+ When personal pseudonyms are used, it is not necessary for A and B to participate in every single business between X and Y.

In [Chau 81, p. 86] it is described how the ability to publish someone's identity can be shared among several third parties. The given procedure allows someone's identity to be revealed only if all third parties cooperate; a further development tolerating the failure of some third parties is given in [Pfi1 85].
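A minimal sketch of such sharing (Python; simple n-of-n XOR splitting, illustrative only; it tolerates no failing third party, which is exactly what [Pfi1 85] improves on):

    import secrets

    def split(identity: bytes, n: int) -> list[bytes]:
        """Split so that only the XOR of ALL n shares reveals the identity."""
        shares = [secrets.token_bytes(len(identity)) for _ in range(n - 1)]
        last = identity
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def reveal(shares: list[bytes]) -> bytes:
        out = shares[0]
        for s in shares[1:]:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    shares = split(b"X = Alice Example", 5)         # one share per third party
    assert reveal(shares) == b"X = Alice Example"   # all five together succeed
    # Any proper subset of shares yields only random-looking bytes, so no
    # coalition short of all five third parties can reveal the identity.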

Even with the anonymity increased by using several third parties, there remains the disadvantage that the existence of sufficient possessions cannot be guaranteed.


6.3.2 Trustees guarantee anonymous partners fraud protection

To prevent damage from uncovered debts, a simple procedure can be applied which even works without making anyone public: money is deposited with a non-anonymous trustee in such a way that claims can only arise towards him [Pfi1 83, p. 29-33], [Waid 85, WaPf 85]. The actual business partners can stay anonymous towards each other and towards the trustee. Should the trustee exceed his authority, he can, since he is not anonymous, be sued without the actual business partners revealing their identity.

Instead of a direct exchange "goods against money", the participants inform the trustee of the amount of money and the kind of goods they want to receive, and hand over money and goods, in exactly this order: if the trustee received the goods but not the money, he would not be able to compensate the supplier for his damage. The trustee checks whether what he has received meets the expectations of his clients. According to the result of this check, he either passes on what he received or aborts the business. At the latest when the trustee has received the goods, the ordering customer can no longer cancel the business.
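A minimal sketch of the trustee's "money before merchandise" bookkeeping (Python; the class and method names are illustrative assumptions, not a concrete protocol):

    class Trustee:
        """Enforces the order: order -> money deposited -> goods deposited -> check."""

        def __init__(self):
            self.money = None
            self.goods = None
            self.expected = None

        def deposit_money(self, amount: int, expected_goods: str) -> None:
            self.money = amount
            self.expected = expected_goods   # what the customer says he ordered

        def deposit_goods(self, goods: str) -> None:
            if self.money is None:
                raise RuntimeError("money must arrive first; otherwise the "
                                   "supplier's damage could not be covered")
            self.goods = goods               # from now on the customer cannot cancel

        def settle(self) -> bool:
            # Pass on what was received only if it meets the client's expectation;
            # otherwise abort the business and return money and goods.
            return self.goods == self.expected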

To finalize the business as fast as possible, the trustee must be located within the open digital communication network. This could, for instance, be the network provider, who already offers similar services.

With X and Y again as users where, arbitrarily chosen, X is the customer and Y the supplier of a good (a piece of information), figure 6.4 shows the whole procedure; the numbers of the documents define the order in which the declarations are made. Let:

• pK(X, g) the pseudonym of X as customer in business g

• pL(Y, g) the pseudonym of Y as supplier in business g and

• pT the public personal pseudonym of the non-anonymous trustee T

The illustration of the transfer of money is strongly simplified: of course money cannot be represented as a single document or declaration, since one could produce copies of it to increase its amount (comp. §6.4).

As described in §6.2.3, it is not necessary to secure evidence of the actions performed by the respective active partner (depositing and forwarding or returning the money; delivering and forwarding the merchandise). The reception of the order can be proven by simply presenting the document (first by the trustee, then by the supplier).

In terms of data protection this solution is to be preferred, but it has a disadvantage: the trustee is asked to perform certain checks of the merchandise which he sometimes is not, and sometimes should not be, able to perform.

To overcome this disadvantage, X and Y can agree on a complaint period. During this period the trustee T keeps the money but delivers the merchandise (figure 6.5). This way the trustee only has to verify that the money is genuine and no longer has to check the merchandise, hence he obtains no information about it.


[Figure 6.4: Fraud protection for completely anonymous business partners through an active trustee who is able to check the merchandise; client X, supplier Y and trustee T (pseudonym pT); documents: [1] order of the client, [2] "supplier is ..." plus money for the supplier (money is deposited), [3] delivery to the trustee, [4] delivery checked by T, [5] money to the supplier and delivery to the client.]


[Figure 6.5: Fraud protection for completely anonymous business partners through an active trustee who is not able to check the merchandise; same flow as in figure 6.4, but with an additional waiting (complaint) period before step [5].]

If customer X is not satisfied with the merchandise, e. g. because the query reply was faulty, he can stop the trustee from transferring the money, but then has to prove that the merchandise was indeed faulty.

If it can be ensured that a court, if consulted, decides upon a claim fast enough, X cannot do much damage to Y, e. g. a loss of money due to unpaid interest. Even this loss of interest can be prevented if T puts the money out at interest and both X and Y are required to transfer to T, as security, the difference between this interest and the usual interest paid for debits. Both securities are later given to the winner of the trial. To force X and Y to pay this security, or else to forgo a complaint, the whole amount is given to the other party should one of them refuse to pay the due interest security [BuPf 90].

If X and Y just want to exchange goods, with the trustee T neither able nor permitted to check them, the business has to be split up into two coordinated businesses of the kind "merchandise against money", in order to stick to the rule "money before merchandise" for the exchange. Coordination is achieved by T not handing over money to either X or Y before both partners have declared that they are satisfied with the merchandise they received, or the case has been decided in court [BuPf 90].



This extended solution using a trustee can be rated as follows:

- The trustee always has to participate actively in a business.

+ It is not necessary for the direct participants to trust the trustee, because he is controlled by both participants, who are able to sue him for any kind of failure or fraud. Enough evidence exists at every point, and the enforcement of claims against T can be secured analogously to the non-anonymous case.

+ All claims of X against Y, and vice versa, can be satisfied by falling back on the values deposited with T.

+ The participants in a business can use transaction pseudonyms without additional cost.

+ The direct participants in a business remain anonymous.

If the service provider has no interest in staying anonymous, e. g. a newspaper publisher, he can be chosen as the trustee. This results in the same solution, where the customer pays immediately when ordering and the supplier has to refund if he is not able or not willing to deliver the ordered merchandise.

All described concepts leave one problem open, namely how to represent money in open digital systems and how to transfer it anonymously. This is covered in the following section.

6.4 Anonymous digital payment systems

A payment system helps its users to transfer money in a secure way.

Independently of the particular payment system, money is just a quantified amount of rights. To discuss security and anonymity in payment systems, it is not necessary to define what and where the money is; in the following we will talk about money as rights.

Whether these rights are based on an actual balance or on a limited loan is unimportant for the security within the payment system; of course, for a loan there has to be some coverage outside the payment system.

A payment system is secure (with respect to availability and integrity) if

• a user is able to transfer received rights,

• one loses a right only if one wants to,


• when the payer decides on a recipient, only this recipient can receive the transferred rights,

• one is able to prove a performed transfer of rights to a third party (receipt problem) if this becomes necessary, and

• users are not able to increase their amount of rights, even if they cooperate.

If the users are not trusted completely, the minimum that has to be checked is whether the sender owns the right he is about to transfer. Because we only consider payment systems in which the transfer is accomplished through an exchange of digital messages (i. e. through a communication network), and because digital messages can be copied while rights must be deleted after their transfer, a simple check of a document is not sufficient. A witness is needed to guarantee the current validity of a right.

To enable the witness to fulfill this task, he must be informed about every single transfer of rights; even if the business partners trust each other completely, there must be no possibility of transferring rights without confirmation by the witness.
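A minimal sketch of the witness's bookkeeping against double transfer (Python; illustrative only, since the actual scheme of §6.4.1 works with pseudonymous, recalculable authorizations rather than a plain table):

    class Witness:
        """Tracks which pseudonym currently holds each right, so that a right
        can be passed on but a copied transfer message is worthless."""

        def __init__(self):
            self.holder = {}                   # right id -> current holder pseudonym

        def issue(self, right_id: str, holder: str) -> None:
            self.holder[right_id] = holder

        def transfer(self, right_id: str, sender: str, recipient: str) -> bool:
            if self.holder.get(right_id) != sender:
                return False                   # sender no longer owns it: refuse
            self.holder[right_id] = recipient
            return True                        # the witness's confirmation

    b = Witness()
    b.issue("right-1", "pZ")
    assert b.transfer("right-1", "pZ", "pE")          # first transfer is confirmed
    assert not b.transfer("right-1", "pZ", "pE2")     # replayed copy is rejected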

For sections §6.4.1 to §6.4.4 we assume that the users do not want to trust the witness completely in terms of integrity or confidentiality.

If the users trust the witness completely, the problem becomes much easier, as will be shown in §6.4.5.

A payment system which is insecure according to the above criteria will be described in §6.4.6.

6.4.1 Basic scheme of a secure and anonymous digital payment system

In the following it is discussed how to implement a secure and anonymous digital payment system with witnesses.

It is assumed that user X wants to transfer a right to user Y within the payment system and that witness B (a credit institute) confirms this transfer. To keep it simple, we consider just one witness. This witness is liable for his statements, i. e. should new rights to money be created due to his wrong certifications, he has to provide the missing money; and if the witness refuses to certify existing rights, the holder of the rights can prove this to a third party (e. g. a court). This is achieved by choosing a witness who has sufficient funds and is not anonymous. He takes on the role credit institutes have in usual electronic-cash payment systems.

It can be assumed that the paying partner X and the recipient Y know each other by certain pseudonyms (comp. §6.1). These pseudonyms are specified from outside the payment system and are worth protecting (in general role pseudonyms, e. g. the client's number and the name of the service provider). The pseudonym of the witness is specified too, but must not be anonymous (public personal pseudonym).


Within the payment system it was already defined during earlier transfers which pseudonym X uses to identify himself towards B as the holder of the right he wants to transfer.

Let therefore:

• pZ(X, t) be the pseudonym of the paying partner X for transfer t towards the recipient,

• pE(Y, t) the pseudonym of the recipient Y for transfer t towards the paying partner,

• pB the pseudonym of witness B, valid for many transfers, and

• p^B_Z(X, t) the pseudonym of the paying partner X for transfer t towards the witness B.

With these preconditions, and analogous to today's protocols for interbank money transfer, the following steps can be defined as a protocol for the transfer t of a right from X to Y. Step [i+1] has to be performed after step [i]; only steps [4] and [5] can be carried out in arbitrary order or even in parallel.

[1] Choosing the pseudonym. Y chooses a pseudonym p^B_E(Y, t) under which he wants to be known to B as the recipient of a right in transfer t, and tells X that he wants to receive the right as p^B_E(Y, t). Accordingly, X informs Y about the pseudonym p^B_Z(X, t) he wants to use to transfer the right. The necessary declarations are authenticated with pE(Y, t) and pZ(X, t), respectively.

[2] The customer's transfer order. X orders B to transfer the right to p^B_E(Y, t). This order is signed with p^B_Z(X, t). As third-party authentication, X encloses with the order an authorization saying that p^B_Z(X, t) possesses the right. The authorization itself is signed by B with pB. Because all transfers have to be confirmed by B, he is able to check whether p^B_Z(X, t) still possesses the right or whether it has already been transferred.

[3] The witness's confirmation. The witness B confirms to X and Y that the right has been transferred from p^B_Z(X, t) to p^B_E(Y, t), addressing them by these pseudonyms.

[4] Receipt for the customer. The recipient Y sends X a receipt which names only pZ(X, t) and pE(Y, t) and is authenticated with pE(Y, t). It confirms that the right has arrived.

In case Y refuses to return a receipt (which cannot be prevented, as Y acts anonymously), X can use the confirmation from B (from [3]), together with the confirmation from Y that he is willing to receive the right under the pseudonym p^B_E(Y, t) (from [1]), as a replacement receipt.

It is this possibility in particular that distinguishes the receipt problem from the value exchange problem, where a third party, e. g. the trustee, creates one of the exchange items as a replacement.

[5] Confirmation for the recipient. X, the one who pays, sends the payee Y a confirmation of the transfer which indicates only pZ(X, t) and pE(Y, t) and is authenticated with pZ(X, t). Alternatively, Y can use the confirmation from B (from step [3]), together with the confirmation from X (from step [1]) that he is willing to transfer the right to p^B_E(Y, t), as proof that he received the right from pZ(X, t).

[6] Transformation of the confirmation. Y wants to use the confirmation issued by B, saying that p^B_E(Y, t) received the right, in step [2] of a future transfer t′. To prevent catenation, which could reveal his identity, p^B_E(Y, t) should not simply be reused there. Using transformable authorizations as described in §6.2.1.2.2, Y can recalculate the authorization for a new pseudonym; but already in step [1] of transfer t he has to choose the new pseudonym p^B_E(Y, t′) and the pseudonym p^B_E(Y, t) suitable for recalculation.

Implementation of authorizations suitable for one-time recalculation using RSA (or: how to use the attack according to Davida (compare §3.6.3.1) for blind signatures (compare §3.9.5)):

Choose a pseudonym p (= testing key in an arbitrary digital signature system).1

Calculate (p, h(p)) using the collision-resistant hash function h.

Let (t, n) be the public testing key of the party authenticating the authorization, e. g. a credit institute.

Choose a random number r uniformly distributed in Z*_n. (As a reminder: this means 1 ≤ r < n and r has a multiplicative inverse mod n.)

Calculate (p, h(p)) · r^t (mod n).

Have this signed by the owner of t. (Annotation: if (p, h(p)) ∈ Z*_n, the owner of t learns nothing about (p, h(p)), because (p, h(p)) · r^t is uniformly distributed in Z*_n. Otherwise the generator of the pseudonym does not need the help of the owner of t: gcd((p, h(p)), n) is a non-trivial factor of n, with which the pseudonym generator has broken RSA completely, compare §3.1.3.1 and §3.6.1, so that he can sign his pseudonym autonomously.)

Receive ((p, h(p)) · r^t)^s ≡_n (p, h(p))^s · r.

Multiply by the multiplicative inverse2 of r and obtain (p, h(p))^s.
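A minimal sketch of these steps (Python, textbook RSA with the classic toy key n = 3233 = 61 · 53, t = 17, s = 2753; no padding, and the integer m stands in for (p, h(p)); purely illustrative):

    import math
    import secrets

    t, s, n = 17, 2753, 3233         # toy key; t*s ≡ 1 mod lcm(60, 52)

    def blind(m: int) -> tuple[int, int]:
        """Blind m with a random invertible r: return (m * r^t mod n, r)."""
        while True:
            r = secrets.randbelow(n - 1) + 1
            if math.gcd(r, n) == 1:
                return (m * pow(r, t, n)) % n, r

    m = 1234                          # stands in for (p, h(p))
    blinded, r = blind(m)
    blind_sig = pow(blinded, s, n)    # the issuer signs without seeing m
    sig = (blind_sig * pow(r, -1, n)) % n    # strip r: (m^s * r) * r^-1 = m^s
    assert sig == pow(m, s, n)        # valid signature on m, unlinkable to `blinded`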

The protocol is shown in figure 6.6, where authentication is again symbolized by a signet.

Because one wants to use the confirmation of reception (from [6]) in future transfers to transfer the same right, i. e. the same amount of money, it makes sense to have rights of certain fixed values (similar to cash) which can be combined to add up to the amount to be transferred. Of course, B then has to be allowed to give change.

1 The identifier p for pseudonym is not to be mistaken for the p standing for one of the two prime numbers used in RSA key generation.

2 This can be calculated with the extended Euclidean algorithm, which is described in §3.4.1.1.


[Figure 6.6: Basic scheme of a secure anonymous digital payment system (figure still to be drawn).]


So that the value of a transformed confirmation is recognizable when it is used in [2], B signs each value with a different signature, i. e. with a separate pseudonym pB,N per value N.

The security of the protocol derives from the transparency of the whole process. At allpoints, each of the three participants possesses enough documents to verify the currentstate of the transfer towards a objective third party or he is able to set it up by showingthe partners messages around an resending his own ones (in case the arrival got denied).This applies to completed transactions as well as for transactions still in progress.

Apart from that, claims to be recovered can only arise towards the not anonymouswitness.

If both X and Y follow the protocol, the protocols anonymity is maximal, becausenon of them obtains during the transfer any new information about the partner: Thewitness does not get to know anything about the use of the pseudonyms X and Yin other occasions, but only two newly generated ones, which are only used for thisparticular transfer. Though X and Y get to know additionally the pseudonyms theyuse in the communication with the witness, they do not gain new information, becausethe pseudonyms cannot become catenated and they already knew that they would usedifferent pseudonyms in the communication with the witness.

The protocol's anonymity being maximal does not mean that the anonymity is always strong; there are other ways to gain information.

First of all, there are other distinguishing marks for a user apart from the self-chosen pseudonyms, which have to be made known to the partners independently of the applied protocol. In our case these are the amount and time of a payment. The customer in particular is only hidden among all other customers who could possibly own a right of the same value. From this arises another reason to permit only a limited number of fixed values for rights.

On the other side, the payment system certainly does not generate additional anonymity which could be used in other situations as well. In case, e.g., X has to prove the transfer, pZ(X, t) and pE(Y, t) will become catenated; this is the point of the proof.

The users may decide freely how much of their maximal anonymity they are willing to lose by using the same pseudonyms in several transfers or by catenating them through issued declarations.

If Y refuses to hand over the receipt in [4], X is able to use the confirmation of the witness from [3] and the confirmation of Y from [1] as a replacement for the receipt. By using this replacement receipt towards a third party (different from Y), the witness possibly gets to know the assignment of p^B_Z(X, t) to pZ(X, t) and of p^B_E(Y, t) to pE(Y, t), which slightly restricts the anonymity in case of a refusal.


If X and Y trust each other and a receipt is not necessary, the receipt of B confirming the completed transfer does not have to be sent to X either. Nevertheless, the confirmation of Y in [1] is still required, because only through it does X get to know the pseudonym p^B_E(Y, t) of Y in the first place, and it has to be authenticated by Y with pE(Y, t) for X to be sure that the rights are transferred to the correct recipient. Apart from that, already in [1], X and Y have to agree on the payment details, or at least come to an understanding on the payment, because otherwise errors during the transfer or caused by B would not be recognized.

6.4.2 Restriction of anonymity through predefined accounts

It is conceivable that fully anonymous payment systems like the one from §6.4.1 are not wanted, because in such a system nobody (especially not the tax office) would be able to estimate someone's income or property without additional measures. The same applies to cash. Even in previous non-cash payment systems, the principally available information about persons with several accounts at different institutions gets catenated very rarely.

This could be a reason to restrict users to only one or a fixed number of accounts for performing their transactions. This can be enforced by using non-anonymous accounts or by requiring that the authorizations used for opening an account be recalculable. Even under this condition it is possible to construct a secure payment system according to §6.4.1 which provides as much anonymity for its users as is achievable with these prerequisites.

This prerequisite is reflected in the fact that this time p^B_Z(X, t) and p^B_E(Y, t) are equal for all transactions performed by X or Y, i.e. they are independent of t. Now the protocol from §6.4.1 no longer provides complete anonymity, because X and Y exchange their unique account numbers p^B_Z(X, t) and p^B_E(Y, t) in step [1], witnessed by B, who thereby knows the two accounts participating in the transaction.

The solution is to introduce changing intermediate pseudonyms, so that at first X withdraws the money from his account and transfers it to his intermediate pseudonym. Under this pseudonym he transfers the money to the intermediate pseudonym used by Y, who transfers the money to his account accordingly. The single transfers follow §6.4.1, and in particular the authorizations confirming the rights to be received are recalculated to other pseudonyms between the partial transfers. This gives the payer and the payee intermediate pseudonyms which belong together. Denote

• pK(X) the default pseudonym for the account of X,

• pab(X, t) the pseudonym of the payer X in transfer t, which he uses to withdraw the money,

• pZwZ(X, t) the intermediate pseudonym of the payer X in transfer t, which he uses for the payment to Y,

• pZwE(Y, t) the intermediate pseudonym of the payee Y in transfer t, which he uses to receive the right from X,


• pein(Y, t) the pseudonym the payee Y uses to pay the money into his account,

• pK(Y) the default pseudonym for the account of Y.

At first X chooses pZwZ(X, t), calculates a fitting pab(X, t), transfers the right from pK(X) to pab(X, t) as the first partial transfer, and recalculates the received confirmation to pZwZ(X, t). Steps [1], [4] and [5] of the protocol can be omitted, because X does not have to inform himself about his own pseudonyms and no receipts are required. The authorization from [2] does not have to be enclosed either, because B already knows the account balance himself; only in case of a dispute would the authorization be required.

Now Y has to choose his pseudonym pein(Y, t) and calculate the corresponding pseudonym pZwE(Y, t). Thereupon the right gets transferred from pZwZ(X, t) to pZwE(Y, t) according to the full protocol from §6.4.1, and receipts are created.

After Y has recalculated the confirmation saying that he received the right to the pseudonym pein(Y, t), he transfers it to pK(Y), again omitting steps [1], [4], [5] and additionally step [6].

The documents issued by B as receipts for receiving the money in the first and second partial transfer must not look the same. Otherwise Y would be able to use the confirmation from the first transfer immediately for other payments, simply bypassing the account. Given the possibility to recalculate authorizations, this implies that B uses two different signatures (pseudonyms) for withdrawals and transfers. He is able to distinguish between the two cases by checking whether the provided pseudonym of the payer belongs to an account.

So that Y is forced to transfer his right to pK(Y) within an assessable time period (which would be required, e.g., for annual taxation), B's authorization for a transfer must not authorize a deposit without any time limit. B should change the signing keys used for authentication after a specified period of time and, after a further period, refuse to accept authentications created with old keys. In between, the communication partners have time to transfer received rights to their account pseudonyms.

Because it is just a specialization, the security of this payment system derives from thesecurity of the one introduced in §6.4.1.

The anonymity is maximal, because the pseudonyms pK(X), pK(Y), pZ(X, t) and pE(Y, t), which are worth securing, are independent of each other thanks to the intermediate pseudonyms and the recalculation of authorizations. In particular, this means that neither Y gets to know pseudonym pK(X), nor X gets to know pseudonym pK(Y), nor B gets to know pZ(X, t) or pE(Y, t), and that all of them are only able to catenate pseudonyms they know from previous transfers with newly chosen ones which are never used again, i.e. which contain no additional information.

Apart from that, the same as at the end of §6.4.1 applies to anonymity and receipts. It has to be borne in mind that amount and time of a payment allow broader possibilities for catenation here than in §6.4.1, namely relating the two accounts participating in a payment. To avoid catenation, an arbitrary period of time should pass between the single partial transfers, and the full amount of a payment should not be paid in or withdrawn at once.


6.4.3 Suggestions from literature

The completely anonymous and secure payment systems of §6.4.1 and §6.4.2 were developed by combining elements of previously published anonymous digital payment systems. These are best described as weaker variants of the systems above, which is why they are only discussed now.

It was David Chaum, the inventor of the mechanism to recalculate authorizations, who came up with the motivation of using this mechanism in digital payment systems, even though he called it differently. Based on the mechanism he developed a series of related payment systems [Chau 83, Cha8 85, Chau 87, Chau 89].

All proposals made by Chaum have in common that they work with fixed non-anonymous accounts and disregard receipts. Instead they allow, similar to §6.3.1, the payer or payee to be made public in cooperation with the partner or the witness.

Another essential difference between the payment systems following Chaum and the system described in §6.4.2, apart from the variant in [Chau 89, §4.2.1], is that they do not use the possibility to choose intermediate pseudonyms in such a way that a digital signature exists for them. This makes it difficult to guarantee security for the users, especially towards the witness B, because now the holder of a right is no longer able to declare, authorized by his signature, whereto the right should be transferred. The simpler variants do not achieve the same level of security at all, or only with limitations in anonymity. Another variant [Chau 89, §4.2.2] at least achieves security towards the witness B by asking B to sign that he will credit the right only to the payee, before he gets presented the document certifying the ownership of the right; for this purpose the payer hands over the document to the payee. The validity of B's signature has to be restricted, because otherwise B could refuse to credit the right for good, with the explanation that he already certified to somebody that he will transfer the right to him, but that this person has not submitted the document about the ownership so far.

With payment systems based on anonymous number accounts [Pfi1 83, WaPf 85], for the first time digital payment systems were suggested which do without nominal accounts and apart from that use digital signatures as usual for money transfers. Basically, this anonymity is not a quality of the payment system, because it only says something about catenation possibilities of pseudonyms in their environment and not about security in general (assuming overdrawing is prohibited). Generally, each strictly secure digital payment system with nominal accounts would be secure with anonymous number accounts too.

The usage of transaction-related pseudonyms in the communication with a credit institute (p^B_Z(X, t) and p^B_E(Y, t) according to the protocol above) was suggested in [Burk 86, BuPf 87, BuPf 89]. This payment system corresponds to the one in §6.4.1 without the recalculation of signatures, which does not restrict its security, but does restrict its anonymity, even though only to a small amount. This happens because for each transfer t the pseudonym p^B_Z(X, t) equals the pseudonym p^B_E(X, t') from a previous transfer t'. So the witness B is able to realize that both times the same person is involved. If the payer W in transfer t' and the payee Y in transfer t work together (which is certain when W and Y are the same person), even they can realize it and are able to catenate the pseudonyms pZ(X, t) and pE(X, t'), which are normally supposed to be secured.

As long as other systems for digital signatures remain secure, this system would remain secure even if the mechanism for the recalculation of authorizations from §6.2.1.2.2 were broken.

At the moment the recalculation of digital authorizations can only be implemented with the specific cryptographic signature system RSA, and it is not completely implausible that RSA gets broken; for at least some other signature systems, after all, it is possible to prove that breaking them is as difficult as solving mathematical problems which have been known to be hard for a long time.

6.4.4 Marginal conditions for anonymous payment systems

The utilization of an anonymous payment system should not come along with disadvantages in comparison to traditional payment systems. In particular, a user should be able to

• choose freely between several credit institutes as witness for his payments,

• transfer money arbitrarily, especially into a conventional payment system outside the open system and vice versa,

• put money out at interest anonymously and

• arrange a loan anonymously.

Besides the demands of users towards the provided services, demands of credit institutes and of the state have to be considered; e.g., anonymous payment systems must not prevent capital- and income-dependent taxation.

In this context especially the limits of useful anonymity have to be considered. For this text they are presumed to lie outside the open digital system and are therefore not discussed in detail. Without this discussion, it is not possible to make useful statements about the sense and possibility of taxation in spite of anonymity.

6.4.4.1 Transfer between payment systems

In the payment systems described in §6.4.1 and §6.4.2 it was assumed that only one witness, i.e. one credit institute, is involved. From the point of view of economic policy and of anonymity, payment systems should allow the payer and the payee to use different credit institutes and even different payment systems.

If the payer and the recipient of a payment use the same anonymous payment system but different witnesses, the recipient Y has to appoint a witness BE he trusts in step [1] of the protocol in §6.4.1, apart from choosing a pseudonym p^B_E(Y, t). Furthermore, in step [2] the payer informs his witness BZ about the witness BE. Because BZ and BE are not anonymous towards each other, they are able to cooperate and can be seen as a single witness B, where all communication between B and X is performed through BZ and all communication between B and Y through BE.

The transfer between a non-anonymous and an anonymous payment system can be realized easily: The provider of the non-anonymous payment system, a bank, acts as a mediator which can act in both payment systems. The transfer between two users X and Y is divided into two separate transfers, between X and the bank and between the bank and Y respectively. The same works if a user wants to transfer his money into another payment system without revealing his identity in the anonymous system.

6.4.4.2 Interest on balances and the granting of loans

For both payment systems, §6.4.1 and §6.4.2, the rights witnessed by B can be treated as sight deposits managed by a bank B: The holder of a right can access it at any time, but B is able to detect the access in any case. This leaves the bank only a weak motivation to pay interest, which could lead to a combined approach where fees are set off against interest.

Both payment systems could easily be extended to allow putting money out at interest for a longer period: this is trivial for fixed accounts; without fixed accounts, the witness could use different signatures p^{z1}_B, p^{z2}_B, ... to define different expiration dates.

To allow specialized types of investment depending on the investment capital, the capital has to be organized in units which can be indicated by the bank. This is best achieved by using the payment system from §6.4.2 with fixed accounts, where interest can be paid as is usual these days.

Renouncing this kind of investment and using the payment system from §6.4.1, each single right has to be treated as an account of its own, where the opening date of the account can be determined from the witness's signature. The interest for a right can be paid by increasing its value or by paying out the interest at each transfer [Chau 89].

The granting of loans, and paying interest on them, is in principle no more difficult than it is today. Regarding the coverage of a loan, what was said in §6.2.5 applies: If the one who takes out a loan wants to stay anonymous, he needs to lodge something as security; if instead he identifies himself towards the bank, he needs to transfer the money to himself once (e.g. to his second account) for the sake of anonymity, and afterwards he can work with the money as if it were usual capital assets.

As already indicated in §6.2.5, charges for account maintenance and other services provided by a bank are not much more difficult to bill than is usual today: In the case of fixed accounts the bank can simply debit the demanded amount from the balances it manages. For payment systems without fixed accounts the banks have to collect their demands during each transfer: the payer includes a fee transfer order (a digital stamp) for the witnessing bank in each transfer order.


6.4.5 Secure devices as witnesses

It is easy to design an anonymous payment system which does not provide anonymity towards the witness: The protocol from §6.4.1 can be altered such that user X always chooses the same pseudonyms p^B_Z(X, t) and p^B_E(X, t') for all payments, no matter whether he acts as payer or payee. That corresponds to a fixed account number as a business-relationship pseudonym. B gets to know the pseudonyms p^B_Z(X, t) and p^B_E(X, t'). This saves the problem of replacement receipts in [4] and [5] as well as the transformation of the confirmation in step [6].

To justify the assumption of the witness being trustworthy with respect to the users' anonymity, the witness is in general realized as a secure, non-analyzable device (comp. §6.2.1.2.2).

There are two variants which could find application:

The first variant uses a single central secure device, which is the witness in all transactions. Already for the general reasons listed in §1.2.2 this variant is not very desirable. Additionally, there is no real advantage over the non-trustworthy witness from §6.4.1 and §6.4.2: though the protocol gets very much simplified as mentioned above, this only relieves the load on the computers, not on the users of the payment system, and it does not open up any new possibilities.

In the second variant each user gets his own secure device as an "electronic wallet" to act as witness (on behalf of the provider of the payment system) for his transactions.

The usage of secure devices as electronic wallets was suggested in [EvGY 84, Even 89], but, due to the (apparently necessary) logging, was not yet anonymous. Anonymous versions of this procedure can be found in [BuPf 87, BuPf 89].

The only real advantage of electronic wallets (anonymous or non-anonymous) is the support of off-line transactions, enabling a user to pay and receive payments spontaneously without being forced to contact a central coordinating instance in between. Because the systems discussed here are open systems based on communication networks, this is not a big advantage.

Much more problematic are the disadvantages (-) of electronic wallets:

- Because payments are not performed through a central witness, losing an electronic wallet would eventually lead to a loss of already received payments. To prevent such a loss, relatively costly error-tolerance measures have to be taken [WaP1 87, WaPf 90].

- The security of payment systems with electronic wallets depends to a considerable extent on such devices not being analyzable. As already discussed in §6.2.1.2.2, the existence of durably secure devices is questionable, and the evaluation of real existing devices which are assumed to be secure is not feasible.

- Using secure devices does not mean that cryptographic techniques, used in particular for the mutual authentication of electronic wallets, are not needed anymore. Hence, such secure devices are no alternative to systems of secrecy or authentication


should these be broken one day. In case only all asymmetric systems of secrecy and authentication are broken, it would make sense to use these secure devices in combination with symmetric cryptographic systems, because this would be enough to secure the communication between the single devices.

Due to the disadvantages listed above and the only small advantages of electronic wallets in open digital systems of the kind described, their use does not seem to make sense.

6.4.6 Payment systems with security based on the possibility to make someone public

Another payment system was introduced in [ChFN 90]; it only meets a weaker definition of security than the one used before:

• A user must not pass on a right he received more than once. However, if he still passes it on multiple times, it must be possible to reveal his identity.

The basic idea is the same as in §6.3.1: One hopes that, after someone is made public, his possessions are sufficient to meet the demands arising from compensation claims.

Despite its weaker security definition, a short outline of the system shall be given: On the one hand the system is not more insecure than, e.g., credit cards nowadays. On the other hand this system seems to be the only alternative to the questionable electronic wallets of §6.4.5. The main difference to the payment systems mentioned above is that the paying partner X, instead of the witness B, confirms the transfer for the recipient of the payment Y. If X acts correctly, his anonymity is preserved. If X acts incorrectly by transferring the right to another recipient Y* as well, then, due to a cryptographic trick, the two confirmations of the transfer can be combined to reveal the identity of X with high probability. In case the right is presented to B multiple times, B is thus able to make X public.

Main idea: Denote

k the security parameter,

ri randomly chosen values (1 ≤ i ≤ k),

I the identity of the issuer of the bank note,

C a commitment scheme with information-theoretic secrecy.

A blindly3 signed bank note is defined through

sbank(C(r1), C(r1 ⊕ I), C(r2), C(r2 ⊕ I), ..., C(rk), C(rk ⊕ I))

3 Before a bank blindly signs a bank note, it has to check whether the presented note is correctly built and in particular whether it contains the identity I of the issuer of the bank note. If the bank did not perform this check, the issuer could include no identity or a false one, which would enable him to avoid being made public in case of multiple passing on of a right. For the check, the bank chooses a number of samples to be disclosed by the one who handed them in, and only if these contain no attempt at cheating does the bank sign the remaining samples blindly [ChFN 90].


The recipient of a bank note decides for each i = 1, ..., k independently whether he wants to have ri or ri ⊕ I disclosed, so he gets k commitments disclosed. (Through this one-time pad the identity of the issuer is protected with information-theoretic security; the bank, due to the commitment scheme, is only complexity-theoretically secure. Therefore it may choose the security parameter.) For a bank note passed on to two honest recipients, the probability that, for at least one i, both ri and ri ⊕ I arrive at the bank is exponentially (in k) close to 1, and the issuer can be identified.
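The tracing mechanism can be illustrated in a few lines of Python. This is a hedged sketch: the commitments and the blind signature are omitted, and the identity string and parameters are made up for the example.

import secrets

K = 40                                  # security parameter k
I = int.from_bytes(b"issuerID", "big")  # identity of the issuer

r = [secrets.randbits(64) for _ in range(K)]
note = [(r_i, r_i ^ I) for r_i in r]    # the pairs (r_i, r_i XOR I)

def spend(note):
    # One recipient's random challenge: for each i, learn r_i or r_i XOR I.
    return [(b, pair[b]) for pair in note for b in [secrets.randbelow(2)]]

def trace(answers1, answers2):
    # If the note was spent twice, some i discloses both halves of a pair.
    for (b1, v1), (b2, v2) in zip(answers1, answers2):
        if b1 != b2:
            return v1 ^ v2              # = r_i XOR (r_i XOR I) = I
    return None                         # happens only with probability 2^-k

identity = trace(spend(note), spend(note))
assert identity is None or identity == I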

Because this system does not depend on a witness, it can be used for off-line transactions; this was the purpose it was suggested for in [ChFN 90].

To prevent a payer who got to know the choices of the payee from handing this information on to a second payee (who cooperates with him), David Chaum suggested that the identity of the payee should partially determine the decisions of the recipient of a bank note. Now a bank note cannot be cashed twice without its issuer losing his anonymity, because this variation guarantees that two different recipients choose differently at least once.

To increase the security, this can be combined with secure devices: The users have to use secure devices instead of usual computers, which prevent multiple passing on of rights. The combined system fulfills the stronger security definition as long as the secure devices really are secure; otherwise the security is the same as in the system without secure devices.

Payment systems without off-line capabilities, where the payee cannot pass on a right without contacting the bank first, are described in [ChFN 90, Cha1 89, CBHM 90]. (According to David Chaum, a comparable system suggested in [OkOh 89, OkOh1 89] proved to be incorrect.)

Recommended readings

Fraud secure value exchange: [AsSW 97, Baum 99]

Digital payment systems: [AJSW 97, Chau 92, PfWa2 97]

Current research

Computers & Security, Elsevier

Conference proceedings: ACM Conference on Computer and Communications Security, IEEE Symposium on Security and Privacy, IFIP/Sec, VIS, ESORICS, Crypto, Eurocrypt, Asiacrypt; the proceedings are published by ACM, IEEE, Chapman & Hall London, in the series "Datenschutz Fachberichte" of Vieweg Verlag Wiesbaden, and in the series LNCS of Springer-Verlag Heidelberg

Links to recent research and comments on recent developments:

http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher or subscription to [email protected]


Datenschutz und Datensicherheit DuD, Vieweg-Verlag


7 Credentials

An overview of credentials is given in [Cha8 85][Chau 87]. However, it is not mentioned there that the organisations cannot provide the credential signatures themselves, because of the common RSA modulus: the signatures need to be provided by one signature organisation, for whoever knows a pair of RSA exponents that are inverse to each other can factorize the modulus [Hors 85, p. 187][MeOV 97, p. 287].
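The cited fact can be made concrete: e·d − 1 is a multiple of φ(n), and the classic randomized reduction below extracts a factor from it. A Python sketch with toy numbers (the key values are illustrative, not from the script):

import math
import random

def factor_from_exponents(n, e, d):
    # Write e*d - 1 = 2^t * k with k odd; e*d - 1 is a multiple of phi(n).
    k = e * d - 1
    t = 0
    while k % 2 == 0:
        k //= 2
        t += 1
    while True:
        a = random.randrange(2, n - 1)
        g = math.gcd(a, n)
        if g > 1:
            return g                      # a accidentally shares a factor
        x = pow(a, k, n)
        for _ in range(t):
            y = pow(x, 2, n)
            if y == 1 and x != 1 and x != n - 1:
                return math.gcd(x - 1, n) # non-trivial square root of 1
            x = y
        # unlucky choice of a: try again

# Toy key: n = 61 * 53, e = 17, d = 2753 (e*d = 1 mod phi(n) = 3120).
factor = factor_from_exponents(3233, 17, 2753)
assert factor in (61, 53)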

In my overview, the case of a fixed set of credentials known right from the start is shown.

A detailed protocol description can be found in [Chau2 90]. [ChEv 87] describes how to validate that pseudonyms are correctly created, so that the probability of undetected forgeries becomes exponentially small in the validation effort.

[Cha1 88] shows how to realize credentials that are not known right from the start. [Bura 88] and [ChAn 90] give additional types of credentials.

Unfortunately I don't know of any application of these purely digital identities, because it is not sufficient that authorisations like a "driver licence" can only be converted between the pseudonyms of one person: the authorisation "driver licence" should only be usable by the biological entity that passed the test. Consequently, for (almost?) all applications one has to extend the shown procedure so that pseudonyms are coupled to body characteristics within the protocol part "arrange pseudonyms". Whether there are sufficiently many unrelatable body characteristics that can be measured with sufficient precision is a question that lies outside computer science. If so, the extension of the shown protocols is canonical.

An extension based on secure and therefore ??? biometric hardware (cp. §6.2.1.2.2) that works with a single sufficiently precise body characteristic is shown in [Bleu 99].


8 Multiparty Computation Protocols

Consider the common task that n participants Ti want to perform a computation together. Every participant shall contribute his input Ii and receive the output Oi, which is a function fi of all inputs. The following security properties are to be achieved:

• Highest possible confidentiality:

No participant must learn more about the inputs of the other participants than he can derive from his own output.

• Highest possible integrity:

No participant shall be able to alter the output after handing over his input.

• Highest possible availability:

No participant shall be able to prevent the other participants from receiving their own output after he has handed over his input.

All three security properties shall also hold if several (up to n − 1) participants collude:

• They shall not be able to conclude more than they can from their combined outputs.

• After the last of them has handed over his input, the output cannot be altered anymore, and the others' outputs can't be withheld.

There are two completely different ways to achieve this:

1. All participants send their inputs to a central computer which everybody (must) trust. This central computer performs the computation and sends every participant his specific output. Communication between the computer and the participants is encrypted to guarantee concealment and authentication (cp. fig. 8.1). If one trusts the encryption and the central computer, as well as the availability of the communication network, the problem is solved. Regarding the availability and integrity of the output, one can, at least in principle, check them afterwards if all messages were digitally signed and a cooperation of other participants with the central computer is ruled out; otherwise they could "get" the necessary signatures for their doings. Unfortunately the central computer is almost uncontrollable (cp. §1.2.2 and §2.1).

2. The participants conduct a multi-party computation protocol (cp. fig. 8.2). There is no central instance everybody needs to trust. But for realistic trustworthiness you need one of the following:


• Either you have to trust that an attacker comprises less than a third of all participants. Then information-theoretically secure computation protocols are known ([BeGW 88], [ChCD1 88]).

• Or you have to work with a cryptographic assumption. Then the number of attacking participants is irrelevant ([ChDG 88]).

• A newer protocol only requires that one of the two restrictions above applies to the attacker ([Chau 90]).

Figure 8.1: Centralized computation protocol between participants Ti. Every Ti sends its input Ii (encrypted for concealment and authentication) to a central device which all participants need to trust; the device calculates the functions fi and sends every Ti its output Oi = fi(I1, I2, ..., In).

Roughly speaking, the computation protocols work like this:

• The input is distributed among the participants by a threshold scheme (cp. §3.9.6).

• After this, the shares may be added as well as multiplied to conduct any desired calculation.

The exact details would go beyond the scope of this script. What is important to me is the message that, in principle, everything that can be calculated centrally (with trusted computers) can also be calculated decentrally and without global trust, even though the effort with today's distributed computation protocols is relatively high.
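To make the principle tangible, here is a minimal Python sketch. It simplifies the threshold scheme of §3.9.6 to plain additive sharing over Z_p (an assumption made for brevity) and shows why adding shares needs no communication; multiplying shares, omitted here, is the step that requires interaction.

import secrets

P = 2**61 - 1  # a public prime modulus, chosen arbitrarily for the example

def share(secret, n):
    # Split `secret` into n additive shares mod P.
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

n = 5
a, b = 1234, 5678
shares_a, shares_b = share(a, n), share(b, n)

# Each participant adds its two shares locally; the results form a
# correct sharing of a + b without any message being exchanged.
shares_sum = [(x + y) % P for x, y in zip(shares_a, shares_b)]
assert reconstruct(shares_sum) == (a + b) % P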


Figure 8.2: Distributed computation protocol between participants Ti. Every Ti hands its input Ii over to a multi-party protocol and receives its output Oi = fi(I1, I2, ..., In).


9 Controllability of Security Technologies

None of us lives on his own: you are reading my text right now, without any action of mine. Besides, computer networks as well as multilateral security require several participants. These participants do not only have their individual intentions but are also part of very different communities. Individuals have legitimate rights and claims towards their community, but so have the communities towards their members. And this creates the following problem:

The security technologies presented here can be used to create democracy-promoting multilateral security within an open society. But they can also be used to create subcultures that seal themselves off from the outside world, or organised criminal elements may use the widely spread security technology to plan, coordinate and conduct their crimes. To prevent this, some politicians demand the regulation of security technologies, which is in other words a demand for limiting the creation, distribution and even usage of security mechanisms.

I don't want to try to decide what serves society, nor what is proportionate for fighting the bad. This is a difficult question that has no final answer: Democracy, too, started as a concern of some subcultures. And the USA have strategic and economic interests that are supported by their regulation of security technologies.

I will go into a simpler matter: Can security technology be regulated at all? Do export and import restrictions, or laws enforcing limited usage and capabilities, help, or are they counterproductive?

I think nothing of symbolic politics that limits the capabilities of security technology for the law-abiding user without significantly hampering criminals. By the same token one could issue a law against "evil thoughts": it would be symbolic too, since all criminal activities start with evil thoughts, and no good person is hampered anyway.

In the past two decades many attempts were made to limit the spread of secure computers (especially secure operating systems) and of cryptography. For example, the US-based DEC corporation discontinued the development of a secure distributed operating system ([GGKL 89], [GKLR 92], [Wich 93]) in 1994 because of the high risk of not receiving an export license, estimating the costs of developing two versions in parallel as too high. In the USA, the regulation of cryptography has been discussed (keywords Key-Escrow, Key-Recovery as well as Clipper Chip) officially since 1993 (behind closed doors it had started already in 1986 [WaPP 87][Riha2 96]). In Germany a Kryptogesetz (crypto law) was discussed in public ([HuPf2 96][WiHP 97][HaMo 98]). Meanwhile, the spread of insecure computers has advanced widely; just think of PCs with operating systems without any access control, which makes them an ideal place for hosting worms and Trojan horses.


Next I will concentrate on the options for regulating crypto, since a lot of damage, not only to individuals but to democracy too, may arise from unawareness and overzealous politics ([Pfit 89]).

Based on the functional principles of cryptographic and steganographic techniques (cp. ch. 3 and 4), we will discuss the most important requirements and design options.1

9.1 User requirements on signature and concealment keys

Digital signature systems are currently being introduced for legal affairs2 (for the electronic communication involved) and for electronic archives. Both areas place certain certification requirements on the signature keys. This is a prerequisite for the legal binding of digitally signed documents (along with the cryptographic properties of the signature system). Not only must the recipient be able to verify the signature of a message with the proper public verification key; he also must be able to unambiguously assign the verification key to a real person3. For this, so-called Trust Centers (institutions that provide certain security services) should also provide the functionality of a Certification Authority. The creation of the signature key pairs in Trust Centers4 can be made a prerequisite for participating in electronically transmitted legal affairs. By this one has the opportunity to store all secret signing keys there. The applicability of signature systems within legal affairs rests on the assurance that nobody except the holder of the secret key can create valid signatures. Access to the secret key allows evidence to be modified and therefore devalues the whole security infrastructure.

Key pairs for concealment don't have to be certified in a legal sense. It may still be useful to employ key certification to guarantee the authenticity of the keys managed. This can prevent people from providing public concealment keys under false names within the security infrastructure, which would enable the attacker to read all data encrypted with such a key. But the only requirement of the users towards a concealment system is that they mutually trust the authenticity of each other's public concealment keys; they don't have to prove this in court. As soon as communication partners have exchanged5 keys whose authenticity is credible, they may use any other confidential communication system desired.

1 The following subsections are taken from [HuPf 96][HuPf 98].

2 The law on digital signatures in Germany (Art. 3 of the Informations- und Kommunikationsdienste-Gesetz (IuKDG)) went into effect on Aug. 1st, 1997.

3 For this, the public key and the identity of the owner are digitally signed by a Certification Authority, which is called certification of the public key. To verify such certificates the receiver may need to obtain further certificates. The resulting certification tree is described in X.509 and partially in the legal text of the signature law.

4 But this is not recommended, since central components in which different and independently running partial tasks are concentrated tend to be weak points in the security architecture ([FJPP 95]). Besides, it doesn't ease the certification.

5 For example with the help of a government-run security infrastructure or by personal exchange.


The creation of key pairs for concealment in Trust Centers can't be enforced, since nobody has to rely on the certificates of a Trust Center.

Next we will argue that users may conduct a key exchange on their own, without any drawbacks regarding these requirements: the exchange is confidential towards any third party while still using the security infrastructure for legally binding issues.

9.2 Digital signatures allow the exchange of concealment keys

Every security infrastructure for digital signatures allows a secure exchange of concealment keys: With the known algorithms [Schn 96], which are already implemented in widely available programs ("Pretty Good Privacy", PGP), everybody can generate concealment key pairs. Afterwards everybody may certify his own public key to make its authenticity verifiable. This is done by signing the message "The public key ... belongs to ..." (message sA(A, cA) in fig. 9.1). Now the security infrastructure for digital signatures allows everybody to validate the integrity of this message6. Finally, everything encrypted with the public concealment key can only be decrypted by the owner of the key (if integrity is assured). If this use of signature systems is to be ruled out, the only option is to undermine the security of the signatures themselves by issuing digitally signed messages where the signer is not the owner; eventually this would prevent any legal binding of documents signed within the system. Concealment keys distributed with the help of a security infrastructure for digital signatures thus allow the exchange of confidential messages as well as of keys for symmetric concealment systems, which can be used for efficient concealment.

Figure 9.1: Secure digital signatures allow autonomous and secure concealment. A generates the signature key pair (sA, tA) and the concealment key pair (cA, dA); the CA certifies tA (message sCA(A, tA)); A certifies cA himself (message sA(A, cA)); B tests the CA-certificate and the A-certificate and sends cA(secret message).


6 By testing the certificate sCA(A, tA) of the certification authority as well as the certificate the participant A has produced.
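The argument of fig. 9.1 can be replayed with modern primitives. The following Python sketch uses the third-party cryptography package; Ed25519 for the signatures and X25519 with ChaCha20-Poly1305 for the concealment are illustrative choices of mine, as the text fixes no algorithms here:

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
import os

def raw(pub):  # raw byte encoding of a public key
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# A's signature key pair (sA, tA); assume tA is certified within the
# signature infrastructure.
sA = Ed25519PrivateKey.generate()
tA = sA.public_key()

# A generates a concealment key pair (cA, dA) and certifies cA himself:
# the signed message "the public key cA belongs to A" of fig. 9.1.
dA = X25519PrivateKey.generate()
cA = dA.public_key()
cert = sA.sign(b"public key " + raw(cA) + b" belongs to A")

# B tests the certificate with tA (verify() raises on forgery) ...
tA.verify(cert, b"public key " + raw(cA) + b" belongs to A")

# ... and conceals a message for A: ephemeral key exchange with cA,
# key derivation, authenticated encryption.
eph = X25519PrivateKey.generate()
key = HKDF(hashes.SHA256(), 32, None, b"demo").derive(eph.exchange(cA))
nonce = os.urandom(12)
ct = ChaCha20Poly1305(key).encrypt(nonce, b"secret message", None)

# Only A, holding dA, derives the same key and can decrypt.
key_A = HKDF(hashes.SHA256(), 32, None, b"demo").derive(
    dA.exchange(eph.public_key()))
assert ChaCha20Poly1305(key_A).decrypt(nonce, ct, None) == b"secret message"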


9.3 Key-Escrow systems can be used for an undetectable key exchange

Similar to using secure digital signatures for exchanging concealment keys, every system like Key-Escrow or Key-Recovery, in which intelligence services or other organisations have access to the secret key, can be used to negotiate keys for additionally employed concealment systems (fig. 9.2). Instead of passing the confidential messages directly to the concealment system, a redundantly encoded public key is transmitted beforehand. The receiver checks this key similarly to checking the certificate of a digital signature. As long as he doesn't detect a corruption of the public key, he uses it either to encrypt the message (fig. 9.2, 2nd arrow) or to exchange a further key for symmetric concealment (fig. 9.2, 4th arrow). Next, the messages encrypted with the key of the overlying concealment system are encrypted with the key of the underlying Key-Escrow system (fig. 9.2, 3rd and 4th arrow). If the Key-Escrow system is designed not to be able or allowed to decrypt earlier messages, one doesn't even have to take the detour over public concealment keys: as shown in fig. 9.3, a symmetric key can be exchanged directly, but this has to be done in time.

Figure 9.2: Key-Escrow concealment without permanent monitoring allows exchanging keys and therefore a) "normal" concealment, b) asymmetric concealment without Key-Escrow and c) symmetric concealment without Key-Escrow. (Messages shown: kesc(A, cA); cA(secret message); kesc(cA(secret message)); kesc(cA(kAB), kAB(secret message)).)


Figure 9.3: Key-Escrow concealment without retroactive decryption allows direct concealment without Key-Escrow: A sends kesc(A, kAB) and then kesc(kAB(secret message)) to B.
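The layering just described can be sketched directly: the escrowed layer only ever carries an inner ciphertext under a key kAB the escrow agency never sees. A hedged Python illustration (AES-GCM from the third-party cryptography package is an arbitrary stand-in for both systems):

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

k_esc = AESGCM.generate_key(bit_length=128)  # key the escrow agency holds
k_AB = AESGCM.generate_key(bit_length=128)   # key only A and B know

# A encrypts for B: inner layer with k_AB, outer (escrowed) layer on top.
n_in, n_out = os.urandom(12), os.urandom(12)
inner = AESGCM(k_AB).encrypt(n_in, b"secret message", None)
outer = AESGCM(k_esc).encrypt(n_out, inner, None)

# The escrow agency can strip the outer layer ...
stripped = AESGCM(k_esc).decrypt(n_out, outer, None)
# ... but what it then sees is still ciphertext under k_AB.
assert stripped == inner
assert AESGCM(k_AB).decrypt(n_in, stripped, None) == b"secret message"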

The usage of additional encryption measures will only be discovered when the keys of the Key-Escrow system are revealed and used for decrypting telecommunications, which by law requires a legal permission. But Key-Escrow, if keys are disclosed as seldom as proposed, is almost useless for the investigations of the Verfassungsschutz (the German agency for the defence of the constitution), the Bundesnachrichtendienst (federal intelligence service) and the MAD (military intelligence service), because they will probably notice too late the usage of additional encryption systems placed on top of the Key-Escrow mechanisms. Therefore it is discussed to prohibit any other encryption systems, which wouldn't help at all: If there is only cautious surveillance of telecommunications, violations of such a prohibition will be noticed too late. But if serious efforts are made, the secrecy of telecommunications will be rudely violated. And despite all attempts at surveillance, users could employ steganography (§9.5), where the usage of crypto systems is almost undetectable.

Besides, one should be aware of the fact that all people and organisations that have knowledge of a secret concealment key may decrypt all corresponding messages. In contrast to traditional wiretapping techniques, observation could be permanent, since knowledge of the key cannot be voided. The central storage of such sensitive information at the Key-Escrow agencies makes them a highly "interesting" target for business espionage and other criminal activities.

In short: Key-Escrow will be almost ineffective for fighting crime, but requires an unrealistically expensive technical7 and organisational8 effort to keep the reduction of the people's rights and the increased vulnerability of economy and society below a certain level.

9.4 Symmetric authentication allows concealment

Every bit of the message to be concealed is transmitted by sending 0 or 1, each followed by a MAC ([Rive 98]). The MAC of the correct bit fits, the one of the false bit doesn't (cp. fig. 9.4). Since no one except sender and receiver can check which MACs fit and which don't9, the message is perfectly concealed. Rivest's proposal is a little more than twice as expensive as the solution of exercise 3-22 at page 421, but it has the advantage that both bits with MACs can be put into separate messages with equal sequence numbers. Then, on the one hand, the authentication protocol of the receiver throws away the false bits; the receiver won't have to implement anything new. Additionally, the sender doesn't have to create the false bits and MACs and put them into the message at all; this can be done by someone else, as long as it happens before the place where the attacker is observing. And for doing so this helper doesn't need the symmetric authentication key; he just uses random MACs, which will not fit with very high probability (cp. fig. 9.5). Finally, sender and receiver could argue, even in a country with a total ban on concealment: "Concealment? We didn't do it, didn't want to, and we didn't even take notice of it!"

7 How can one implement correctly working access control to the Key-Escrow databases while allowing short-term access to the keys? Experience shows that every large technical system contains bugs, which puts the secrets of many people and organisations at risk.

8 It is certainly interesting that a lot of people demanding Key-Escrow were involved in scandals and successes where "everybody had his price".

9 If someone could do better than with probability 0.5, this would be a basic approach towards breaking the authentication system.


Sender A (knows kAB) wants to transmit the message b1, ..., bn with bi ∈ {0, 1}. He calculates MAC1 := code(kAB, b1), ..., MACn := code(kAB, bn). Let a1, ..., an be the bitwise inverted message; he chooses MAC'1, ..., MAC'n at random with MAC'1 ≠ code(kAB, a1), ..., MAC'n ≠ code(kAB, an). He transmits (braces mean "randomly ordered") {(b1, MAC1), (a1, MAC'1)}, ..., {(bn, MACn), (an, MAC'n)}.

Receiver B (knows kAB) checks for each i whether MACi = code(kAB, bi) or MAC'i = code(kAB, ai) holds, and thereby receives the proper value bi.

Figure 9.4: Concealment through symmetric authentication, by Rivest.

As mentioned in the solution of exercise 3-22, more than one bit may be authenticated with one MAC. However, this requires 2^l bit values, MACs and test steps for l simultaneously authenticated bits per MAC. Here this is only a disadvantage, while for exercise 3-22 it may turn out to be an advantage.
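A minimal Python sketch of this concealment through authentication ("chaffing and winnowing" in [Rive 98]); HMAC-SHA256 stands in for the unspecified MAC, and all parameters are illustrative:

import hashlib
import hmac
import secrets

def mac(key, seq, bit):
    return hmac.new(key, f"{seq}:{bit}".encode(), hashlib.sha256).digest()

def chaff(key, bits):
    # Sender: emit {(b_i, MAC_i), (a_i, MAC'_i)} pairs in random order.
    rng = secrets.SystemRandom()
    stream = []
    for i, b in enumerate(bits):
        real = (i, b, mac(key, i, b))
        fake = (i, 1 - b, secrets.token_bytes(32))  # random, almost surely wrong
        pair = [real, fake]
        rng.shuffle(pair)
        stream.extend(pair)
    return stream

def winnow(key, stream):
    # Receiver: keep exactly the bits whose MAC verifies.
    out = {}
    for i, bit, tag in stream:
        if hmac.compare_digest(tag, mac(key, i, bit)):
            out[i] = bit
    return [out[i] for i in sorted(out)]

k_AB = secrets.token_bytes(32)
message = [1, 0, 1, 1, 0]
assert winnow(k_AB, chaff(k_AB, message)) == message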

An interesting question: What if somebody (like a prosecuting authority) forces you to hand over the authentication key? The answer is: prevention. This means having, among the many bit sequences, one with unsuspicious or pleasant content for "them", and only handing over this key. To make several bit sequences with "real" content protect each other, some more tricky encodings of the bit sequences are necessary than in [Rive 98].


Sender A (knows kAB) wants to transmit the message b1, ..., bn with bi ∈ {0, 1}. He calculates MAC1 := code(kAB, b1), ..., MACn := code(kAB, bn) and transmits (1, b1, MAC1), ..., (n, bn, MACn).

A complement generator (who needs no key) intercepts the message, creates the bitwise inverted message a1, ..., an, chooses MAC'1, ..., MAC'n at random and inserts (1, a1, MAC'1), ..., (n, an, MAC'n) at the corresponding places into the data stream from sender A, transmitting the modified stream. An eavesdropper cannot distinguish between ai and bi.

Receiver B (knows kAB) runs the common authentication protocol: messages with wrong sequence numbers and messages with wrong authentication are ignored, the remaining messages are output, so B receives b1, ..., bn with very high probability.

Figure 9.5: Concealment through symmetric authentication without participant activity.

9.5 Steganography: Finding confidential messages is impossible

Not only can messages be protected by cryptographic concealment systems, but also by steganography (cp. ch. 4): The message to protect is hidden within another message (which is inevitably longer). Even if cryptography, or in general any kind of uncontrollable communication or storage, were prohibited, nothing could be proven if the steganography employed is good enough. And if it isn't good enough, every usage of cryptography that stands trial can directly be used to improve it, since the prosecution has to show how the usage can be proven (cp. §4.1.3.3).

Steganography makes use of the fact that in human communication the same message may have different meanings for different receivers. With modern digital communication networks, efficient techniques can be realized for exchanging messages without the need to meet personally and without any third party taking notice. Of course it is possible, and recommended, to additionally encrypt the message to be transmitted by steganography.

Data-intensive telecommunication applications (mainly multimedia) are perfect carriers for steganographic techniques.
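As a toy illustration of the embedding principle, and only that (plain least-significant-bit embedding does not model the carrier's statistics and is detectable), a Python sketch with an assumed byte-array carrier:

import secrets

def embed(carrier, message):
    # Write the message bits (LSB first per byte) into the carrier's LSBs.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too short"
    out = bytearray(carrier)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit
    return out

def extract(stego, n_bytes):
    bits = [b & 1 for b in stego[:8 * n_bytes]]
    return bytes(sum(bits[8 * i + j] << j for j in range(8))
                 for i in range(n_bytes))

carrier = bytearray(secrets.token_bytes(1024))  # stands in for media data
msg = b"secret"
assert extract(embed(carrier, msg), len(msg)) == msg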

9.6 Measures to take after key loss

The loss10 of keys used for the confidentiality or integrity of transmitted messages is not a tough problem for the user: If the data sent can't be decrypted, the receiver creates a new key pair and informs the sender about the new authenticated public key. Afterwards the sender encrypts the message again, with the new key, and sends it once more. This method is much less expensive than any secure method of key backup11 and far more secure than Key-Escrow and Key-Recovery systems. Those systems, actually intended for the surveillance of telecommunications, are redundant as key backups.

The same applies to keys for signatures (cp. fig. 9.6). If an owner loses his secret signing key, he can't sign anything that fits his public key anymore. But he can always generate a new key pair and have it certified by a CA. The receivers must be informed about this in order to use the new public key.

The loss of public keys is almost irrelevant, since their public status allows widely spread storage.

If one loses the secret decryption key (confidentiality) or testing key (protecting the symmetric authentication of long-term stored data12), one will probably lose the data or its verifiability. The key's owner should take precautions like key backup or Key-Escrow.

Key backup or Key-Escrow is only useful for applications where the data can't be retrieved another way, since the data's security is harmed anyway as soon as someone other than the data owner holds a key (or parts of one). For archiving long-term data, key backup techniques can be useful if organized with certain options. Every key backup should be organized such that the owner immediately takes notice of a key reconstruction in which he is not directly involved. Only that guarantees that the owner has full control over his data; this is a very important first line of defence against misuse ([AABB 97, Chap. 2.3]). However, this requirement is not fulfilled by Key-Escrow systems.

10 We only consider situations where the key is completely lost. If the key came into unauthorized hands, further measures must be applied, like blacklists kept by the authorities that have certified the key. Before using a key, the user then needs to ask whether the key is blacklisted.

11 For this you can split your own key with a threshold scheme (cp. §3.9.6) and distribute the parts to trustworthy-looking friends, organisations, etc. For reconstruction, a certain minimal number of parts (defined by the owner prior to the splitting) is necessary. By defining an appropriate minimal number, the few not-so-trustworthy partners can't reassemble the key, and the loss of a few parts won't prevent the reassembly.

12 For archiving purposes, like the land registry or the public health system.
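The splitting from footnote 11 can be sketched with Shamir's threshold scheme over GF(p); the prime and the parameters below are illustrative assumptions:

import secrets

P = 2**127 - 1  # a public prime; all arithmetic is mod P

def split(key, n, k):
    # Share `key` (an int < P) into n parts; any k of them reconstruct it.
    coeffs = [key] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(parts):
    # Lagrange interpolation at x = 0 recovers the key.
    key = 0
    for i, (x_i, y_i) in enumerate(parts):
        num = den = 1
        for j, (x_j, _) in enumerate(parts):
            if i != j:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        key = (key + y_i * num * pow(den, -1, P)) % P
    return key

parts = split(key=123456789, n=5, k=3)
assert reconstruct(parts[:3]) == 123456789
assert reconstruct(parts[1:4]) == 123456789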


Figure 9.6: Where key backup is useful, unnecessary, or an additional risk. (The figure contrasts keys for communication with keys for the protection of long-term storage, for concealment and for authentication, symmetrical (MACs) as well as asymmetrical (digital signature): key backup is sensible only for the concealment of long-term storage; everywhere else it is not needed for functionality, but an additional security risk.)

Making Key-Escrow look like a solution for the user's key backup problem is not serious: Two independent problems that are based on different security interests should be kept separate and each solved on its own.

A deeper look into the possibilities of Key-Recovery and its risks can be found in [Weck 98]; [Weck1 98] contains the resulting recommendations.

9.7 Conclusions

Cryptography allows everybody, individual or organisation, to efficiently protect the confidentiality and integrity of his information in areas outside his physical control. If this doesn't happen, thousands of curious system administrators can read the e-mail traffic, and intelligence services can eavesdrop on telecommunications for economic espionage. Without secure cryptography, information can be manipulated at will.

The decentral nature of global digital communication has deprived the government of a single state of its control and protection capabilities. Cryptography gives back to every individual the ability to protect his information on his own. Within an information society, cryptography is everybody's matter, since everybody is affected by its (in)security!

The usage of cryptography becomes clearly more secure if users create the needed keys on their own devices. The often stated argument that key generation and storage of the secret keys is a state affair and must be implemented by state-run Certification Authorities in order to effectively fight crime is unsubstantial, due to the technical facts presented here. The argument that this would serve the users' interests is, examined accurately, inappropriate. This applies to Key-Escrow too.

Regulating cryptographic techniques for concealment with the intent of fighting crime is not useful, since it couldn't be enforced effectively13, but it would damage the security interests of the people, the economy and the society. The supporting regulation of cryptography for securing the integrity and evidential value of digital documents by the Signaturgesetz, in contrast, is welcome.

Reading recommendations

• Further literature on 9.6: [AABB 98]

• Discussion of the crypto controversy in Germany: Wortprotokolle des Wiesbadener Forums Datenschutz am 19. Juni 1997: [HaMo 98]

• A current press overview: Nicolas Reichelt: Verhindert ein Kryptographie-Gesetz; http://www.crypto.de/#press

• About legal regulation attempts of cryptography:

– European Cryptography Resources: http://www.modeemi.cs.tut.fi/~avs/eu-crypto.html

– Bert-Jaap Koops: Crypto Law Survey: http://cwis.kub.nl/~frw/people/koops/lawsurvy.htm

– Ulf Moeller: Rechtliche Situation, politische Diskussion: http://www.thur.de/ulf/krypto/verbot.html

13 [HaMo 98, p. 90] brought up the argument that all the crypto regulation is not about the users of PCs but of telephones. This is obviously not true, since today's cellphones can already be programmed, or performance-hungry multimedia applications can be run on a laptop; then they can be used like normal PCs, and the cellphones could use PGP to encrypt all communications ([HaMo 98, p. 121f]).


10 Design of Multilateral Secure IT Systems


11 Glossary


Bibliography

[AABB 97] Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Peter G. Neumann, Ronald L. Rivest, Jeffery I. Schiller, Bruce Schneier: The Risk of Key Recovery, Key Escrow, and Trusted Third-Party Encryption. The World Wide Web Journal 2/3 (1997) 241-257.

[AABB 98] Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Peter G. Neumann, Ronald L. Rivest, Jeffery I. Schiller, Bruce Schneier: The Risks of Key Recovery, Key Escrow, and Trusted Third Party Encryption. Update June 1998; http://www.cdt.org/crypto/risks98.

[ACGS 84] Werner Alexi, Benny Chor, Oded Goldreich, Claus P. Schnorr: RSA/Rabin Bits are 0.5 + 1/poly(log N) Secure. 25th Symposium on Foundations of Computer Science (FOCS) 1984, IEEE Computer Society, 1984, 449-457.

[ACGS 88] Werner Alexi, Benny Chor, Oded Goldreich, Claus P. Schnorr: RSA and Rabin functions: Certain parts are as hard as the whole. SIAM Journal on Computing 17/2 (1988) 194-209.

[AJSW 97] N. Asokan, Phillipe A. Janson, Michael Steiner, Michael Waidner: The State of the Art in Electronic Payment Systems. Computer 30/9 (1997) 28-35.

[AbJe 87] Marshall D. Abrams, Albert B. Jeng: Network Security: Protocol Reference Model and the Trusted Computer System Evaluation Criteria. IEEE Network 1/2 (1987) 24-33.

[Adam2 92] John A. Adam: Threats and countermeasures. IEEE Spectrum 29/8 (1992) 21-28.

[AlSc 83] Bowen Alpern, Fred B. Schneider: Key exchange Using "Keyless Cryptography". Information Processing Letters 16 (1983) 79-81.

[Alba 83] A. Albanese: Star Network With Collision-Avoidance Circuits. The Bell System Technical Journal 62/3 (1983) 631-638.

[Alke 88] Horst Alke: DATENSCHUTZBEHORDEN: Alles registriert - nur beim Autotelefon? Registrierung durch die DBP beim "normalen Telefon"; Vollspeicherung beim Autotelefondienst. Datenschutz und Datensicherung DuD 12/1 (1988) 4-5.

[Amst 83] Stanford R. Amstutz: Burst Switching - An Introduction. IEEE Communications Magazine 21/8 (1983) 36-42.

[AnKu 96] Ross Anderson, Markus Kuhn: Tamper Resistance - a Cautionary Note. 2nd USENIX Workshop on Electronic Commerce, 1996, 1-12.

[AnLe 81] Tom Anderson, Pete A. Lee: Fault Tolerance - Principles and Practice. Prentice Hall, Englewood Cliffs, New Jersey, 1981.

[AnPe 98] Ross J. Anderson, Fabien A. P. Petitcolas: On the Limits of Steganography. IEEE Journal on Selected Areas in Communications 16/4 (1998) 474-481.

[Ande4 96] Ross Anderson: Stretching the Limits of Steganography. Information Hiding, LNCS 1174, Springer-Verlag, Berlin 1996, 39-48.

[AsSW 97] N. Asokan, Matthias Schunter, Michael Waidner: Optimistic Protocols for Fair Exchange. 4th ACM Conference on Computer and Communications Security, Zurich, April 1997, 6-17.

[BCKK 83] Werner Bux, Felix H. Closs, Karl Kuemmerle, Heinz J. Keller, Hans R. Mueller: Architecture and Design of a Reliable Token-Ring Network.

[BDFK 95] Andreas Bertsch, Herbert Damker, Hannes Federrath, Dogan Kesdogan, Michael J. Schneider: Erreichbarkeitsmanagement. PIK, Praxis der Informationsverarbeitung und Kommunikation 18/3 (1995) 231-234.

[BFD1 95] Bundesbeauftragter fur den Datenschutz (Hrsg.): BFD-Info 1: Bundesdatenschutzgesetz - Text und Erlauterung. 3. Auflage, PF200112, D-53131 Bonn, can be ordered there for free.

[BFD5 98] Bundesbeauftragter fur den Datenschutz (Hrsg.): BFD-Info 5: Datenschutz und Telekommunikation. 2. Auflage, PF200112, D-53131 Bonn, can be ordered there for free.

[BGJK 81] P. Berger, G. Grugelke, G. Jensen, J. Kratzsch, R. Kreibich, H. Pichlmayer, J. P. Spohn: Datenschutz bei rechnerunterstutzten Telekommunikationssystemen. Bundesministerium fur Forschung und Technologie Forschungsbericht DV 81-006 (BMFT-FB-DV 81-006), Institut fur Zukunftsforschung GmbH Berlin, September 1981.

[BGMR 85] Michael Ben-Or, Oded Goldreich, Silvio Micali, Ronald L. Rivest: A Fair Protocol for Signing Contracts. 12th International Colloquium on Automata, Languages and Programming (ICALP), LNCS 194, Springer-Verlag, Berlin 1985, 43-52.

[BMI 89] Bundesministerium des Inneren: Rahmenkonzept zur Gewährleistung der Sicherheit bei Anwendung der Informationstechnik. Datenschutz und Datensicherung DuD /6 (1989) 292-299.

[BMTW 84] Toby Berger, Nader Mehravari, Don Towsley, Jack Wolf: Random Multiple-Access Communication and Group Testing. IEEE Transactions on Communications 32/7 (1984) 769-779.

[Bott 89] Manfred Böttger: Untersuchung der Sicherheit von asymmetrischen Kryptosystemen und MIX-Implementierungen gegen aktive Angriffe. Studienarbeit am Institut für Rechnerentwurf und Fehlertoleranz der Universität Karlsruhe 1989.

[BuPf 87] Holger Bürk, Andreas Pfitzmann: Value Transfer Systems Enabling Security and Unobservability. Interner Bericht 2/87, Fakultät für Informatik, Universität Karlsruhe 1987.

[BuPf 89] Holger Bürk, Andreas Pfitzmann: Digital Payment Systems Enabling Security and Unobservability. Computers & Security 8/5 (1989) 399-416.

[BuPf 90] Holger Bürk, Andreas Pfitzmann: Value Exchange Systems Enabling Security and Unobservability. Computers & Security 9/8 (1990) 715-721.

[Burk 86] Holger Bürk: Digitale Zahlungssysteme und betrugssicherer, anonymer Wertetransfer. Studienarbeit am Institut für Informatik IV, Universität Karlsruhe, April 1986.

[BaGo 84] Friedrich L. Bauer, Gerhard Goos: Informatik, Eine einführende Übersicht. Zweiter Teil; Dritte Auflage, völlig neu bearbeitet und erweitert von F. L. Bauer; Heidelberger Taschenbücher Band 91; Springer-Verlag, Berlin 1984.

[BaHS 92] Dave Bayer, Stuart Haber, W. Scott Stornetta: Improving the Efficiency and Reliability of Digital Time-Stamping. Sequences '91: Methods in Communication, Security, and Computer Sciences, Springer-Verlag, Berlin 1992, 329-334.

[Bara 64] Paul Baran: On Distributed Communications: IX. Security, Secrecy, and Tamper-Free Considerations. Memorandum RM-3765-PR, August 1964, The Rand Corporation, 1700 Main St, Santa Monica, California, 90406. Reprinted in: Lance J. Hoffman (ed.): Security and Privacy in Computer Systems; Melville Publishing Company, Los Angeles, California, 1973, 99-123.

[Baum 99] Birgit Baum-Waidner: Haftungsbeschränkung der digitalen Signatur durch einen Commitment Service. Sicherheitsinfrastrukturen, DuD Fachbeiträge, Vieweg 1999, 65-80.

[BeBE 92] Charles H. Bennett, Gilles Brassard, Artur K. Ekert: Quantum Cryptography. Scientific American 267/4 (1992) 26-33.

[BeBR 88] Charles H. Bennett, Gilles Brassard, Jean-Marc Robert: Privacy amplification by public discussion. SIAM Journal on Computing 17/2 (1988) 210-229.

[BeEG 86] F. Belli, K. Echtle, W. Görke: Methoden und Modelle der Fehlertoleranz. Informatik Spektrum 9/2 (1986) 68-81.

[BeGW 88] Michael Ben-Or, Shafi Goldwasser, Avi Wigderson: Completeness theorems for non-cryptographic fault-tolerant distributed computation. 20th Symposium on Theory of Computing (STOC) 1988, ACM, New York 1988, 1-10.

[BeRo 95] Mihir Bellare, Phillip Rogaway: Optimal Asymmetric Encryption. Eurocrypt '94, LNCS 950, Springer-Verlag, Berlin 1995, 92-111.

[BiSh3 91] Eli Biham, Adi Shamir: Differential Cryptanalysis of DES-like Cryptosystems. Journal of Cryptology 4/1 (1991) 3-72.

[BiSh4 91] Eli Biham, Adi Shamir: Differential Cryptanalysis of the Full 16-round DES. Technical Report Nr. 708, December 1991, Computer Science Department, Technion, Haifa, Israel.

[BiSh 93] Eli Biham, Adi Shamir: Differential Cryptanalysis of the Data Encryption Standard. Springer-Verlag, New York 1993.

[BlBS 86] Lenore Blum, Manuel Blum, Michael Shub: A Simple Unpredictable Pseudo-Random Number Generator. SIAM Journal on Computing 15/2 (1986) 364-383.

[BlFM 88] Manuel Blum, Paul Feldman, Silvio Micali: Non-interactive zero-knowledge and its applications. 20th Symposium on Theory of Computing (STOC) 1988, ACM, New York 1988, 103-112.

[BlMi 84] Manuel Blum, Silvio Micali: How to Generate Cryptographically Strong Sequences of Pseudo-Random Bits. SIAM Journal on Computing 13/4 (1984) 850-864.

[BlPW 91] Gerrit Bleumer, Birgit Pfitzmann, Michael Waidner: A remark on a signature scheme where forgery can be proved. Eurocrypt '90, LNCS 473, Springer-Verlag, Berlin 1991, 441-445.

[Bleu 90] Gerrit Bleumer: Vertrauenswürdige Schlüssel für ein Signatursystem, dessen Brechen beweisbar ist. Studienarbeit am Institut für Rechnerentwurf und Fehlertoleranz der Universität Karlsruhe 1990.

[Bleu 99] Gerrit Bleumer: Biometrische Ausweise. Datenschutz und Datensicherheit DuD 23/3 (1999) 155-158.

[BrDL 91] Jørgen Brandt, Ivan Damgård, Peter Landrock: Speeding up Prime Number Generation. Asiacrypt '91, Abstracts, Fujiyoshida, November 11-14, 1991, 265-269.

[Bras 83] Gilles Brassard: Relativized Cryptography. IEEE Transactions on Information Theory 29/6, 877-894.

[Bras 88] Gilles Brassard: Modern Cryptology — A Tutorial. LNCS 325, Springer-Verlag, Berlin 1988.

[Brau 82] Ewald Braun: BIGFON - der Start für die Kommunikationstechnik der Zukunft. SIEMENS telcom report 5/2 (1982) 123-129.

[Brau 83] Ewald Braun: BIGFON - Erprobung der optischen Breitbandübertragung im Ortsnetz. SIEMENS telcom report 6/2 (1983) 52-53.

[Brau 84] Ewald Braun: Systemversuch BIGFON gestartet. SIEMENS telcom report 7/1 (1984) 9-11.

[Brau 87] E. Braun: BIGFON. Informatik-Spektrum 10/4 (1987) 216-217.

[Bura 88] Axel Burandt: Informationstheoretisch unverkettbare Beglaubigung von Pseudonymen mit beliebigen Signatursystemen. Studienarbeit am Institut für Rechnerentwurf und Fehlertoleranz der Universität Karlsruhe, Mai 1988.

[CBHM 90] David Chaum, Bert den Boer, Eugène van Heyst, Stig Mjølsnes, Adri Steenbeek: Efficient offline electronic checks. Eurocrypt '89, LNCS 434, Springer-Verlag, Berlin 1990, 294-301.

[CTCPEC 92] Canadian System Security Centre; Communications Security Establishment; Government of Canada: The Canadian Trusted Computer Product Evaluation Criteria. April 1992, Version 3.0e.

[CZ 98] Internet-Rechner knacken DES-Code; Alte Entschlüsselungszeit ist halbiert. Computer Zeitung Nr. 10, 5. März 1998, 7.

[ChAn 90] David Chaum, Hans van Antwerpen: Undeniable signatures. Crypto '89, LNCS 435, Springer-Verlag, Berlin 1990, 212-216.

[ChCD1 88] David Chaum, Claude Crépeau, Ivan Damgård: Multiparty unconditionally secure protocols. 20th Symposium on Theory of Computing (STOC) 1988, ACM, New York 1988, 11-19.

[ChDG 88] David Chaum, Ivan B. Damgård, Jeroen van de Graaf: Multiparty Computations ensuring privacy of each party's input and correctness of the result. Crypto '87, LNCS 293, Springer-Verlag, Berlin 1988, 87-119.

[ChEv 87] David Chaum, Jan-Hendrik Evertse: A secure and privacy-protecting protocol for transmitting personal information between organizations. Crypto '86, LNCS 263, Springer-Verlag, Berlin 1987, 118-167.

[ChFN 90] David Chaum, Amos Fiat, Moni Naor: Untraceable Electronic Cash. Crypto '88, LNCS 403, Springer-Verlag, Berlin 1990, 319-327.

[ChRo 91] David Chaum, Sandra Roijakkers: Unconditionally Secure Digital Signatures. Crypto '90, LNCS 537, Springer-Verlag, Berlin 1991, 206-214.

[Cha1 83] David Chaum: Blind Signature System. Crypto '83, Plenum Press, New York 1984, 153.

[Cha1 84] David Chaum: A New Paradigm for Individuals in the Information Age. 1984 IEEE Symposium on Security and Privacy, IEEE Computer Society Press, Washington 1984, 99-103.

[Cha1 88] D. Chaum: Blinding for unanticipated signatures. Eurocrypt '87, LNCS 304, Springer-Verlag, Berlin 1988, 227-233.

[Cha1 89] David Chaum: Online Cash Checks. Eurocrypt '89; Houthalen, 10.-13. April 1989, A Workshop on the Theory and Application of Cryptographic Techniques, Abstracts, 167-170.

[Cha3 85] David Chaum: The Dining Cryptographers Problem. Unconditional Sender Anonymity. Draft, received May 13, 1985.

[Cha8 85] David Chaum: Security without Identification: Transaction Systems to make Big Brother Obsolete. Communications of the ACM 28/10 (1985) 1030-1044.

[Cha9 85] David Chaum: Cryptographic Identification, Financial Transaction, and Credential Device. United States Patent, Patent Number 4,529,870; Date of Patent: Jul. 16, 1985; Filed Jun. 25, 1982.

[Chau2 90] David Chaum: Showing credentials without identification: Transferring signatures between unconditionally unlinkable pseudonyms. Auscrypt '90, LNCS 453, Springer-Verlag, Berlin 1990, 246-264.

[Chau 81] David Chaum: Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms. Communications of the ACM 24/2 (1981) 84-88.

[Chau 83] David Chaum: Blind Signatures for untraceable payments. Crypto '82, Plenum Press, New York 1983, 199-203.

[Chau 84] David Chaum: Design Concepts for Tamper Responding Systems. Crypto '83, Plenum Press, New York 1984, 387-392.

[Chau 85] David Chaum: Showing credentials without identification. Signatures transferred between unconditionally unlinkable pseudonyms. Abstracts of Eurocrypt '85, April 9-11, 1985, Linz, Austria, IACR, General Chairman: Franz Pichler, Program Chairman: Thomas Beth.

[Chau 87] David Chaum: Sicherheit ohne Identifizierung; Scheckkartencomputer, die den Großen Bruder der Vergangenheit angehören lassen. Informatik-Spektrum 10/5 (1987) 262-277; Datenschutz und Datensicherung DuD 12/1 (1988) 26-41.

[Chau 88] David Chaum: The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability. Journal of Cryptology 1/1 (1988) 65-75.

[Chau 89] David Chaum: Privacy Protected Payments - Unconditional Payer and/or Payee Untraceability. SMART CARD 2000: The Future of IC Cards, Proceedings of the IFIP WG 11.6 International Conference; Laxenburg (Austria), 19.-20. 10. 1987, North-Holland, Amsterdam 1989, 69-93.

[Chau 90] David Chaum: The Spymasters double-agent problem: Multiparty computations secure unconditionally from minorities and cryptographically from majorities. Crypto '89, LNCS 435, Springer-Verlag, Berlin 1990, 591-602.

[Chau 92] David Chaum: Achieving Electronic Privacy. Scientific American (August 1992) 96-101.

[ClPf 91] Wolfgang Clesle, Andreas Pfitzmann: Rechnerkonzept mit digital signierten Schnittstellenprotokollen erlaubt individuelle Verantwortungszuweisung. Datenschutz-Berater 14/8-9 (1991) 8-38.

[Clar 88] A. J. Clark: Physical protection of cryptographic devices. Eurocrypt '87, LNCS 304, Springer-Verlag, Berlin 1988, 83-93.

[Clem 85] Rudolf Clemens: Die elektronische Willenserklärung - Chancen und Gefahren. Neue Juristische Wochenschrift NJW /324 (1985) 1998-2005.

[Cles 88] Wolfgang Clesle: Schutz auch vor Herstellern und Betreibern von Informationssystemen. Diplomarbeit am Institut für Rechnerentwurf und Fehlertoleranz, Universität Karlsruhe, Juni 1988.

[CoBi1 95] David A. Cooper, Kenneth P. Birman: The design and implementation of a private message service for mobile computers. Wireless Networks 1 (1995) 297-309.

[CoBi 95] David A. Cooper, Kenneth P. Birman: Preserving Privacy in a Network of Mobile Computers. 1995 IEEE Symposium on Research in Security and Privacy, IEEE Computer Society Press, Los Alamitos 1995, 26-38.

[Cohe1 89] Fred Cohen: Computational Aspects of Computer Viruses. Computers & Security 8/3 (1989) 325-344.

[Cohe2 92] F. B. Cohen: A Formal Definition of Computer Worms and Some Related Results. Computers & Security 11/7 (1992) 641-652.

[Cohe 84] Fred Cohen: Computer Viruses, Theory and Experiments. Proceedings of the 7th National Computer Security Conference, 1984, National Bureau of Standards, Gaithersburg, MD, USA, 240-263.

[Cohe 87] Fred Cohen: Computer Viruses - Theory and Experiments. Computers & Security 6/1 (1987) 22-35.

[Copp2 94] D. Coppersmith: The Data Encryption Standard (DES) and its strength against attacks. IBM Journal of Research and Development 38/3 (1994) 243-250.

[Copp 92] Don Coppersmith: DES and differential cryptanalysis. Appeared in an IBM internal newsgroup (CRYPTAN FORUM).

[Cove 90] Minutes of the First Workshop on Covert Channel Analysis: Overview, Introduction and Welcomes, Retrospective, Worked Examples, Covert Channel Examples, Research Panel, Vendor Panel, Policy Panel, General Discussion, Research Working Group Discussion, Results of the Research Working Group, Policy Working Group Discussion, Results of the Policy Working Group, Summary Discussion. CIPHER Newsletter of the TC on Security & Privacy, IEEE Computer Society (Special Issue, July 1990) 1-35.

[CrSh 98] Ronald Cramer, Victor Shoup: A Practical Public Key Cryptosystem Provably Secure against Adaptive Chosen Ciphertext Attack. Manuscript, Swiss Federal Institute of Technology (ETH), Zurich 1998.

[DES 77] Specification for the Data Encryption Standard. Federal Information Processing Standards Publication 46 (FIPS PUB 46), January 15, 1977.

[DINISO8372 87] DIN ISO 8372: Informationsverarbeitung - Betriebsarten für einen 64-bit-Blockschlüsselungsalgorithmus.

[DaIF 94] Don Davis, Ross Ihaka, Philip Fenstermacher: Cryptographic randomness from air turbulence in disk drives. Crypto '94, LNCS 839, Springer-Verlag, Berlin 1994, 114-120.

[DaPa 83] D. W. Davies, G. I. P. Parkin: The Average Cycle Size of the Key Stream in Output Feedback Encipherment. Cryptography, Proceedings Burg Feuerstein 1982, LNCS 149, Springer-Verlag, Berlin 1983, 263-279.

[DaPr 84] D. W. Davies, W. L. Price: Security for Computer Networks, An Introduction to Data Security in Teleprocessing and Electronic Funds Transfer. John Wiley & Sons, New York 1984.

[DaPr 89] Donald W. Davies, Wyn L. Price: Security for Computer Networks, An Introduction to Data Security in Teleprocessing and Electronic Funds Transfer. (2nd ed.) John Wiley & Sons, New York 1989.

[Damg 88] Ivan Bjerre Damgård: Collision free hash functions and public key signature schemes. Eurocrypt '87, LNCS 304, Springer-Verlag, Berlin 1988, 203-216.

[Damg 92] Ivan Damgård: Towards Practical Public Key Systems Secure Against Chosen Ciphertext Attacks. Crypto '91, LNCS 576, Springer-Verlag, Berlin 1992, 445-456.

[Davi 85] Donald Watts Davies: Apparatus and methods for granting access to computers. UK Patent Application, 2 154 344 A, Application published 4.9.1985.

[Denn 82] Dorothy Denning: Cryptography and Data Security. Addison-Wesley, Reading 1982; reprinted with corrections, January 1983.

[Denn 84] Dorothy E. Denning: Digital Signatures with RSA and Other Public-Key Cryptosystems. Communications of the ACM 27/4 (1984) 388-392.

[Denn 85] Dorothy E. Denning: Commutative Filters for Reducing Inference Threats in Multilevel Database Systems. Proceedings of the 1985 Symposium on Security and Privacy, April 22-24, 1985, Oakland, California, IEEE Computer Society, 134-146.

[DiHe 76] Whitfield Diffie, Martin E. Hellman: New Directions in Cryptography. IEEE Transactions on Information Theory 22/6 (1976) 644-654.

[DiHe 79] Whitfield Diffie, Martin E. Hellman: Privacy and Authentication: An Introduction to Cryptography. Proceedings of the IEEE 67/3 (1979) 397-427.

[Dier3 91] Rüdiger Dierstein: Programm-Manipulationen: Arten, Eigenschaften und Bekämpfung. Deutsche Forschungsanstalt für Luft- und Raumfahrt (DLR), Zentrale Datenverarbeitung, D-8031 Oberpfaffenhofen; Institutsbericht IB 562/6, Juli 1986, Neufassung 1991.

[Dobb 98] Hans Dobbertin: Cryptanalysis of MD4. Journal of Cryptology 11/4 (1998) 253-271.

[DuD3 99] DuD Report: 57. Konferenz des DSB des Bundes und der Länder vom 25./26. März 1999: Entschließungen. Datenschutz und Datensicherheit DuD 23/5 (1999) 301-305.

[DuDR2 92] DuD Report: Dark Avenger Mutation Engine. Datenschutz und Datensicherung DuD 16/8 (1992) 442-443.

[DuD 86] DuD: AKTUELL. Datenschutz und Datensicherung DuD 10/1 (1986) 54-56.

[EFF 98] Electronic Frontier Foundation (EFF): "EFF DES cracker" machine brings honesty to crypto debate - Electronic Frontier Foundation proves that DES is not secure. Press Release, San Francisco, July 17, 1998.

[EMMT 78] W. F. Ehrsam, S. M. Matyas, C. H. Meyer, W. L. Tuchman: A cryptographic key management scheme for implementing the Data Encryption Standard. IBM Systems Journal 17/2 (1978) 106-125.

[Eber 98] Ulrich Eberl: Der unverlierbare Schlüssel. Forschung und Innovation — Die SIEMENS-Zeitschrift für Wissenschaft und Technik /1 (1998) 14-19.

[EcGM 83] Klaus Echtle, Winfried Görke, Michael Marhöfer: Zur Begriffsbildung bei der Beschreibung von Fehlertoleranz-Verfahren. Universität Karlsruhe, Fakultät für Informatik, Institut für Informatik IV, Interner Bericht Nr. 6/83, Mai 1983.

[Echt 90] Klaus Echtle: Fehlertoleranzverfahren. Studienreihe Informatik, Springer-Verlag, Berlin 1990.

[ElGa 85] Taher ElGamal: A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. IEEE Transactions on Information Theory 31/4 (1985) 469-472.

[EnHa 96] Erin English, Scott Hamilton: Network Security Under Siege; The Timing Attack. Computer 29/3 (1996) 95-97.

[EvGL 85] Shimon Even, Oded Goldreich, Abraham Lempel: A Randomized Protocol for Signing Contracts. Communications of the ACM 28/6 (1985) 637-647.

[EvGY 84] S. Even, O. Goldreich, Y. Yacobi: Electronic Wallet. 1984 International Zurich Seminar on Digital Communications, Applications of Source Coding, Channel Coding and Secrecy Coding, March 6-8, 1984, Zurich, Switzerland, Swiss Federal Institute of Technology, Proceedings IEEE Catalog no. 84CH1998-4, 199-201.

[Even 89] Shimon Even: Secure Off-Line Electronic Fund Transfer Between Nontrusting Parties. SMART CARD 2000: The Future of IC Cards, Proceedings of the IFIP WG 11.6 International Conference; Laxenburg (Austria), 19.-20. 10. 1987, North-Holland, Amsterdam 1989, 57-66.

[FGJP 98] Elke Franz, Andreas Graubner, Anja Jerichow, Andreas Pfitzmann: Comparison of Commitment Schemes Used in Mix-Mediated Anonymous Communication for Preventing Pool-Mode Attacks. 3rd Australasian Conference on Information Security and Privacy (ACISP '98), LNCS 1438, Springer-Verlag, Berlin 1998, 111-122.

[FJKPS 96] Hannes Federrath, Anja Jerichow, Dogan Kesdogan, Andreas Pfitzmann, Otto Spaniol: Mobilkommunikation ohne Bewegungsprofile. it+ti 38/4, 24-29. Republished in: G. Müller, A. Pfitzmann (Hrsg.): Mehrseitige Sicherheit in der Kommunikationstechnik, Addison-Wesley-Longman 1997, 169-180.

[FJKP 95] Hannes Federrath, Anja Jerichow, Dogan Kesdogan, Andreas Pfitzmann: Security in Public Mobile Communication Networks. Proc. of the IFIP TC 6 International Workshop on Personal Wireless Communications, Verlag der Augustinus Buchhandlung, Aachen 1995, 105-116.

[FJMPS 96] Elke Franz, Anja Jerichow, Steffen Möller, Andreas Pfitzmann, Ingo Stierand: Computer Based Steganography: How it works and why therefore any restrictions on cryptography are nonsense, at best. Information Hiding, First International Workshop, Cambridge, UK, May/June 1996, LNCS 1174, Springer-Verlag, Heidelberg 1996, 7-21.

[FJMP 97] Hannes Federrath, Anja Jerichow, Jan Müller, Andreas Pfitzmann: Unbeobachtbarkeit in Kommunikationsnetzen. Verläßliche IT-Systeme, GI-Fachtagung VIS '97, DuD Fachbeiträge, Vieweg 1997, 191-210.

[FJPP 95] Hannes Federrath, Anja Jerichow, Andreas Pfitzmann, Birgit Pfitzmann: Mehrseitig sichere Schlüsselerzeugung. Proc. Arbeitskonferenz Trust Center 95, DuD Fachbeiträge, Vieweg 1995, 117-131.

[FMSS 96] S. M. Furnell, J. P. Morrisey, P. W. Sanders, C. T. Stockel: Applications of keystroke analysis for improved login security and continuous user authentication. 12th IFIP International Conference on Information Security (IFIP/Sec '96), Chapman & Hall, London 1996, 283-294.

[FO 87] FO: Ein Fall für Prometheus. ADAC motorwelt /4 (1987) 50-53.

[FaLa 75] David J. Farber, Kenneth C. Larson: Network Security Via Dynamic Process Renaming. Fourth Data Communications Symposium, 7-9 October 1975, Quebec City, Canada, 8-13 - 8-18.

[FeJP 96] Hannes Federrath, Anja Jerichow, Andreas Pfitzmann: Mixes in mobile communication systems: Location management with privacy. Information Hiding, LNCS 1174, Springer-Verlag, Berlin 1996, 121-135.

[FePf 97] Hannes Federrath, Andreas Pfitzmann: Bausteine zur Realisierung mehrseitiger Sicherheit. in: Günter Müller, Andreas Pfitzmann (Hrsg.): Mehrseitige Sicherheit in der Kommunikationstechnik, Addison-Wesley-Longman 1997, 83-104.

[FePf 99] Hannes Federrath, Andreas Pfitzmann: Stand der Sicherheitstechnik. in: Kubicek et al. (Hrsg.): Multimedia@Verwaltung, Jahrbuch Telekommunikation und Gesellschaft 1999, Hüthig, 124-132.

[Fede 98] Hannes Federrath: Vertrauenswürdiges Mobilitätsmanagement in Telekommunikationsnetzen. Dissertation, TU Dresden, Fakultät Informatik, Februar 1998.

[Fede 99] Hannes Federrath: Sicherheit mobiler Kommunikation. DuD Fachbeiträge, Vieweg 1999, 1-263.

[FiSh 87] Amos Fiat, Adi Shamir: How to Prove Yourself: Practical Solutions to Identification and Signature Problems. Crypto '86, LNCS 263, Springer-Verlag, Berlin 1987, 186-194.

[FoPf 91] Dirk Fox, Birgit Pfitzmann: Effiziente Software-Implementierung des GMR-Signatursystems. Proc. Verläßliche Informationssysteme (VIS '91), Informatik-Fachberichte 271, Springer-Verlag, Berlin 1991, 329-345.

[Folt 87] Hal Folts: Open Systems Standards. IEEE Network 1/2 (1987) 45-46.

[FrJP 97] Elke Franz, Anja Jerichow, Andreas Pfitzmann: Systematisierung und Modellierung von Mixen. Verläßliche IT-Systeme, GI-Fachtagung VIS '97, DuD Fachbeiträge, Vieweg 1997, 171-190.

[FrJW 98] Elke Franz, Anja Jerichow, Guntram Wicke: A Payment Scheme for Mixes Providing Anonymity. Trends in Distributed Systems for Electronic Commerce (TREC), LNCS 1402, Springer-Verlag, Berlin 1998, 94-108.

[FrPf 98] Elke Franz, Andreas Pfitzmann: Einführung in die Steganographie und Ableitung eines neuen Stegoparadigmas. Informatik-Spektrum 21/4 (1998) 183-193.

[GGKL 89] Morrie Gasser, Andy Goldstein, Charlie Kaufman, Butler Lampson: The Digital Distributed System Security Architecture. Proc. 12th National Computer Security Conference 1989, 305-319.

[GGPS 97] Günther Gattung, Rüdiger Grimm, Ulrich Pordesch, Michael J. Schneider: Persönliche Sicherheitsmanager in der virtuellen Welt. in: Günter Müller, Andreas Pfitzmann (Hrsg.): Mehrseitige Sicherheit in der Kommunikationstechnik, Addison-Wesley-Longman 1997, 181-205.

[GKLR 92] M. Gasser, C. Kaufman, J. Linn, Y. Le Roux, J. Tardo: DASS: Distributed Authentication Security Service. Education and Society, R. Aiken (ed.), Proc. 12th IFIP World Computer Congress 1992, Information Processing 92, Vol. II, Elsevier Science Publishers B.V. (North-Holland), 1992, 447-456.

[GaJo 80] Michael R. Garey, David S. Johnson: Computers and Intractability - A Guide to the Theory of NP-Completeness. 2nd Printing, W. H. Freeman and Company, New York 1980.

[GiLB 85] David K. Gifford, John M. Lucassen, Stephen T. Berlin: The Application of Digital Broadcast Communication to Large Scale Information Systems. IEEE Journal on Selected Areas in Communications 3/3 (1985) 457-467.

[GiMS 74] E. N. Gilbert, F. J. MacWilliams, N. J. A. Sloane: Codes which detect deception. The Bell System Technical Journal 53/3 (1974) 405-424.

[GlVi 88] Cezary Glowacz, Heinrich Theodor Vierhaus: Der Las-Vegas-Chip. Der GMD-Spiegel 18/2-3 (1988) 6-8.

[GoGM 86] Oded Goldreich, Shafi Goldwasser, Silvio Micali: How to construct random functions. Journal of the ACM 33/4 (1986) 792-807.

[GoKi 86] Shafi Goldwasser, Joe Kilian: Almost All Primes Can be Quickly Certified. 18th Symposium on Theory of Computing (STOC) 1986, ACM, New York 1986, 316-329.

[GoMR 88] Shafi Goldwasser, Silvio Micali, Ronald L. Rivest: A Digital Signature Scheme Secure Against Adaptive Chosen-Message Attacks. SIAM Journal on Computing 17/2 (1988) 281-308.

[GoMT 82] Shafi Goldwasser, Silvio Micali, Po Tong: Why and How to Establish a Private Code on a Public Network. 23rd Symposium on Foundations of Computer Science (FOCS) 1982, IEEE Computer Society, 1982, 134-144.

[Gross 91] Michael Groß: Vertrauenswürdiges Booten als Grundlage authentischer Basissysteme. Proc. Verläßliche Informationssysteme (VIS '91), Informatik-Fachberichte 271, Springer-Verlag, Berlin 1991, 190-207.

[HoPf 85] Günter Hockel, Andreas Pfitzmann: Untersuchung der Datenschutzeigenschaften von Ringzugriffsmechanismen. 1. GI Fachtagung Datenschutz und Datensicherung im Wandel der Informationstechnologien, IFB 113, Springer-Verlag, Berlin 1985, 113-127.

[Hock 85] Günter Hockel: Untersuchung der Datenschutzeigenschaften von Ringzugriffsmechanismen. Diplomarbeit am Institut für Informatik IV, Universität Karlsruhe, August 1985.

[HaMo 98] Rainer Hamm, Klaus Peter Möller (Hrsg.): Datenschutz durch Kryptographie - ein Sicherheitsrisiko? Nomos Verlagsgesellschaft, Baden-Baden 1998.

[HaSt 91] Stuart Haber, W. Scott Stornetta: How to Time-Stamp a Digital Document. Journal of Cryptology 3/2 (1991) 99-111.

[Hast 86] Johan Håstad: On Using RSA with low exponent in a public key network. Crypto '85, LNCS 218, Springer-Verlag, Berlin 1986, 403-408.

[Hast 88] Johan Håstad: Solving simultaneous modular equations of low degree. SIAM Journal on Computing 17/2 (1988) 336-362.

[HeKW 85] Franz-Peter Heider, Detlef Kraus, Michael Welschenbach: Mathematische Methoden der Kryptoanalyse. DuD Fachbeiträge 8, Vieweg 1985.

[HePe 93] Eugène van Heyst, Torben Pryds Pedersen: How to Make Efficient Fail-stop Signatures. Eurocrypt '92, LNCS 658, Springer-Verlag, Berlin 1993, 366-377.

[Herd 85] Siegfried Herda: Authenticity, Anonymity and Security in OSIS. An Open System for Information Services. Proceedings der 1. GI Fachtagung Datenschutz und Datensicherung im Wandel der Informationstechnologien, München, Oktober 1985, herausgegeben von P. P. Spies, IFB 113, Springer-Verlag, Berlin 1985, 35-50.

[Hopk 89] John Hopkinson: Information Security - "The Third Facet". Proc. 1st Annual Canadian Computer Security Conference, 30 January - 1 February 1989, Ottawa, Canada, hosted by The System Security Centre, Communications Security Establishment, Government of Canada, 109-129.

[Horg 85] John Horgan: Thwarting the information thieves. IEEE spectrum 22/7 (1985) 30-41.

[Hors 85] Patrick Horster: Kryptologie. Reihe Informatik/47, herausgegeben von K. H. Böhling, U. Kulisch, H. Maurer, Bibliographisches Institut, Mannheim 1985.

[HuPf2 96] Michaela Huhn, Andreas Pfitzmann: Krypto(de)regulierung. DatenschutzNachrichten (DANA), Herausgeber: Deutsche Vereinigung für Datenschutz e. V., ISSN 0173-7767, 19/6 (1996) 4-13.

[HuPf 96] Michaela Huhn, Andreas Pfitzmann: Technische Randbedingungen jeder Kryptoregulierung. Datenschutz und Datensicherheit DuD 20/1 (1996) 23-26.

[HuPf 98] Michaela Huhn, Andreas Pfitzmann: Verschlüsselungstechniken für das Netz. in: Claus Leggewie, Christa Maar (Hrsg.): Internet & Politik, Bollmann-Verlag, Köln 1998, 438-456.

[ISO7498-2 87] ISO: Information processing systems — Open Systems Interconnection Reference Model — Part 2: Security architecture. DRAFT INTERNATIONAL STANDARD ISO/DIS 7498-2; ISO TC 97, submitted on 1987-06-18.

[ISO8731-1 87] ISO: Banking — Approved algorithms for message authentication — Part 1: DEA. ISO International Standard, First edition 01.06.1987.

[ITG 87] ITG-Empfehlungen: ISDN-Begriffe. Nachrichtentechnische Zeitschrift ntz 40/11 (1987) 814-819.

[ITSEC2 91] European Communities - Commission: ITSEC: Information Technology Security Evaluation Criteria. (Provisional Harmonised Criteria, Version 1.2, 28 June 1991) Office for Official Publications of the European Communities, Luxembourg 1991 (ISBN 92-826-3004-8).

[ITSEC2d 91] Europäische Gemeinschaften - Kommission: Kriterien für die Bewertung der Sicherheit von Systemen der Informationstechnik (ITSEC). (Vorläufige Form der harmonisierten Kriterien, Version 1.2, 28. Juni 1991) Amt für amtliche Veröffentlichungen der Europäischen Gemeinschaften, Luxemburg 1991 (ISBN 92-826-3003-X); erschien auch im Bundesanzeiger, herausgegeben vom Bundesminister der Justiz.

[JMPP 98] Anja Jerichow, Jan Müller, Andreas Pfitzmann, Birgit Pfitzmann, Michael Waidner: Real-Time Mixes: A Bandwidth-Efficient Anonymity Protocol. IEEE Journal on Selected Areas in Communications 16/4 (1998) 495-509.

[KFBG 90] T. Kropf, J. Froßl, W. Beller, T. Giesler: A Hardware Implementation of a Modified DES-Algorithm. Microprocessing and Microprogramming 30 (1990) 59-65.

[Kohl 86] Helmut Köhler: Die Problematik automatisierter Rechtsvorgänge, insbesondere von Willenserklärungen (Teil I). Datenschutz und Datensicherung DuD 10/6 (1986) 337-344.

[Kahn 96] David Kahn: The History of Steganography. Information Hiding, First International Workshop, Cambridge, UK, May/June 1996, LNCS 1174, Springer-Verlag, Heidelberg 1996, 1-5.

[Karg 77] Paul A. Karger: Non-Discretionary Access Control for Decentralized Computing Systems. Master Thesis, Massachusetts Institute of Technology, Laboratory for Computer Science, 545 Technology Square, Cambridge, Massachusetts 02139, May 1977, Report MIT/LCS/TR-179.

[KeMM 83] Heinz J. Keller, Heinrich Meyr, Hans R. Müller: Transmission Design Criteria for a Synchronous Token Ring. IEEE Journal on Selected Areas in Communications 1/5 (1983) 721-733.

[Ken1 81] Stephen T. Kent: Security Requirements and Protocols for a Broadcast Scenario. IEEE Transactions on Communications 29/6 (1981) 778-786.

[Kesd 99] Dogan Kesdogan: Vertrauenswürdige Kommunikation in offenen Umgebungen. Dissertation, RWTH Aachen, Mathematisch-Naturwissenschaftliche Fakultät, 1999.

[KlPi 97] Herbert Klimant, Rudi Piotraschke: Informationstheoretische Bewertung steganographischer Konzelationssysteme. Verläßliche IT-Systeme, GI-Fachtagung VIS '97, DuD Fachbeiträge, Vieweg 1997, 225-232.

[Klei 75] Leonard Kleinrock: Queueing Systems - Volume I: Theory. John Wiley and Sons, New York 1975.

[Knut 97] Donald E. Knuth: The Art of Computer Programming, Vol. 2: Seminumerical Algorithms (3rd ed.). Addison-Wesley, Reading 1997.

[Koch 98] Paul C. Kocher: On Certificate Revocation and Validation. 2nd International Conference on Financial Cryptography (FC '98), LNCS 1465, Springer-Verlag, Berlin 1998, 172-177.

[Kran 86] Evangelos Kranakis: Primality and Cryptography. Wiley-Teubner Series in Computer Science, B. G. Teubner, Stuttgart 1986.

[Krat 84] Herbert Krath: Stand und weiterer Ausbau der Breitbandverteilnetze. telematica 84, 18.-20. Juni 1984, Stuttgart, Kongreßband Teil 2: Breitbandkommunikation, Herausgeber: W. Kaiser, VDE-Verlag GmbH, Neue Mediengesellschaft Ulm mbH, 3-17.

[Lune 87] Heinz Lüneburg: Kleine Fibel der Arithmetik. Lehrbücher und Monographien zur Didaktik der Mathematik, Band 8, BI Wissenschaftsverlag, Mannheim 1987.

[LaMa 92] X. Lai, J. L. Massey: Hash functions based on block ciphers. Eurocrypt '92, Extended Abstracts, 24.-28. Mai 1992, Balatonfüred, Ungarn, 53-66.

[LaSP 82] Leslie Lamport, Robert Shostak, Marshall Pease: The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems 4/3 (1982) 382-401.

[Lamp 73] Butler W. Lampson: A Note on the Confinement Problem. Communications of the ACM 16/10 (1973) 613-615.

[LeMa1 89] Arjen K. Lenstra, Mark S. Manasse: Factoring - where are we now? IACR Newsletter, A Publication of the International Association for Cryptologic Research, 6/2 (1989) 4-5.

[LeMa1 90] Arjen K. Lenstra, Mark S. Manasse: Factoring by electronic mail. Eurocrypt '89, LNCS 434, Springer-Verlag, Berlin 1990, 355-371.

[Leib 99] Otto Leiberich: Vom diplomatischen Code zur Falltürfunktion. Spektrum der Wissenschaft /3 (1999) 106-108.

[LiFl 83] John O. Limb, Lois E. Flamm: A Distributed Local Area Network Packet Protocol for Combined Voice and Data Transmission. IEEE Journal on Selected Areas in Communications 1/5 (1983) 926-934.

[Lips 81] John D. Lipson: Elements of algebra and algebraic computing. Addison-Wesley Publishing Company, Advanced Book Program; Reading, Mass. 1981.

[LuPW 91] Jörg Lukat, Andreas Pfitzmann, Michael Waidner: Effizientere fail-stop Schlüsselerzeugung für das DC-Netz. Datenschutz und Datensicherung DuD 15/2 (1991) 71-75.

[LuRa 89] Michael Luby, Charles Rackoff: A Study of Password Security. Journal of Cryptology 1/3 (1989) 151-158.

[MoPS 94] Steffen Möller, Andreas Pfitzmann, Ingo Stierand: Rechnergestützte Steganographie: Wie sie funktioniert und warum folglich jede Reglementierung von Verschlüsselung unsinnig ist. Datenschutz und Datensicherung DuD 18/6 (1994) 318-326.

[MuPf 97] Günter Müller, Andreas Pfitzmann (eds.): Mehrseitige Sicherheit in der Kommunikationstechnik - Verfahren, Komponenten, Integration. Addison-Wesley-Longman Verlag GmbH, 1997, 5-579.

[MaPf 87] Andreas Mann, Andreas Pfitzmann: Technischer Datenschutz und Fehlertoleranz in Kommunikationssystemen. Kommunikation in Verteilten Systemen, GI/NTG-Fachtagung, Aachen 1987, IFB 130, Springer-Verlag, Berlin 1987, 16-30; Überarbeitung in: Datenschutz und Datensicherung DuD /8 (1987) 393-405.

[Mann 85] Andreas Mann: Fehlertoleranz und Datenschutz in Ringnetzen. Diplomarbeit am Institut für Informatik IV, Universität Karlsruhe, Oktober 1985.

[Marc 88] Eckhard Marchel: Leistungsbewertung von überlagerndem Empfangen bei Mehrfachzugriffsverfahren mittels Kollisionsauflösung. Diplomarbeit am Institut für Rechnerentwurf und Fehlertoleranz, Universität Karlsruhe, April 1988.

[MeMa 82] Carl H. Meyer, Stephen M. Matyas: Cryptography - A New Dimension in Computer Data Security. (3rd printing) John Wiley & Sons, 1982.

[MeOV 97] Alfred J. Menezes, Paul C. van Oorschot, Scott A. Vanstone: Handbook of Applied Cryptography. CRC Press, Boca Raton 1997.

[Merr 83] Michael John Merritt: Cryptographic Protocols. Ph. D. Dissertation, School of Information and Computer Science, Georgia Institute of Technology, February 1983.

[Meye 81] Meyers Lexikonredaktion: Meyers Großes Taschenlexikon in 24 Bänden (Lizenzausgabe "Farbiges Großes Volkslexikon", Mohndruck, Gütersloh). B.I.-Taschenbuchverlag, Mannheim 1981.

[Meye 87] Meyers Lexikonredaktion: Meyers Großes Taschenlexikon in 24 Bänden. 2. neu bearbeitete Auflage, B.I.-Taschenbuchverlag, Mannheim 1987.

[Mill3 94] Benjamin Miller: Vital signs of identity. IEEE spectrum 31/2 (1994) 22-30.

[Mt2] Matthäus 2,16; The Bible.

[NaYu 90] Moni Naor, Moti Yung: Public-key Cryptosystems Provably Secure against Chosen Ciphertext Attacks. 22nd Symposium on Theory of Computing (STOC) 1990, ACM, New York 1990, 427-437.

[NeKe 99] Heike Neumann, Volker Kessler: Formale Analyse von kryptographischen Protokollen mit BAN-Logik. Datenschutz und Datensicherheit DuD 23/2 (1999) 90-93.

[Nied 87] Arnold Niedermaier: Bewertung von Zuverlässigkeit und Senderanonymität einer fehlertoleranten Kommunikationsstruktur. Diplomarbeit am Institut für Rechnerentwurf und Fehlertoleranz, Universität Karlsruhe, September 1987.

[OkOh1 89] Tatsuaki Okamoto, Kazuo Ohta: Disposable Zero-knowledge Authentications and Their Applications to Untraceable Electronic Cash. Crypto '89, August 20-24, 1989, Abstracts, 443-458.

[OkOh 89] Tatsuaki Okamoto, Kazuo Ohta: Divertible Zero Knowledge Interactive Proofs and Commutative Random Self-Reducibility. Eurocrypt '89; Houthalen, 10.-13. April 1989, A Workshop on the Theory and Application of Cryptographic Techniques, Abstracts, 95-108.

[PPSW 95] Andreas Pfitzmann, Birgit Pfitzmann, Matthias Schunter, Michael Waidner: Vertrauenswürdiger Entwurf portabler Benutzerendgeräte und Sicherheitsmodule. Verläßliche IT-Systeme, GI-Fachtagung VIS '95, DuD Fachbeiträge, Vieweg, Braunschweig 1995, 329-350.

[PPSW 97] Andreas Pfitzmann, Birgit Pfitzmann, Matthias Schunter, Michael Waidner: Trusting Mobile User Devices and Security Modules. Computer 30/2 (1997) 61-68.

[PSWW 96] Andreas Pfitzmann, Alexander Schill, Guntram Wicke, Gritta Wolf, Jan Zöllner: 1. Zwischenbericht SSONET. http://mephisto.inf.tu-dresden.de/RESEARCH/ssonet/reports.html, 31.08.96.

[PSWW 98] Andreas Pfitzmann, Alexander Schill, Andreas Westfeld, Guntram Wicke, Gritta Wolf, Jan Zöllner: A Java-Based Distributed Platform for Multilateral Security. Trends in Distributed Systems for Electronic Commerce (TREC), LNCS 1402, Springer-Verlag, Berlin 1998, 52-64.

[PWP 90] Birgit Pfitzmann, Michael Waidner, Andreas Pfitzmann: Rechtssicherheit trotz Anonymität in offenen digitalen Systemen. Datenschutz und Datensicherung DuD 14/5-6 (1990) 243-253, 305-315.

[Papa 86] Stavros Papastavridis: Upper and Lower Bounds for the Reliability of a Consecutive-k-out-of-n:F System. IEEE Transactions on Reliability 35/5 (1986) 607-610.

[Paul 78] Wolfgang J. Paul: Komplexitätstheorie. Teubner Studienbücher Informatik, Band 39, B. G. Teubner Verlag, Stuttgart 1978.

[Pede 96] Torben P. Pedersen: Electronic Payments of Small Amounts. Security Protocols 1996, LNCS 1189, Springer-Verlag, Berlin 1997, 59-68.

[PfAß1 90] Andreas Pfitzmann, Ralf Aßmann: More Efficient Software Implementations of (Generalized) DES. Interner Bericht 18/90, Fakultät für Informatik, Universität Karlsruhe 1990.

[PfAß 93] Andreas Pfitzmann, Ralf Aßmann: More Efficient Software Implementations of (Generalized) DES. Computers & Security 12/5 (1993) 477-500.

[PfPW1 89] Andreas Pfitzmann, Birgit Pfitzmann, Michael Waidner: Telefon-MIXe: Schutz der Vermittlungsdaten für zwei 64-kbit/s-Duplexkanäle über den (2 × 64 + 16)-kbit/s-Teilnehmeranschluß. Datenschutz und Datensicherung DuD 13/12 (1989) 605-622.

[PfPW 89] Andreas Pfitzmann, Birgit Pfitzmann, Michael Waidner: Garantierter Datenschutz für zwei 64-kbit/s-Duplexkanäle über den (2 × 64 + 16)-kbit/s-Teilnehmeranschluß durch Telefon-MIXe. Tagungsband 3 der 4. SAVE-Tagung, 19.-21. April 1989, Köln, 1417-1447.

[PfPf 90] Birgit Pfitzmann, Andreas Pfitzmann: How to Break the Direct RSA-Implementation of MIXes. Eurocrypt '89, LNCS 434, Springer-Verlag, Berlin 1990, 373-381.

[PfPf 92] Andreas Pfitzmann, Birgit Pfitzmann: Technical Aspects of Data Protection in Health Care Informatics. Advances in Medical Informatics, J. Noothoven van Goor and J. P. Christensen (Eds.), IOS Press, Amsterdam 1992, 368-386.

[PfWa2 97] Birgit Pfitzmann, Michael Waidner: Strong Loss Tolerance of Electronic Coin Systems. ACM Transactions on Computer Systems 15/2 (1997) 194-213.

[PfWa 86] Andreas Pfitzmann, Michael Waidner: Networks without user observability – design options. Eurocrypt '85, LNCS 219, Springer-Verlag, Berlin 1986, 245-253.

[PfWa 91] Birgit Pfitzmann, Michael Waidner: Fail-stop-Signaturen und ihre Anwendung. Proc. Verläßliche Informationssysteme (VIS '91), Informatik-Fachberichte 271, Springer-Verlag, Berlin 1991, 289-301.

[Pfi1 83] Andreas Pfitzmann: Ein dienstintegriertes digitales Vermittlungs-/Verteilnetz zur Erhöhung des Datenschutzes. Fakultät für Informatik, Universität Karlsruhe, Interner Bericht 18/83, Dezember 1983.

[Pfi1 85] Andreas Pfitzmann: How to implement ISDNs without user observability - Some remarks. Fakultät für Informatik, Universität Karlsruhe, Interner Bericht 14/85.

[Pfit8 96] Birgit Pfitzmann: Digital Signature Schemes - General Framework and Fail-Stop Signatures. LNCS 1100, Springer-Verlag, Berlin 1996.

[Pfit 83] Andreas Pfitzmann: Ein Vermittlungs-/Verteilnetz zur Erhöhung des Datenschutzes in Bildschirmtext-ähnlichen Neuen Medien. GI '83, 13. Jahrestagung der Gesellschaft für Informatik, 3. bis 7. Oktober 1983, Universität Hamburg, IFB 73, Springer-Verlag, Berlin, 411-418.

[Pfit 85] Andreas Pfitzmann: Technischer Datenschutz in diensteintegrierenden Digitalnetzen - Problemanalyse, Lösungsansätze und eine angepaßte Systemstruktur. Proceedings der 1. GI Fachtagung Datenschutz und Datensicherung im Wandel der Informationstechnologien, München, Oktober 1985, herausgegeben von P. P. Spies, IFB 113, Springer-Verlag, Berlin 1985, 96-112.

[Pfit 86] Andreas Pfitzmann: Die Infrastruktur der Informationsgesellschaft: Zwei getrennte Fernmeldenetze beibehalten oder ein wirklich datengeschütztes errichten? Datenschutz und Datensicherung DuD 10/6 (1986) 353-359.

[Pfit 89] Andreas Pfitzmann: Diensteintegrierende Kommunikationsnetze mit teilnehmerüberprüfbarem Datenschutz. Universität Karlsruhe, Fakultät für Informatik, Dissertation, Feb. 1989, IFB 234, Springer-Verlag, Berlin 1990.

[Pfit 93] Birgit Pfitzmann: Sorting Out Signature Schemes - and some Theory of Secure Reactive Systems. Extended Abstract, Institut für Informatik, Universität Hildesheim, 4.1.1993; submitted for publication.

[PoKl 78] G. J. Popek, C. S. Kline: Design Issues for Secure Computer Networks. Operating Systems, An Advanced Course, edited by R. Bayer, R. M. Graham, G. Seegmüller; LNCS 60, 1978; Nachgedruckt als Springer Study Edition, 1979; Springer-Verlag, Berlin, 517-546.

[PoSc 97] Ulrich Pordesch, Michael J. Schneider: Sichere Kommunikation in der Gesundheitsversorgung. in: Günter Müller, Andreas Pfitzmann (Hrsg.): Mehrseitige Sicherheit in der Kommunikationstechnik, Addison-Wesley-Longman 1997, 61-80.

[RDFR 97] Martin Reichenbach, Herbert Damker, Hannes Federrath, Kai Rannenberg: Individual Management of Personal Reachability in Mobile Communication. 13th IFIP International Conference on Information Security (IFIP/Sec '97), Proc., Chapman & Hall, London 1997, 164-174.

[RSA 78] Ronald L. Rivest, Adi Shamir, Leonard Adleman: A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Communications of the ACM 21/2 (1978) 120-126; reprinted: 26/1 (1983) 96-99.

[RWHP 89] Alexander Roßnagel, Peter Wedde, Volker Hammer, Ulrich Pordesch: Die Verletzlichkeit der "Informationsgesellschaft". Sozialverträgliche Technikgestaltung Band 5, Westdeutscher Verlag, Opladen 1989.

[RaPM 96] Kai Rannenberg, Andreas Pfitzmann, Günter Müller: Sicherheit, insbesondere mehrseitige IT-Sicherheit. it+ti 38/4 (1996) 7-10.

[Rann 94] Kai Rannenberg: Kolleg "Sicherheit in der Kommunikationstechnik" eingerichtet. Manuskript, V. 2.0, 9.9.1994, per E-Mail erhalten am 28.11.1994.

[Rann 97] Kai Rannenberg: Tragen Zertifizierung und Evaluationskriterien zu mehrseitiger Sicherheit bei? in: Günter Müller, Andreas Pfitzmann (Hrsg.): Mehrseitige Sicherheit in der Kommunikationstechnik, Addison-Wesley-Longman 1997, 527-562.

[Rede 84] Helmut Redeker: Geschäftsabwicklung mit externen Rechnern im Bildschirmtextdienst. Neue Juristische Wochenschrift NJW /42 (1984) 2390-2394.

[Rede 86] Helmut Redeker: Rechtliche Probleme von Mailbox-Systemen. GI - 16. Jahrestagung II, "Informatik-Anwendungen - Trends und Perspektiven", Berlin, Oktober 1986, Proceedings herausgegeben von G. Hommel und S. Schindler, IFB 127, Springer-Verlag, Berlin 1986, 253-266.

[Rei1 86] Peter O'Reilly: Burst and Fast-Packet Switching: Performance Comparisons. IEEE INFOCOM '86, Fifth Annual Conference "Computers and Communications Integration - Design, Analysis, Management", April 8-10, 1986, Miami, Florida, 653-666.

[Riel 99] Herman te Riele: Factorization of a 512-bit RSA key using the Number Field Sieve. E-mail of Aug. 26, 1999.

[Riha2 96] Karl Rihaczek: Die Kryptokontroverse: das Normungsverbot. Datenschutz und Datensicherheit DuD 20/1 (1996) 15-22.

[Riha 84] Karl Rihaczek: Datenverschlüsselung in Kommunikationssystemen - Möglichkeiten und Bedürfnisse. DuD Fachbeiträge 6, Vieweg 1984.

[Riha 85] Karl Rihaczek: Der Stand von OSIS. Datenschutz und Datensicherung DuD 9/4 (1985) 213-217.

[Rive 98] Ronald L. Rivest: Chaffing and Winnowing: Confidentiality without Encryption. MIT Lab for Computer Science, March 18, 1998.

[RoSc 96] Alexander Roßnagel, Michael J. Schneider: Anforderungen an die mehrseitige Sicherheit in der Gesundheitsversorgung und ihre Erhebung. it+ti 38/4 (1996) 15-19.

[Roch 87] Edouard Y. Rocher: Information Outlet, ULAN versus ISDN. IEEE Communications Magazine 25/4 (1987) 18-32.

[Rose 85] K. H. Rosenbrock: ISDN - Die folgerichtige Weiterentwicklung des digitalisierten Fernsprechnetzes für das künftige Dienstleistungsangebot der Deutschen Bundespost. GI/NTG-Fachtagung "Kommunikation in Verteilten Systemen - Anwendungen, Betrieb und Grundlagen", 11.-15. März 1985, Tagungsband 1, D. Heger, G. Krüger, O. Spaniol, W. Zorn (Hrsg.), IFB 95, Springer-Verlag, Berlin, 202-221.

[Rul1 87] Christoph Ruland: Datenschutz in Kommunikationssystemen. DATACOM Buchverlag, Pulheim, 1987.

[SFJK 97] Reiner Sailer, Hannes Federrath, Anja Jerichow, Dogan Kesdogan, Andreas Pfitzmann: Allokation von Sicherheitsfunktionen in Telekommunikationsnetzen. in: Günter Müller, Andreas Pfitzmann (Hrsg.): Mehrseitige Sicherheit in der Kommunikationstechnik, Addison-Wesley-Longman 1997, 325-357.

[SIEM 87] SIEMENS: Internationale Fernsprechstatistik 1987. Stand 1. Januar 1986; SIEMENS Aktiengesellschaft, N OV Marketing, Postfach 70 00 73, D-8000 München 70, Mai 1987.

[SIEM 88] SIEMENS: Vermittlungsrechner CP 113 für ISDN. SIEMENS telcom report 11/2 (1988) 80.

[ScS1 84] Christian Schwarz-Schilling (ed.): ISDN - die Antwort der Deutschen Bundespost auf die Anforderungen der Telekommunikation von morgen. Herausgeber: Der Bundesminister für das Post- und Fernmeldewesen, Bonn, 1984.

[ScSc 73] A. Scholz, B. Schoeneberg: Einführung in die Zahlentheorie. (5. Aufl.) Sammlung Göschen, de Gruyter, Berlin 1973.

[ScSc 83] Richard D. Schlichting, Fred B. Schneider: Fail-Stop Processors: An Approach to Designing Fault-Tolerant Computing Systems. ACM Transactions on Computer Systems 1/3 (1983) 222-238.

[ScSc 84] Christian Schwarz-Schilling (ed.): Konzept der Deutschen Bundespost zur Weiterentwicklung der Fernmeldeinfrastruktur. Herausgeber: Der Bundesminister für das Post- und Fernmeldewesen, Stab 202, Bonn, 1984.

[ScSc 86] Christian Schwarz-Schilling (ed.): Mittelfristiges Programm für den Ausbau der technischen Kommunikationssysteme. Herausgeber: Der Bundesminister für das Post- und Fernmeldewesen, Bonn, 1986.

[Scho 84] Helmut Schön: Die Deutsche Bundespost auf ihrem Weg zum ISDN. Zeitschrift für das Post- und Fernmeldewesen /6 1984.

[Scho 86] Helmut Schön: ISDN und Ökonomie. Jahrbuch der Deutschen Bundespost 1986.

[Schn 96] Bruce Schneier: Applied Cryptography: Protocols, Algorithms, and Source Code in C. (2nd ed.) John Wiley & Sons, New York 1996.

[Sedl 88] H. Sedlak: The RSA cryptography processor. Eurocrypt '87, LNCS 304, Springer-Verlag, Berlin 1988, 95-105.

[Sha1 49] C. E. Shannon: Communication Theory of Secrecy Systems. The Bell System Technical Journal 28/4 (1949) 656-715.

[Sham 79] Adi Shamir: How to Share a Secret. Communications of the ACM 22/11 (1979) 612-613.

[Shan 48] C. E. Shannon: A Mathematical Theory of Communication. The Bell System Technical Journal 27 (1948) 379-423, 623-656.

[Silv 91] Robert D. Silverman: Massively Distributed Computing and Factoring Large Integers. Communications of the ACM 34/11 (1991) 95-103.

[Sim3 88] Gustavus J. Simmons: A Survey of Information Authentication. Proceedings of the IEEE 76/5 (1988) 603-620.

[Simm 88] Gustavus J. Simmons: Message authentication with arbitration of transmitter/receiver disputes. Eurocrypt '87, LNCS 304, Springer-Verlag, Berlin 1988, 151-165.

[Steg 85] H. Stegmeier: Einfluß der VLSI auf Kommunikationssysteme. GI/NTG-Fachtagung "Kommunikation in Verteilten Systemen - Anwendungen, Betrieb und Grundlagen", 11.-15. März 1985, Tagungsband 1, D. Heger, G. Krüger, O. Spaniol, W. Zorn (Hrsg.), IFB 95, Springer-Verlag, Berlin, 663-672.

[Stel 90] D. Stelzer: Kritik des Sicherheitsbegriffs im IT-Sicherheitsrahmenkonzept. Datenschutz und Datensicherung DuD 14/10 (1990) 501-506.

[SuSY 84] Tatsuya Suda, Mischa Schwartz, Yechiam Yemini: Protocol Architecture of a Tree Network with Collision Avoidance Switches. Links for the Future; Science, Systems & Services for Communications, P. Dewilde and C. A. May (eds.); Proceedings of the International Conference on Communications - ICC '84, Amsterdam, The Netherlands, May 14-17, 1984, IEEE, Elsevier Science Publishers.

[Sze 85] Daniel T. W. Sze: A Metropolitan Area Network. IEEE Journal on Selected Areas in Communications 3/6 (1985) 815-824.

[Tane 81] Andrew S. Tanenbaum: Computer Networks. Prentice-Hall, Englewood Cliffs, N. J., 1981.

[Tane 88] Andrew S. Tanenbaum: Computer Networks. 2nd ed., Prentice-Hall, Englewood Cliffs 1988.

[Tane 96] Andrew S. Tanenbaum: Computer Networks. 3rd ed., Prentice-Hall, Upper Saddle River 1996.

[Tele 00] Deutsche Telekom: Das kleine Lexikon (Beilage zu "Der Katalog. Telekommunikation für zu Hause und unterwegs."). Frühjahr/Sommer 2000, 8.

[Tele 98] Telemediarecht: Telekommunikations- und Multimediarecht. Deutscher Taschenbuch Verlag, 1. Auflage 1998, 1-321.

[ThFe 95] Jürgen Thees, Hannes Federrath: Methoden zum Schutz von Verkehrsdaten in Funknetzen. Verläßliche IT-Systeme, GI-Fachtagung VIS '95, DuD Fachbeiträge, Vieweg, Braunschweig 1995, 180-192.

[Thom 84] Ken Thompson: Reflections on Trusting Trust. Communications of the ACM 27/8 (1984) 761-763.

[Thom 87] Karl Thomas: Der Weg zur offenen Kommunikation. nachrichten elektronik+telematik net special "ISDN - eine Idee wird Realität"; Sondernummer Oktober 87, R. v. Decker's Verlag, 10-17.

[ToWo 88] Martin Tompa, Heather Woll: How to Share a Secret with Cheaters. Journal of Cryptology 1/2 (1988) 133-138.

[VaVa 85] Umesh V. Vazirani, Vijay V. Vazirani: Efficient and Secure Pseudo-Random Number Generation (extended abstract). Crypto '84, LNCS 196, Springer-Verlag, Berlin 1985, 193-202.

[VoKe 83] Victor L. Voydock, Stephen T. Kent: Security Mechanisms in High-Level Network Protocols. ACM Computing Surveys 15/2 (1983) 135-171.

[VoKe 85] Victor L. Voydock, Stephen T. Kent: Security in High-level Network Protocols. IEEE Communications Magazine 23/7 (1985) 12-24.

[WaP1 87] Michael Waidner, Birgit Pfitzmann: Anonyme und verlusttolerante elektronische Brieftaschen. Interner Bericht 1/87 der Fakultät für Informatik, Universität Karlsruhe 1987.

[WaPP 87] Michael Waidner, Birgit Pfitzmann, Andreas Pfitzmann: Über die Notwendigkeit genormter kryptographischer Verfahren. Datenschutz und Datensicherung DuD 11/6 (1987) 293-299.

[WaPf1 89] Michael Waidner, Birgit Pfitzmann: The Dining Cryptographers in the Disco: Unconditional Sender and Recipient Untraceability with Computationally Secure Serviceability. Universität Karlsruhe 1989.

[WaPf 85] Michael Waidner, Andreas Pfitzmann: Betrugssicherheit trotz Anonymität. Abrechnung und Geldtransfer in Netzen. 1. GI Fachtagung Datenschutz und Datensicherung im Wandel der Informationstechnologien, IFB 113, Springer-Verlag, Berlin 1985, 128-141; Überarbeitung in: Datenschutz und Datensicherung DuD 10/1 (1986) 16-22.

[WaPf 89] Michael Waidner, Birgit Pfitzmann: Unconditional Sender and Recipient Untraceability in spite of Active Attacks - Some Remarks. Fakultät für Informatik, Universität Karlsruhe, Interner Bericht 5/89, März 1989.

[WaPf 90] Michael Waidner, Birgit Pfitzmann: Loss-Tolerance for Electronic Wallets. 20th Int. Symp. on Fault-Tolerant Computing (FTCS) 1990, IEEE Computer Society Press, Los Alamitos 1990, 140-147.

[Waer 67] B. L. van der Waerden: Algebra II. Heidelberger Taschenbücher 23, Springer-Verlag, Berlin 1967, 5. Auflage.

[Waer 71] B. L. van der Waerden: Algebra I. Heidelberger Taschenbücher 12, Springer-Verlag, Berlin 1971, 8. Auflage.

[Waid 85] Michael Waidner: Datenschutz und Betrugssicherheit garantierende Kommunikationsnetze. Systematisierung der Datenschutzmaßnahmen und Ansätze zur Verifikation der Betrugssicherheit. Diplomarbeit am Institut für Informatik IV, Universität Karlsruhe, August 1985, Interner Bericht 19/85 der Fakultät für Informatik.

[Waid 90] Michael Waidner: Unconditional Sender and Recipient Untraceability in spite of Active Attacks. Eurocrypt '89, LNCS 434, Springer-Verlag, Berlin 1990, 302-319.

[Walk 87] Bernhard Walke: Über Organisation und Leistungskenngrößen eines dezentral organisierten Funksystems. Kommunikation in Verteilten Systemen; Anwendungen, Betrieb, Grundlagen; GI/NTG-Fachtagung, Aachen, Februar 1987, IFB 130, herausgegeben von N. Gerner und O. Spaniol, Springer-Verlag, Berlin, 578-591.

[WeCa 79] Mark N. Wegman, J. Lawrence Carter: New Classes and Applications of Hash Functions. 20th Symposium on Foundations of Computer Science (FOCS) 1979, IEEE Computer Society, 1979, 175-182.

[WeCa 81] Mark N. Wegman, J. Lawrence Carter: New Hash Functions and Their Use in Authentication and Set Equality. Journal of Computer and System Sciences 22 (1981) 265-279.

[Weck1 98] Gerhard Weck: Key Recovery - Empfehlungen. Informatik-Spektrum 21/3 (1998) 157-158.

[Weck 98] Gerhard Weck: Key Recovery - Möglichkeiten und Risiken. Informatik-Spektrum 21/3 (1998) 147-157.

[Wein 87] Steve H. Weingart: Physical Security for the microABYSS System. Proc. 1987 IEEE Symp. on Security and Privacy, April 27-29, 1987, Oakland, California, 52-58.

[West 97] Andreas Westfeld: Steganographie am Beispiel einer Videokonferenz. in: Günter Müller, Andreas Pfitzmann (Hrsg.): Mehrseitige Sicherheit in der Kommunikationstechnik, Addison-Wesley-Longman 1997, 507-526.

[WiHP 97] Guntram Wicke, Michaela Huhn, Andreas Pfitzmann, Peter Stahlknecht: Kryptoregulierung. Wirtschaftsinformatik 39/3 (1997) 279-282.

[Wich 93] Peer Wichmann: Die DSSA Sicherheitsarchitektur für verteilte Systeme. Datenschutz und Datensicherung DuD 17/1 (1993) 23-27.

[Wien 90] Michael J. Wiener: Cryptanalysis of Short RSA Secret Exponents. IEEE Transactions on Information Theory 36/3 (1990) 553-558.

[ZFPW 97] Jan Zöllner, Hannes Federrath, Andreas Pfitzmann, Andreas Westfeld, Guntram Wicke, Gritta Wolf: Über die Modellierung steganographischer Systeme. Verläßliche IT-Systeme, GI-Fachtagung VIS '97, DuD Fachbeiträge, Vieweg 1997, 211-223.

[ZSI 89] Zentralstelle für Sicherheit in der Informationstechnik (Hrsg.): IT-Sicherheitskriterien; Kriterien für die Bewertung der Sicherheit von Systemen der Informationstechnik (IT). 1. Fassung vom 11.1.1989; Köln, Bundesanzeiger 1989.

[ZSI 90] Zentralstelle für Sicherheit in der Informationstechnik (Hrsg.): IT-Evaluationshandbuch; Handbuch für die Prüfung der Sicherheit von Systemen der Informationstechnik (IT). 1. Fassung vom 22.2.1990; Köln, Bundesanzeiger 1990.

[Zimm 93] Philip Zimmermann: PGP - Pretty Good Privacy, Public Key Encryption for the Masses, User's Guide; Volume I: Essential Topics, Volume II: Special Topics. Version 2.2 - 6 March 1993, now Version 2.6.2, 11 Oct. 1994; see http://www.mantis.co.uk/pgp/pgp.html for sources.

A Exercises

1 Exercises for “Introduction”

1-1 Connection between spatial distribution and distribution regarding control and implementation structure
Are information technical systems (IT systems) that are spatially distributed also exclusively realized in a distributed way regarding their control and implementation structure? Why? (As explanation: control structure = which instances issue orders, instructions, etc. to other instances and convince themselves whether and, if necessary, how these were carried out; implementation structure = how these instances are implemented, e.g. many on one computer, or single instances on many cooperating computers, etc.)

1-2 Advantages and disadvantages of a distributed system regarding security

a) Which advantages and disadvantages does a distributed system provide regarding security?

b) Which security properties are supported by spatial distribution, and which by distribution regarding control and implementation structure?

1-3 Advantages and disadvantages of an open system regarding security

a) Which advantages and disadvantages does an open system provide regarding security?

b) How do these advantages and disadvantages combine regarding security in open distributed systems?

1-4 Advantages and disadvantages of a service-integrating system regarding security

a) Which advantages and disadvantages does a service-integrating system provide regarding security?

b) How do these advantages and disadvantages combine regarding security in open distributed service-integrating systems?

1-5 Advantages and disadvantages of a digital system regarding security


a) Which advantages and disadvantages does a digital system offer with respect to security? Differentiate between digital transmission/storage on the one hand and computer-controlled transmission and processing on the other hand. What changes if computers can be programmed freely (program residing in RAM instead of in ROM)?

b) Which connections exist between digital systems and the properties 1) distributed, 2) open and 3) service-integrating?

1-6 Requirements definition for four exemplary applications in the medical domain
Formulate security aims that are as complete as possible for the following applications!

a) Medical records of many patients are stored in a database and are accessed by attending doctors from all over the world.

b) During surgical operations, external experts are consulted via videophone. Which additional requirements arise if the surgical team depends on the advice on site?

c) Patients shall be medically monitored in their private flats. On the one hand this enables them to leave hospital earlier; on the other hand, help can be called quickly and automatically, e.g. in case of unconsciousness.

d) Medical and psychological fact databases can be queried by patients, so that they can inform themselves more comfortably, in more detail, and more up to date than is possible with books, and so as to save the doctors' time (which they would not take anyway).

Are you sure of the completeness of your security aims?

1-7 Where and when can computers be expected?
Mark the locations in the following figure by filling in the number n where you expect computers as of year n. Mark, by adding /<role declaration>, who programmed each computer and who is able to program it. What does this mean for security?

[Figure for exercise 1-7: a communication network with Participant 1 and Participant 2, connected via their access lines to Switching Center 1 and Switching Center 2; attached devices and services include a visual telephone, telephone, interactive videotext (with an interactive videotext server of a bank), television, and radio.]

1-8 Deficits of legal provisions
Why do legal provisions not suffice to achieve, e.g., legal certainty and data protection?

1-9 Alternative definitions/characterizations of multilateral security
In §1.4 a definition of multilateral security is given. Since this definition is more a colloquial characterization than a strict definition of a scientific notion: devise at least 3 alternative definitions/characterizations, or collect them from the literature. You may find some in [Cha8 85, Chau 92, PWP 90, MuPf 97, FePf 99].

2 Exercises for "Security in single computers and its limits"

2-1 Attackers beyond textbook wisdom
Devise attackers that do not obey the currently known physical laws of nature, as demanded in §2.1.1, and against whom security therefore cannot be achieved in principle.

2-2 (In)Dependence of confidentiality, integrity, availability, with identification as example
In §1.2.1 a distinction was made between the security aims confidentiality, integrity and availability. Discuss, using the example of identification (§2.2.1), whether "secure" IT systems can be expected for which no requirements exist regarding at least one of the three security aims.

405

A Exercises

2-3 Send random number, expect encryption - does this work the other way round, too?
In §2.2.1 a small example protocol for identification is given:

generate a random number and send it;

await the encryption of the random number; verify it.

Is it possible to execute this protocol the other way round:

generate a random number, encrypt it, and send the encrypted random number;

await the decrypted random number; verify it.

What changes with this inversion? (As a help: which assumptions about the generation of the random numbers must be made in each case?)

2-4 Necessity of logging in case of access control

a) Log data (secondary data) are personal data just as the primary data are. Therefore they must be protected against unauthorized access like the primary data. This looks like a vicious circle: accesses to the secondary data would in turn have to be logged (tertiary data), and so on. Why can this problem be solved satisfactorily despite the recursion?

b) What should one aim for?

c) Regarding authorization, something similar to a) applies to the log data: To control who is allowed to set rights (authorization), rules must themselves be set (and enforced) about who is allowed to do this, etc. It looks suspiciously like a vicious circle again: does something similar (as in a)) apply regarding the solution?

2-5 Limiting the success of modifying attacks
In §1.2.5 it was mentioned that modifying attacks can be recognized in principle, but their success cannot be prevented completely. Which measures must be taken to make modifying attacks as ineffective as possible? Consider especially how you would organize backups of data and programs.

2-6 No hidden channel anymore?
Some military officers who are extremely worried about keeping their country's defence plans secret decide, after having read the first two chapters of this script: We will never trust that delivered devices are free of Trojan horses that want to reveal our secrets even before a case of defence. Hence we will put these devices in airtight rooms and close all channels to the outside by

a) disconnecting wires with physical switches (the reasons are clear!) in times of peace,

b) keeping the current power consumption of the computer centre constant (so that no information can escape this way), and


c) not giving the devices to the manufacturer for repairs (because information can be stored in them, too).

They announce to their minister of defence: In times of peace, not a single bit will escape from our computers! Are they right?

2-7 Change of perspective: the diabolical terminal

a) Each participant's security can be no greater than the security of his terminal, i.e., the device he interacts with directly. Suppose you are Advocatus Diaboli: devise a terminal with as many properties as possible that undermine its user's security, yet which the user still likes to use.

b) Compare your suggestions from a) with the PCs offered to you today.

3 Exercises for “Basics of Cryptology”

3-0 Terms

Term trees are shown in the following figure. Explain the textual structure of these terms and the advantages and disadvantages of using the short terms cryptosystem resp. encryption and decryption either generally for all cryptographic systems or more specifically only for cryptographic systems of secrecy.

Cryptology
├── Cryptography
└── Cryptoanalysis

cryptographic system
├── cryptographic system of secrecy
└── cryptographic authentication system
    ├── symmetric authentication system
    └── digital signature system

3-1 Regions of trust for asymmetric systems of secrecy and digital signatures
In figure 3-1, regions of trust and attacks are marked in such a way that no statement is made about where the key is generated. It is clear that generation must take place within a region of trust. Whether this is the region of trust of the encrypter, of the decrypter, or an additional one is left open and should differ depending on the application:

In case of encryption of a local storage medium, key generation, encryption and decryption should take place in the same device - then the key never needs to leave the device, and it can be optimally protected with respect to confidentiality and integrity. (Remember that encryption contributes nothing to availability, and that backup copies must be made, if necessary, of the plain text stored on this medium, or else of the ciphertext together with the decryption key.) In case of direct exchange of keys between encrypter and decrypter outside of the considered communication network, e.g. via USB stick, the key should be generated within one of their regions of trust.

In case of key exchange within the communication network according to figure 3-3, the key can be generated at the encrypter, at the decrypter, or at the key distribution center. It does not matter in which of the three regions of trust this takes place, because during distribution the key exists unencrypted in each of these regions of trust anyway.

Everything said so far is of course valid not only for symmetric systems of secrecy, but also for symmetric authentication systems.

And now the question: How would you draw the regions of trust in case of asymmetric systems of secrecy and digital signature systems? Do it - and explain your decision.

3-2 Why keys?
In §3.1.1 it is stated that cryptographic systems use keys.

a) Would it work without keys, too? If yes, how?

b) What disadvantages would arise?

c) What, in your opinion, would be the advantages of using keys?

3-2a Uniqueness of keys in case of autonomous key generation by participants?
Exercise 3-1 explains why, for asymmetric cryptographic systems, key generation should take place in the device in which the secret part of the key pair is used.

Occasionally the following objection is raised: Since the generated key pairs must be unique (obviously especially important for digital signature systems), they cannot be generated autonomously and decentrally by the participants, because duplicate keys (two participants independently generating the same key pair) cannot be ruled out completely. To achieve uniqueness, so-called TrustCenters - organizational units that provide various trustworthy, security-supporting services - are therefore supposed to take over key generation as well.

What do you think about this objection and suggestion?

3-3 Increasing security by using multiple cryptographic systems
What can you do if you have multiple cryptographic systems based on different security assumptions, and you do not want to trust any one of these cryptographic systems (resp. any one of the security assumptions) alone? Distinguish the security aims secrecy (i.e., you have several systems of secrecy) and authentication (i.e., you have several authentication systems). What effort in computation, storage and transmission does this measure cause? (The last point only roughly, and only for secrecy.)

3-4 Protocols for key exchange

a) All described protocols for key exchange assume that both participants who want to exchange a key have already exchanged keys with a common key distribution center resp. with a common public key register - in case of a key distribution center, keys of a symmetric cryptosystem; in case of a public key register, keys of an asymmetric cryptosystem.
What must be changed if this is not the case? For instance, the two participants could live in different countries and have exchanged keys only with key distribution centers resp. public key registers in their own country.

b) Optional: What should your solution for the exchange of symmetric keys look like if each participant wants to use multiple key distribution centers, in order to have to trust each of them as little as possible?

c) Parallel connection of key distribution centers was demonstrated and explained for symmetric cryptosystems. Is such a parallel connection useful for asymmetric cryptosystems? How would it work? What does it accomplish?

d) Optional: Does a protocol for key exchange exist that is secure despite the following attacker model: either the attacker receives the passive assistance of all key distribution centers, i.e., they reveal all exchanged keys, or (exclusively!) the attacker is computationally unrestricted, i.e., he is able to break any asymmetric system of secrecy (and any ordinary digital signature system)? It is additionally assumed that the attacker wiretaps all communication in the network. What does such a protocol for key distribution look like, if it exists? Which cryptographic system should one use after the key exchange for secrecy, and which for authentication?

e) What happens if an attacker attacks in a modifying way during the exchange of symmetric keys? This could happen by changing messages on the wire, or by a key distribution center that distributes inconsistent symmetric keys. What should be done to protect participants against this? Treat first the case of one key distribution center, then the case of multiple centers in sequence (for instance with hierarchical key distribution, cp. exercise part a)), and then the case of multiple parallel centers (cp. §3.1.1.1).

f) Freshness of keys: To what extent, and how, can it be guaranteed that an attacker cannot persuade participants to use old (and, for instance, compromised) keys that have meanwhile been replaced by new ones? (It is assumed that keys of the "right" partner are involved.)

g) Exchange of public keys without a public key register: anarchic key exchange à la PGP
We assume that no public key registers can be used (be it that the country or the telecommunication provider does not operate one, be it that some citizens do not want to trust centralized-hierarchical public key registers, be it that public key registers exist but generate the key pairs themselves - evil to him who evil thinks). Nevertheless (or precisely for that reason), public keys are to be exchanged in an open group of participants.
As repetition: What must be paid attention to when exchanging public keys? Who could take over the function of the trustworthy public key register?
How could the key exchange take place then?
What must be done if not everybody trusts everybody in the same way?
What must be done if a participant discovers that his secret key is known to somebody else?
What should a participant do if his secret key has been destroyed?
Optional: What must be done if a participant discovers that he has certified a false assignment of participant ⇔ public key?
Can the method developed in this exercise part be combined with the method developed in exercise part b)?

3-4a Which sort of trust is necessary for which function of a TrustCenter?
Some authors assign the following functions to so-called TrustCenters:

a) key generation: The generation of key pairs for participants

b) key certification: certifying the association between public key and participant, cp. §3.1.1.2,

c) directory service: keeping certified public keys available for retrieval by arbitrary participants,

d) blocking service: blocking public keys, among other things at the request of the key owner, i.e., issuing a certified statement that (and from when on, etc.) the public key is blocked,

e) time stamp service: digitally certifying submitted messages, including the current date and time.

Discuss which sorts of trust by the participant each of these functions requires. The distinction between blind trust (the participant has no possibility to check whether his trust is justified) and checkable trust (the participant is able to check whether his trust is justified) could be useful. How are these properties related to the attacker model described in §1.2.5, and especially to the distinction between observing and modifying attacks?

3-5 Hybrid encryption
You want to exchange large amounts of data and secure this exchange - for efficiency reasons - with a hybrid cryptosystem. How do you proceed if


a) only secrecy

b) additionally authentication

c) only authentication

is important to you?
Consider first case a), where a sender S sends files to a receiver R; then case b), where S sends files one after another to R; and finally case c), where R answers, i.e., sends files to S. (A sketch of case a) follows below.)
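A minimal Python sketch of case a), under assumptions of my own: a toy RSA key pair stands in for the asymmetric system, and a hash-based keystream stands in for the symmetric system; real hybrid encryption would of course use full-strength, padded and authenticated primitives.

    # Hybrid secrecy, deliberately toy-sized: the session key travels under
    # RSA, the bulk data under a symmetric keystream derived from it.
    import hashlib, secrets

    p, q, e = 1009, 1013, 5            # toy RSA key of the receiver R
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # R's secret exponent

    def keystream(key: bytes, length: int) -> bytes:
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    data = b"huge file ..." * 1000
    session = secrets.randbelow(n - 2) + 2          # fresh symmetric session key
    wrapped = pow(session, e, n)                    # asymmetric part, sent once
    pad = keystream(session.to_bytes(4, "big"), len(data))
    cipher = bytes(a ^ b for a, b in zip(data, pad))

    # R unwraps the session key and removes the same keystream again.
    assert pow(wrapped, d, n) == session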

3-6 Situations from practice (on the systematics of cryptographic systems)
Specify for each of the following situations which cryptosystem you would use (authentication/secrecy, symmetric/asymmetric) and how you would carry out key distribution. Additionally, you may specify roughly how critical the system is with respect to security and time. (Rough throughput figures: Vernam's cipher in hardware: 10 Gbit/s, in software 100 Mbit/s; information-theoretically secure authentication codes in hardware 5 Gbit/s, in software 100 Mbit/s; DES in hardware: 200 Mbit/s, in software up to 10 Mbit/s, depending on the PC; RSA in hardware: 200 kbit/s, in software 6000 bit/s (GMR and the s^2 mod n generator probably likewise).)

a) Andreas is looking for flats in Hildesheim. He wants to write Birgit an email with the advantages and disadvantages of the flats, and wants to receive a decision on whether he should sign a tenancy agreement. (Both are so hoarse that they cannot make a phone call.)

b) David once again has a brilliant idea for a new cryptographic system and wants to send it to a few trustworthy cryptographers (via file transfer). Unfortunately, less ingenious cryptographers are lurking in most computer centers in order to steal David's ideas.

c) A secure operating system writes data out to a removable disk. When the data is read back, it must be certain that the data has not been changed.

d) A computer enthusiast wants to order a new computer from a digital catalog by means of a digital message.

e) A mechanical engineering company sends a fax to another company with a non-binding inquiry as to whether that company can manufacture a certain part, and at what price.

3-7 Vernam's cipher

a) Calculate a small example, whereby we agree, for now and for the following, that processed bits are written from left to right. (A small sketch follows at the end of this exercise.)

b) The security of Vernam's cipher (with addition modulo 2) was proven in the lecture. Prove this for addition in arbitrary groups.

c) Decrypt the ciphertext 01011101, which was encrypted with the key 10011011. Is the result unique?


d) An attacker intercepts the ciphertext 010111010101110101011101 and wants to invert the third bit from the right of the corresponding plain text. What must he do if the computation is modulo 2? What must he do if it is modulo 4, modulo 8 resp. modulo 16, i.e., 2, 3 resp. 4 bits are added "together" at a time? In which sense is this exercise solvable for modulo 4 and modulo 16, and in which sense not?

e) Why is no adaptive active attack on the key possible for Vernam's cipher?

f) There are 16 functions {0,1} × {0,1} → {0,1}. Which of them are suitable as encryption resp. decryption functions for Vernam's cipher? First specify general necessary and sufficient conditions for the functions of arbitrary Vernam ciphers, then apply them to the binary case.

g) A young cryptographer has the following idea for decreasing the effort of key exchange: Instead of a random key, I use a part of a book which my partner owns too. Then we only have to agree on the book and on the page and line of the beginning, and we can add modulo (number of characters in the alphabet) resp. subtract for as long as the book lasts.
Which advice, and why, will you, as an older cryptographer, give your young friend?

h) In some countries it is allowed to send encrypted messages, but one has to store the key used and hand it out on request to a law-enforcement authority or the like. Does this make sense for Vernam's cipher? (It is assumed, of course, that the authority wiretaps all ciphertexts, stores them in the right order, and is able to assign them to sender and receiver.)

i) For how long can one encrypt, with the information-theoretically secure Vernam cipher,

text communication (typing speed: 40 bit/s),

voice communication (with the normal ISDN encoding: 64 kbit/s), resp.

high-resolution full-motion video communication (140 Mbit/s),

if one has exchanged

a 3.5" floppy disk (1.4 MByte),

a CD-R (700 MByte),

a single-sided DVD-RAM (4.7 GByte),

a double-sided dual-layer DVD (17 GByte), or

a Blu-ray ROM (50 GByte)

full of random bits? For perspective: the first two media have been standard technology at least since 1990 and are widespread in large quantities. DVD-RAM (or DVD+/-R) is very popular today; the next generation of optical media is Blu-ray, with a capacity of 50 GByte, whose first recorders have been announced for 2006. This serves as an example for estimating what will be available to end users in the next few years.
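A minimal Python sketch for parts a), c) and d); the function name is mine, and the bit strings are the ones given above.

    # Vernam cipher over bit strings: adding the key modulo 2 (XOR);
    # encryption and decryption are the same operation.
    def vernam(text: str, key: str) -> str:
        assert len(text) == len(key)
        return "".join(str(int(t) ^ int(k)) for t, k in zip(text, key))

    print(vernam("01011101", "10011011"))   # part c): recover the plain text

    # Part d), modulo 2: to invert the third plaintext bit from the right,
    # the attacker simply inverts the corresponding ciphertext bit.
    c = list("010111010101110101011101")
    c[-3] = str(1 - int(c[-3]))
    print("".join(c))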

3-7a Symmetrically encrypted message digest = MAC?

a) Every now and then one hears the following recipe for generating a Message Authentication Code (MAC):

Create a check value for the message using a publicly known procedure, and encrypt the check value with a symmetric system of secrecy.
The receiver decrypts the check value and checks whether it fits, by calculating the check value for the received message with the same publicly known procedure. If both check values are equal, the message is regarded as authentic.

What do you think of this recipe? (As a hint: Take the most secure available procedure for creating check values, i.e., a collision-resistant hash function, and the most secure available system of secrecy, Vernam's cipher, and then consider whether the combination is secure.)

b) What do you think of the following improvement: Instead of encrypting the hash value with a symmetric key, the symmetric key is hashed together with the message (as, for example, in the Transport Layer Security (TLS) protocol). If you are of the opinion that this is not a secure construction for all collision-resistant hash functions: which additional property must the collision-resistant hash function have? (As a starting point: could the hash value reveal something about the symmetric key?)

c) If the construction given in b) for symmetric authentication is secure for a particular hash function: can you then construct a symmetric system of secrecy using this hash function? (This would be exciting, because the following opinion is widespread among secret services: good hash functions may be known to enemies, while good systems of secrecy must be kept secret from them. In Germany, for instance, scientific publications of the BSI in the cryptographic area refer exclusively to hash functions, cp. e.g. [Dobb 98].)

3-8 Authentication codes

a) Calculate a small example authenticating 4 bits, using the authentication code represented in the following table (= figure 3-15). The plain texts are H (Head) and T (Tail).

Text with MAC

          H,0   H,1   T,0   T,1
key 00     H     -     T     -
    01     H     -     -     T
    10     -     H     T     -
    11     -     H     -     T

b) Why is the authentication code information-theoretically secure? (You may continue or summarize the proof shown in the lecture, or first represent the code more simply.)

c) If one uses this code to authenticate a message consisting of several bits (as in exercise part a) or example 2 from the lecture), can an attacker then change the message unnoticed simply by permuting the order of the bits, including their MACs?

d) Calculate a small example of the secrecy and authentication of 4 bits, using the code given in the following table (it provides secrecy as well as authentication). Why is this code information-theoretically secure against arbitrary attacks on authentication, but information-theoretically secure only against observing attacks on secrecy? (It may be assumed that, in case of a modifying attack, the attacker gets to know whether his attack was successful.)

            ciphertext
            00    01    10    11
keys 00      H     -     T     -
     01      T     -     -     H
     10      -     T     H     -
     11      -     H     -     T

e) Optional: Modify the given secrecy and authentication code so that it also resists modifying attacks with respect to secrecy. Is your system more efficient than simply using an authentication code together with Vernam's cipher?

f) The authentication codes given so far have two disadvantages: They work on single bits (H and T), so that a MAC must be attached to each bit, which is not efficient for many applications. Furthermore, forging attempts are only detected, and thus repelled, with probability 0.5; a probability of nearly 1 would be desirable.
The following authentication code by Mark N. Wegman and J. Lawrence Carter [WeCa 81] does not have these disadvantages: Messages m are elements of a huge finite field K (for instance, addition and multiplication modulo a huge prime number known to everybody). Keys are pairs (a, b) of elements of K (and are used only once). MACs are created as follows:

MAC := a + b · m


Prove: If an attacker observes m and MAC, then he cannot create the appropriate MAC for another message m′ any better than by guessing. His success probability is at most 1/|K|.
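A small Python sketch of this one-time MAC; the concrete field size and key are toy values of my choosing.

    # Wegman-Carter style one-time MAC over K = GF(p): MAC := a + b*m.
    # Toy values -- in practice p is a huge prime and (a, b) is fresh per message.
    p = 101
    a, b = 17, 42          # one-time key (a, b), kept secret
    m = 57                 # message, an element of K

    mac = (a + b * m) % p
    assert mac == (a + b * m) % p   # receiver recomputes and compares
    print(m, mac)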

3-9 Mathematical secrets

a) (Roughly:) Prime number generation is often programmed as follows: only at the beginning is an odd random number p chosen; if this number is not prime, the search proceeds from it in steps of 2.

Is it clear that systems based on the factoring assumption remain secure in this case?

(Whoever wants to make this more precise should use a Cramér-type assumption, which says π(x + log² x) − π(x) > 0, where π(y) denotes the number of primes up to y.)

b) Somebody, because of the procedure mentioned in a), accidentally programs his prime number generation in this way: after the first prime number p has been found, the search for the second prime number q continues in steps of 2 from the current position. How can you then factor the product n?

c) Calculate 3^19 mod 101 using square-and-multiply. (It is sometimes useful that 100 ≡ −1.) (A sketch of this and the following helper algorithms is given after part l).)

d) Calculate 3^123 mod 77 using Fermat's little theorem.

e) Calculate the inverse of 24 mod 101 using the extended Euclidean algorithm.

f) What about the inverse of 21 mod 77?

g) Calculate y mod 77 with y ≡ 2 mod 7 ∧ y ≡ 3 mod 11, and z with z ≡ 6 mod 7 ∧ z ≡ 10 mod 11.

h) Test whether the following residues are quadratic residues (calculating with negative residues simplifies the work): 13 mod 19, 2 mod 23, −1 mod 89. If so, and if the simple algorithm from the script is applicable, extract the root.

i) Extract all 4 roots of 58 mod 77. (You know the secret factorization 77 = 7 · 11.)

j) Use the fact that 2035^2 ≡ 4 mod 10379 to factor 10379. (For fans: how can one generate such an exercise by hand?)

k) Let n = p · q, where p and q are prime numbers, p ≠ q and p ≡ q ≡ 3 mod 4, and let x be a quadratic residue modulo n. Show that exactly one of the 4 roots of x modulo n is again a quadratic residue.

l) Show: If the factoring assumption were false, then the quadratic-residuosity assumption would be false, too.
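For parts c) to g), the three standard helper algorithms, sketched in Python with names of my own:

    def square_and_multiply(base, exp, mod):
        """base**exp mod mod, processing the exponent bit by bit."""
        result, base = 1, base % mod
        while exp:
            if exp & 1:
                result = result * base % mod
            base = base * base % mod
            exp >>= 1
        return result

    def extended_euclid(a, b):
        """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_euclid(b, a % b)
        return g, y, x - (a // b) * y

    def inverse(a, n):
        g, x, _ = extended_euclid(a % n, n)
        assert g == 1, "inverse exists only for gcd(a, n) = 1"
        return x % n

    def crt2(r1, m1, r2, m2):
        """Chinese remainder theorem for two coprime moduli."""
        return (r1 + m1 * ((r2 - r1) * inverse(m1, m2) % m2)) % (m1 * m2)

    print(square_and_multiply(3, 19, 101))        # part c)
    print(inverse(24, 101))                       # part e)
    print(crt2(2, 7, 3, 11), crt2(6, 7, 10, 11))  # part g)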

3-10 Which attacks on which cryptosystems are to be expected?

a) A notary stamps arbitrary documents presented to him with a timestamp and signs them digitally, so that everybody can verify the notarization of each document. Which class of cryptographic systems should he use, and which attacks must the cryptographic system used resist?

b) A newspaper receives articles from its correspondents confidentially. Which cryptographic systems can be used, and which attacks must each system resist? (We ignore the fact that, in practice, articles should be authentic, too.)

3-11 How long must files be protected? Dimensioning of cryptographic systems; reactions to wrong dimensioning; publishing public keys of a signature system or of an asymmetric system of secrecy?

a) What is the connection between the time for which data is supposed to be protected by cryptographic systems and the dimensioning of those cryptographic systems? What does this mean for systems of secrecy for personal medical data? And what does it mean for digital signature systems if signatures are supposed to stay valid for life?

b) What must be done, in the case of systems of secrecy and in the case of digital signature systems, when it becomes conceivable that the system will be broken soon? (Hint: don't forget to specify how to proceed with data that was encrypted resp. signed too weakly.)

c) We suppose that authentically passing on public keys is costly, cp. for instance exercise 3-4 c), so that you can afford to pass on only one key: either a public key of an asymmetric signature system or a public key of an asymmetric system of secrecy. Which one do you pass on, and why?

d) Is your solution for c), which you probably found under the aspect of the functionality of cryptographic protocols, also right in the light of the solution of b)?

3-12 Crypto policy by choice of a convenient key length?
Often a convenient, fixed key length is proposed, on the one hand to simplify crime fighting (espionage too, but that is not talked about publicly), and on the other hand to enable citizens and companies to achieve "sufficient" protection of the confidentiality of their data by cryptographic systems of secrecy: long enough that curious neighbors and competing companies cannot afford the effort of breaking it, while the state, with more financial power and more computing capability, can. Do you consider this approach reasonable, or hopeless in principle? Give reasons for your answer. (As background information: States like the USA and Germany have no legal restrictions on the production and national sale of cryptographic products, but make their export subject to authorization. The USA used to grant export licenses only if the key length of symmetric systems of secrecy was at most 40 bits. Even nowadays, certain high-performance computers are subject to export restrictions. Officially nothing is known about this for Germany - but the practice is probably the same.)


3-13 Authentication and secrecy - in which order?
For many applications, confidentiality and integrity must both be guaranteed. In which order should secrecy and authentication be applied? Does it make a difference whether symmetric or asymmetric cryptosystems are used? (It is assumed for the whole exercise that secrecy and authentication run in the same device environment, so that both are implemented trustworthily from the user's point of view. It is equally possible to configure the user interface independently of the order; for instance, the plain text is always shown in case of authentication.)

3-14 s^2 mod n generator

a) Calculate a small example of the s^2 mod n generator used as a symmetric system of secrecy. Suggestion: let the key be n = 77 and the seed s = 2, and let the plain text be 0101 (binary). (A sketch follows after part f).)

b) Decrypt the message 1001 (binary), s_4 = 25, which was encrypted with the s^2 mod n generator used as an asymmetric system of secrecy. Your secret decryption key is p = 3, q = 11. (Use only decryption algorithms that are still efficiently executable for |p| = |q| ≥ 300. Simply trying all possible seeds, for instance, only shows that you have understood how to break the s^2 mod n generator if the key length is chosen too small. The same holds for continuing to square and observing how the cycle closes. You are supposed to show how decryption works!)

c) What happens if one uses the s^2 mod n generator as a symmetric system of secrecy and sends the ciphertext across a channel that is not fault-tolerant, and a bit flips because of a physical error? And what happens if a bit gets lost?

d) Can you construct, using the s^2 mod n generator, a symmetric authentication system that needs less key exchange than a real authentication code? If so, how does it work? How secure is your system?

e) How many key exchanges are necessary if one wants to send several messages consecutively using the s^2 mod n generator as a symmetric system of secrecy? What must each party store?

f) Optional: If the s^2 mod n generator ever reaches the inner state s_i = 1, then all subsequent states are 1, because 1^2 = 1; from there on, all output bits are 1. Clearly this may happen only very rarely, otherwise the s^2 mod n generator would be cryptographically insecure. Calculate the probability that the generator reaches the inner state 1 after i steps. (Tip: there is hardly anything to calculate; you just have to find a simple argument.)
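A sketch of the s^2 mod n generator as a symmetric system of secrecy with the toy parameters from part a); taking the parity of each inner state as the output bit is my assumption - the script's convention is authoritative.

    # s^2 mod n generator, toy-sized: square the inner state repeatedly
    # and emit one bit per state.
    n, s = 77, 2                       # key n = 7 * 11 and seed s

    def keystream_bits(s, n, count):
        bits = []
        for _ in range(count):
            s = s * s % n              # next inner state
            bits.append(s & 1)         # output bit: parity of the state
        return bits

    plain = [0, 1, 0, 1]
    stream = keystream_bits(s, n, len(plain))
    cipher = [p ^ z for p, z in zip(plain, stream)]
    print(stream, cipher)              # decryption: XOR the stream again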

3-15 GMR

a) Let your secret key be p = 11, q = 7. Sign the message m = 01 with the reference R = 28. (The length of the messages is always 2 bits.) Is this possible with R = 28? If not, take R = 17 instead.


b) Assume that altogether 8 messages are to be signed. Which parts of the reference tree, and which corresponding signatures, must the signer newly create on reaching the 4th message? And at the 5th? Which parts are sent, and what must the receiver test? (It is best to draw a picture and mark the parts.)

c) Sketch a proof of the collision resistance of the family of functions f_pref(m) mentioned in §3.5.3. Where is the prefix-free coding needed in your proof?

3-16 RSA

a) Calculate a small example of RSA as a system of secrecy. (A sketch follows after part l).)

b) Construct a small example of RSA as a signature system.

c) How will the plain texts 0 and 1 be encrypted, and how will the ciphertexts0 and 1 be decrypted? What do we learn?

d) How would you turn RSA into an indeterministically encrypting system of secrecy? How many random bits do you need? Why are, for instance, 8 random bits not enough? Why is it not a good idea to use the time as random bits?

e) Optional: Can you tell whether your example from d) is secure against active attacks, cp. §3.6.3? What must be done against active attacks, if applicable? What would the message format then look like?

f) Optional: How would you create an indeterministically signing signature system from RSA? What would that be useful for? Or could it even cause problems?

g) How does the computation effort per encrypted bit grow as a function of the security parameter l, if the algorithms shown in §3.4.1.1 are used for the implementation? Distinguish the edge cases of extremely short and extremely long messages.

h) Sketch why the secret operation of RSA, i.e., signing resp. decrypting, is 4 times faster if one calculates modulo p and q separately rather than modulo n.

i) By which factor is the public operation of RSA faster if one uses 2^16 + 1 as the exponent instead of a random number?

j) What do you think of the following change to the key generation of RSA (taken from [MeOV 97, page 287]): instead of φ(n) = (p − 1)(q − 1), use the least common multiple of p − 1 and q − 1, i.e., lcm(p − 1, q − 1)? Does this change anything regarding the security of RSA? What changes regarding the computation effort of RSA?

k) Optional: Calculate a small example of the attack of Johan Håstad.

l) One can find a proof, for instance in [MeOV 97, page 287], that breaking RSA completely, i.e., finding the secret key d, is equivalent to factoring the modulus. Why does this say essentially nothing about the security of RSA?
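A toy RSA example for parts a) to c), with parameters of my own choosing (Python's pow(e, -1, phi) computes the modular inverse):

    p, q = 11, 23
    n = p * q                        # modulus 253
    phi = (p - 1) * (q - 1)          # 220
    e = 3                            # public exponent, gcd(e, phi) = 1
    d = pow(e, -1, phi)              # secret exponent with e*d = 1 mod phi

    m = 42                           # plain text, a number < n
    c = pow(m, e, n)                 # a) encrypt: c = m^e mod n
    assert pow(c, d, n) == m         #    decrypt: m = c^d mod n

    s = pow(m, d, n)                 # b) sign:    s = m^d mod n
    assert pow(s, e, n) == m         #    test:    m = s^e mod n

    # c) the plain texts 0 and 1 encrypt to themselves:
    assert pow(0, e, n) == 0 and pow(1, e, n) == 1
    print(c, s)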


3-17 DES

a) A simple DES-like cryptosystem is given by: block length 6; 2 iteration rounds; no initial permutation; the round function f is given by f(R, K_i) = R · K_i mod 8; the key K is (K_1, K_2).

Let the key be 011010. Encrypt the plain text 000101 and decrypt it again. (A sketch follows after part f).)

b) How would you realize S-boxes in software so as to be able to decrypt as fast as possible?

c) Optional: And how would you realize the 32-bit permutations?

d) Known-plaintext attack: If you know some plaintext-ciphertext pairs that were encrypted using DES, how, and with which effort (in terms of the number of DES encryptions and decryptions), could you find the key?

e) Chosen-plaintext attack: Does anything change compared to d) if you are allowed to choose an additional plain text block and receive the corresponding ciphertext block? How big is the effort?

f) How can you break DES if DES contains no substitutions but only permutations and additions modulo 2? Consider whether your attack works more generally for every linear block cipher.
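A sketch of the toy cipher from part a) in Python, under one plausible reading of the exercise: the 6-bit block splits into 3-bit halves (L, R), the round keys are the two halves of the key 011010 (K_1 = 011, K_2 = 010), and each round maps (L, R) to (R, L XOR f(R, K_i)); check this reading against the definition in the script.

    def f(r, k):
        return r * k % 8                       # f(R, Ki) = R * Ki mod 8

    def rounds(l, r, keys):
        for k in keys:
            l, r = r, l ^ f(r, k)              # one Feistel round
        return l, r

    K1, K2 = 0b011, 0b010                      # halves of the key 011010
    L, R = 0b000, 0b101                        # plain text 000101

    L2, R2 = rounds(L, R, [K1, K2])            # encrypt
    print(format(L2, "03b") + format(R2, "03b"))

    # Decrypt: swap the halves, run the rounds with reversed keys, swap back.
    l, r = rounds(R2, L2, [K2, K1])
    assert (r, l) == (L, R)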

3-18 Modes

a) With CBC, both confidentiality and authenticity are achievable: the former by transmitting exactly the ciphertext blocks, the latter by transmitting the plain text together with only the last ciphertext block. One might think both are achievable at the same time by simply transmitting the ciphertext blocks: on the one hand secrecy would be ensured, and on the other hand the receiver can compute the plain text and thus has all the information for authentication at his disposal. Why does that not suffice in principle? (A sketch of the two uses of CBC follows after part f).)

b) Show with a simple attack that the scheme is still insecure if one attaches a block full of zeros to the plain text before encryption, in order to obtain at least a minimum of redundancy.

c) Consider the mode PCBC in §3.8.2.5. Show that your attack does not work there (if one proceeds in the same way, i.e., attaching a block full of zeros, transmitting the ciphertext, and expecting secrecy as well as authentication).

d) Determine, for the modes described in §3.8.2, what an attacker can achieve if he has possibilities for adaptive active attacks. Assume that he cannot break the block cipher used. (As usual, it is assumed that the attacker knows the cryptosystems used; in particular, he knows their block length and the mode used.)

e) You want to work with ECB, CBC (be it for secrecy, be it for authentication) or PCBC, but you do not know whether you will receive plain texts of a suitable length (i.e., a length divisible by the block length without remainder). The advice often given, "then pad with zeros up to the block length", does not satisfy you, because it should make a difference whether one receives "I owe you € 10" or "I owe you € 100000". Design a coding

1. that maps plain texts of arbitrary length to such a suitable length,

2. that is uniquely invertible,

3. whose length expansion is minimal.

f) OFB allows neither random access nor parallelization, because the state of the shift register must be computed from scratch each time. Design an obvious modification of OFB that allows random access and parallelization.
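A Python sketch of the two uses of CBC from part a), built on a deliberately insecure toy "block cipher" (an affine permutation of one byte) of my own, just to make the chaining visible:

    K1, K2 = 91, 34                        # toy key; K1 odd, so E is invertible

    def E(x):                              # toy block cipher on one byte
        return (x * K1 + K2) % 256

    def D(y):
        return (y - K2) * pow(K1, -1, 256) % 256

    def cbc_encrypt(blocks, iv):
        out, prev = [], iv
        for b in blocks:
            prev = E(b ^ prev)             # each block is chained into the next
            out.append(prev)
        return out

    def cbc_decrypt(blocks, iv):
        out, prev = [], iv
        for c in blocks:
            out.append(D(c) ^ prev)
            prev = c
        return out

    msg, iv = [3, 141, 59, 26], 0
    ct = cbc_encrypt(msg, iv)              # secrecy: transmit the ciphertext
    assert cbc_decrypt(ct, iv) == msg
    mac = ct[-1]                           # authenticity: plain text + last block
    print(ct, mac)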

3-18a Mirror attack
We assume that we want to replace passwords for LOGIN by something better, but we have dumb terminals and an insecure network between terminals and mainframe. Therefore each participant has a "pocket calculator" which implements a secure symmetric block cipher (cp. the footnote in §2.2.1) but has no electrical connection to the terminal; all information transfer via display and keyboard has to pass through the participant. What will you do if you assume that the mainframe itself is secure? Which attacks will the system you design not withstand?

3-19 End-to-end encryption
Which problems arise if one wants to carry out end-to-end encryption using an office computer, i.e., a computer to which one is not the only person with access? Which approaches to a solution do you see?

3-20 End-to-end encryption without prior key exchange
Our cryptographer couple Bob and Alice is - as always - separated, and this time Alice travels by air.

a) And once again the luggage got lost, so keys and crypto devices are not available. How could the two communicate somewhat confidentially and authentically? (A tip: you have to assume that the attacker cannot be everywhere at all times.)

b) The baggage was recovered and returned. But someone tried to find out the keys in the crypto devices, so the keys were erased automatically. How could Alice and Bob improve their communication now?

3-21 Identification and authentication by an asymmetric system of secrecy
How can a participant P, whose public key c_P of an asymmetric system of secrecy is known to a participant U, be identified by U, cp. §2.2.1?

a) Design a simple cryptographic protocol. (The existence of such a protocol is surprising, since asymmetric systems of secrecy were defined only with the aim of confidentiality. But the protocol sought does exist; a sketch follows after part d).)


b) Against which attacks must the system of secrecy in your small cryptographic protocol be secure?

c) Can you improve your small cryptographic protocol so that it not only identifies P but also authenticates messages N_i? (As an incentive: such a protocol exists.) Is your authentication "system" asymmetric or symmetric, i.e., does it provide digital signatures? What is the disadvantage compared to "normal" authentication systems?

d) Must the asymmetric system of secrecy in your improved protocol resist active attacks as strong as those in b)?
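For part a), a sketch of the obvious candidate protocol (compare exercise 2-3), with toy RSA standing in for the asymmetric system of secrecy; whether this candidate is actually sound is precisely what parts b) to d) ask you to examine.

    import secrets

    # P's key pair of the asymmetric system of secrecy (toy RSA).
    p, q, e = 1009, 1013, 5
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # known only to P

    # U: pick a random number and encrypt it with P's public key c_P = (n, e).
    challenge = secrets.randbelow(n - 2) + 2
    sent = pow(challenge, e, n)

    # P: decrypt with the secret key and return the result.
    answer = pow(sent, d, n)

    # U: accept P's identity iff the answer matches the random number.
    assert answer == challenge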

3-22 Secrecy by symmetric authentication systems
How can a participant A, who shares only a symmetric authentication system with a participant B, communicate with B secretly? Please do not simply use the confidentially exchanged key k_AB for a symmetric system of secrecy; it really is a matter of only an authentication system. (The existence of such a protocol is surprising, since symmetric authentication systems were defined only with the aim of integrity. But the protocol sought does exist.)

3-23 Diffie–Hellman key exchange

a) Main idea: What is the main idea of the Diffie-Hellman key exchange? (A sketch follows after part d).)

b) Anonymity: Diffie-Hellman key exchange, as described in §3.1.1.1, differs from asymmetric systems of secrecy, according to §3.9.1, in whether the sender of an encrypted message can stay anonymous towards the receiver.

For asymmetric systems of secrecy this is trivially the case: the sender obtains the public key, encrypts the message with it, and sends it to the receiver. The receiver decrypts the message independently of who encrypted it.

This is different with Diffie-Hellman key exchange. The secret key, which is assigned to the relationship between sender and receiver, is used for symmetric encryption and decryption. Thus it seems that no anonymity of the sender towards the receiver is possible.

Devise a modification of the Diffie-Hellman key exchange in which anonymity of the sender towards the receiver is possible. And what must be done if the sender wants to receive an answer anonymously?

c) Individual choice of parameters: Diffie-Hellman key exchange, as described in §3.9.1, differs from asymmetric systems of secrecy according to §3.1.1.1 in that the security-critical parameters (concretely p and g) are generally known and therefore equal for all participants. The question immediately arises of who chooses them, and whether that party thereby gains special power. For it is not verifiable whether p and g were chosen randomly, or in some special way that makes extracting discrete logarithms particularly easy for the chooser. If one lets an external instance choose p and g, one needs a stronger security assumption than the discrete-logarithm assumption. Alternatively, many instances can choose p and g together, but this in turn requires a cryptographic protocol.

Consider, within the scope of this exercise, whether you can modify the Diffie-Hellman key exchange in such a way that each participant can choose all security-critical parameters himself.

d) DSA (DSS) as a basis: On May 19th, 1994, a standard for digital signatures within and with all public authorities in the USA was fixed once and for all by the National Institute of Standards and Technology (NIST), the successor of the National Bureau of Standards that standardized DES in 1977: the Digital Signature Standard (DSS), given by the Digital Signature Algorithm (DSA) [Schn 96, page 483-495]. The steps of key generation, certification and publication that are of interest to us are:

A huge prime number p and a randomly chosen number g ∈ Z*_p are public (and possibly equal for all participants).

Each participant chooses a number x with a length of approx. 159 bits and keeps it secret as part of his signature key.

Each participant computes g^x mod p, which is certified and published as part of his test key.

Is this enough to perform Diffie–Hellman key exchange?
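A sketch of the basic exchange for part a), with toy parameters of my own (a real p has many hundreds of bits):

    import secrets

    p = 2579                              # public prime
    g = 2                                 # public base

    x_a = secrets.randbelow(p - 2) + 1    # one party's secret exponent
    x_b = secrets.randbelow(p - 2) + 1    # the other party's secret exponent

    y_a = pow(g, x_a, p)                  # exchanged over the public network
    y_b = pow(g, x_b, p)

    k_ab = pow(y_b, x_a, p)               # both compute g^(x_a * x_b) mod p
    k_ba = pow(y_a, x_b, p)
    assert k_ab == k_ba                   # the shared secret key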

3-24 Blindly obtained signatures with RSA
Calculate a small example of blindly obtained signatures with RSA. You can save calculation effort by reusing results of exercise 3-16 b). (A sketch follows below.)
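A sketch with the toy key from the RSA sketch above; the requester blinds m with r^e, the signer signs blindly, and dividing out r leaves an ordinary signature on m (all values are of my own choosing):

    p, q, e = 11, 23, 3
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))

    m = 42                                  # message to be signed blindly
    r = 5                                   # blinding factor, gcd(r, n) = 1

    blinded = m * pow(r, e, n) % n          # requester -> signer
    blind_sig = pow(blinded, d, n)          # signer: (m * r^e)^d = m^d * r mod n
    sig = blind_sig * pow(r, -1, n) % n     # requester removes r

    assert pow(sig, e, n) == m              # an ordinary RSA signature on m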

3-25 Threshold scheme
Split the secret G = 13 into 5 parts, of which k = 3 are necessary to reconstruct it. Use the smallest possible prime number p, and the random coefficients a_1 = 10 and a_2 = 2. (In practice, p must not be chosen in such strong dependence on G; this choice is only meant to ease your calculation and to show that you have understood the conditions on p in relation to G.)
Reconstruct the secret using the parts (1, q(1)), (3, q(3)) and (5, q(5)). (A sketch follows below.)
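A Python sketch with the exercise's values; p = 17 is my guess for "the smallest possible prime" (it must exceed both the secret and the number of parts) - check it against the conditions in the script.

    # Shamir's threshold scheme: q(x) = G + a1*x + a2*x^2 over GF(p).
    p = 17
    G, a1, a2 = 13, 10, 2

    def q(x):
        return (G + a1 * x + a2 * x * x) % p

    shares = {x: q(x) for x in range(1, 6)}     # the 5 parts (x, q(x))
    print(shares)

    def reconstruct(points, prime):
        """Lagrange interpolation at x = 0 over GF(prime)."""
        secret = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * -xj % prime
                    den = den * (xi - xj) % prime
            secret = (secret + yi * num * pow(den, -1, prime)) % prime
        return secret

    assert reconstruct([(x, shares[x]) for x in (1, 3, 5)], p) == G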

3-26 Provably secure use of several systems of secrecy?
Now that you know that there are information-theoretically secure and efficient threshold schemes, it is worth looking at exercise 3-3 again. How do you combine multiple systems of secrecy such that the resulting overall system is provably at least as secure as the most secure of the systems used? (You may assume that the systems of secrecy used are designed for uniformly distributed plain texts - as arise with minimum-length coding - and that "security of a system of secrecy" means security for uniformly distributed random plain texts. Likewise, you may assume that the parts generated by the information-theoretically secure and efficient threshold scheme are uniformly distributed random plain texts.)

Compare the effort of the provably secure combination with that of the combination described in exercise 3-3.

Compare the security achieved by each combination.

Does an especially simple implementation of a threshold scheme come to mind for exactly this application?

4 Exercises for “Basics of Steganography”

4-1 Terms
The following picture shows term trees created analogously to the trees in the section about cryptography, cp. exercise 3-0. Newly created - and taking some getting used to - is the term steganology. I thought about using the shorter term stegology, but decided on the longer, grammatically correct term. Some authors use the term information hiding for the area of steganology, and especially for the area of steganography. They divide information hiding into two parts: steganography (what I call steganographic secrecy) and copyright marking (what I call steganographic authentication). Analogously to shortening cryptoanalysis to cryptanalysis, the shorter term steganalysis is used for stegoanalysis.

Describe the textual structure of the term trees and the advantages and disadvantages of using the shorter term stegosystem either generally for steganographic systems or especially for steganographic systems of secrecy.

Steganology
├── Steganography
└── Stegoanalysis

steganographic system
├── steganographic system of secrecy
└── steganographic authentication system
    ├── Watermarking
    └── Fingerprinting

4-2 Properties of input and output of cryptographic and steganographic systems

a) Draw a scheme (block diagram) containing all general parameters (input and output values) of a cryptographic system of secrecy. What are the differences between symmetric and asymmetric systems of secrecy? Which assumptions about its inputs may a good cryptosystem make, and which requirements must it fulfill regarding its outputs?


b) Draw a scheme (block diagram) containing all general parameters (input and output values) of a steganographic system of secrecy. What are the differences between symmetric and asymmetric systems of secrecy? Which assumptions about its inputs may a good stegosystem make, and which requirements must it fulfill regarding its outputs?

c) What are the main differences between cryptographic and steganographic systems of secrecy?

4-3 Is encryption superfluous when good steganography is used?
Why is it actually unnecessary to encrypt the secret message before embedding it, if a secure steganographic system is used? And why would you nevertheless do it in practice?

4-4 Examples of steganographic systems
Name some examples of steganographic systems. How do you assess the security of the systems you have named?

4-4a Modeling steganographic systems
A group of students discusses the modeling of steganographic systems shown in figures 4-2, 4-3, 4-4 and 4-5. Substantial things were forgotten, says Anja Foresight: We need an analysis before embedding, which cooperates with a database of all wraps used so far. Only in this way can we avoid using improper wraps for embedding.

Right, agrees Berta Viewback. But it would be much better to do the analysis after embedding - let's call this algorithm evaluation. The task of the evaluation is to check whether the stegotext is inconspicuous. If yes, the stegotext is output; if no, it is rejected.

Don't argue, remarks Caecilie Baseman. We simply do the analysis before embedding as well as the evaluation after embedding. This is the general case, and we all agree, don't we: better safe than sorry! I'll draw it for you:

[Block diagram sketched by Caecilie: key generation produces a secret key k; an analysis stage, cooperating with a database, selects a wrap H for the content x; embed(k, H, x), also fed with a random number, produces the stegotext; an evaluation stage either outputs the stegotext or rejects it; on the receiving side, extract(k, stegotext) yields the modified wrap H* and the content x.]

Doerthe Reinhardt raises her eyebrows and objects: No, I don't like any of your improvement suggestions. The original model is sufficient.

Whom do you agree with - and above all: how do you justify your choice?

4-5 Steganographic secrecy with public keys?
Ross Anderson describes in [Ande4 96, AnPe 98, page 480] the following procedure for steganographic secrecy:

Given is a wrap into which one can somehow embed. Then there usually exists a rate at which bits can be embedded without the stegoanalyst noticing. . . . Alice knows Bob's public encryption key. She can take her message that is to be kept secret, encrypt it with Bob's public key, and then embed the resulting ciphertext. Every possible receiver will then simply try to decrypt the extracted message, but only Bob will succeed. In practice, the value encrypted with the public key could be a control block consisting of a session key and some padding. The session key would be used to operate a conventional steganographic system.

Ross Anderson calls this public key steganography. Do you agree with this term? Give reasons for your decision.

4-6 Enhancement of security by using several steganographic systems
What can you do if you have several steganographic systems, each based on a different security assumption, and you do not want to trust any one of these steganographic systems (resp. any one of the security assumptions) alone? Distinguish between the security aims secrecy (i.e., you have several systems of secrecy) and authentication (i.e., you have several authentication systems).

4-7 Cryptographic and steganographic secrecy combined?
Steganographic secrecy subsumes the functionality of cryptographic secrecy, so that, if the steganographic secrecy can be assumed to be secure, no combination with cryptographic secrecy is useful. Is a combination useful, and how should it be done, if there are doubts about the security of the steganographic secrecy?

4-8 Cryptographic and steganographic authentication - in which order?
Steganographic authentication does not protect the plain text from small unrecognized alterations; a digital signature does, but it can easily be removed by anyone. Can you combine both? If so, in which way?

4-9 Stego capacity of compressed and non-compressed signals
Can one embed, unrecognized, a greater percentage in compressed or in non-compressed signals? Distinguish between lossy and lossless compression, and give reasons for your answer.

5 Exercises for chapter "Security in communication networks"

5-0 Quantification of unobservability, anonymity and unlinkability
In §5.1.2 definitions of unobservability, anonymity and unlinkability are given, and it is emphasized that their strength depends on the underlying attacker model and, where appropriate, on a classification. Unobservability, anonymity and unlinkability are defined to be perfect if the relevant probability is the same for the attacker before and after his observations. Try to quantify unobservability, anonymity and unlinkability, i.e., find useful a) parameters resp. b) gradations between the respective extremes:

• the attacker finds out nothing – the attacker finds out everything, and

• the attacker knows nothing – the attacker knows everything

5-1 End-to-end or connection encryption
Due to disclosures about plentiful wiretapping by secret services (e.g. the former GDR secret service Stasi), the management of a company decides to encrypt all future telecommunication between its offices. How can, how should this happen? Do your measures amount to connection encryption or rather to end-to-end encryption? Or do you suggest using both?

5-2 Encrypt first and then encode fault-tolerantly, or the other way round?


In many communication networks, error-detecting and even error-correcting codes are used that are adapted to the typical error behavior of the transmission channels. Depending on the channel, one has to reckon with zeros becoming ones or ones becoming zeros, with errors occurring singly or in bursts, etc. In networks with adapted fault-tolerant coding, should one encrypt first and then encode fault-tolerantly, or should one prefer the reverse order? Explain your answer.

5-3 Treatment of address and error-detection fields in case of encryption
Given is the plaintext message 00110110, which is to be sent to the address 0111 (in this exercise, every coding given is binary). The usual message format consists of a 4-bit address, an 8-bit message and a 2-bit check character for detecting transmission errors in the address or the message:

Address | Message | Check character

The 2-bit check character is created by adding address and message two bits at a time modulo 4. For the above address and plaintext message, the check character is 10, so that altogether the message is 01110011011010. (A small sketch follows after part c).)

a) Let the above message be end-to-end encrypted with a one-time pad. Let the one-time pad be 0110011111110001101010001101 . . . What does the end-to-end encrypted message look like if encryption is addition modulo 2?

b) Let the above message be connection-encrypted using the one-time pad. What does the encrypted message look like if encryption is addition modulo 2?

c) Let the above message be both end-to-end encrypted and connection-encrypted with the above one-time pad. What does the encrypted message look like if encryption is addition modulo 2?
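A small Python sketch of the message format, and of encrypting the whole frame (one reading of part b)); the helper names are mine:

    def pairs(bits):
        return [int(bits[i:i + 2], 2) for i in range(0, len(bits), 2)]

    def check_char(address, message):
        return format(sum(pairs(address + message)) % 4, "02b")

    def otp(bits, pad):
        return "".join(str(int(b) ^ int(p)) for b, p in zip(bits, pad))

    address, message = "0111", "00110110"
    frame = address + message + check_char(address, message)
    print(frame)                            # 01110011011010, as stated above

    pad = "0110011111110001101010001101"
    print(otp(frame, pad))                  # the whole frame encrypted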

5-4 Encryption with connection-oriented and connectionless communication services
For communication services, one distinguishes between connection-oriented and connectionless communication services. The former guarantee that messages are received in the same order in which they were sent; the latter do not guarantee this. Do you have to take this into account when choosing an encryption algorithm and its mode of operation? If yes, what do you have to pay attention to?

5-4a Requesting and Overlaying

a) Compute a small example for the method “Requesting and Overlaying”.

b) Why must the communication between participant and server be encrypted? Must the server's answer to the participant be encrypted, too?


c) How do you implement "Requesting and Overlaying" with usual cryptographic means if the transmission bandwidth from participant to server is very small? How, if the transmission bandwidth from server to participant - but not among the servers - is very small? Can you combine both solutions skillfully?

d) What do you think of the following suggestion: to confuse the servers, the participant randomly generates an additional request vector, sends it to an additional server, receives its answer and ignores it. None of the s + 1 servers used knows whether it is being used as a normal server or as the additional, ignored one.

5-4b Comparison of “Distribution” and “Requesting and Overlaying”

a) Compare "Distribution" and "Requesting and Overlaying". Consider especially how open implicit addresses can be implemented as efficiently as possible with "Distribution" and with "Requesting and Overlaying".

b) Can you imagine communication services for which "Requesting and Overlaying" is necessary because "Distribution" is not applicable?

5-5 Overlayed Sending: an example

Given is the example of superimposed sending shown in the following figure. Fill in the missing values in the upper part of the figure, in which calculation is modulo 2. After that, calculate the same example modulo 16 in the lower part of the figure. Which values may resp. should you copy from the upper part of the figure? Why? (A small sketch follows after the figure.)


[Figure: binary overlayed sending (alphabet: 0, 1) in the upper part, generalized overlayed sending (alphabet: 0, 1, ..., F) in the lower part; some values are left blank for the reader to fill in. Stations A, B and C each superimpose their message with their pairwise keys and broadcast the local sum; A sends the real message 10110101, B and C send the empty message 00000000; pairwise keys e.g. 00101011 (A with B) and 00110110 (A with C). The superimposed and broadcasted global sum is the real message of A.]
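If you want to verify your figure entries, one round of overlayed sending is easy to simulate. The following Python sketch is our own illustration, processing one digit per round: each pairwise key is added by one partner and subtracted by the other (modulo 2 this is the same operation), so the keys cancel in the global sum and only the real message remains. Setting mod=16 gives the generalized lower part of the figure.

    import secrets

    def dc_round(messages, mod=2):
        # one round of overlayed sending among n stations
        n = len(messages)
        keys = {(i, j): secrets.randbelow(mod)
                for i in range(n) for j in range(i + 1, n)}
        outputs = []
        for i, m in enumerate(messages):
            out = m
            for (a, b), k in keys.items():
                if a == i:                  # one partner adds the key
                    out = (out + k) % mod
                elif b == i:                # the other subtracts it
                    out = (out - k) % mod
            outputs.append(out)             # local sum the station broadcasts
        return sum(outputs) % mod            # global sum

    # only station A sends a real message; B and C send empty messages
    print(dc_round([5, 0, 0], mod=16))      # -> 5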


5-6 Overlayed sending: Determining a fitting key combination for an alternative message combination

What must the keys look like in the example of the previous exercise so that station A has sent the empty message 00000000 and station B has sent the real message 01000101, with the same local sums and outputs (and therefore the same global sum)? Which real message must station C have sent then? Is the key combination unique? Is it unique if it is additionally given that the key between A and B stays the same because the attacker knows it? (A has given the key to B on a floppy disk which B, after having copied its content to his hard disk, has not physically destroyed or at least overwritten multiple times; he has merely deleted it, i.e. the operating system has marked the key as deleted. After that, B has written very few data to the almost empty floppy and gave it to a friendly man, who unexpectedly acted as an attacker.) What are the other keys in this case?
Hint: Even if you calculate this exercise only with addition modulo 2, you should differentiate between + and -. This of course only plays an important role with bigger moduli, thus in this exercise only with modulus 16.

5-7 Multiple-access methods for additive channels, e.g. overlayed sending

a) Reservation method
The following figure shows how reservation in a reservation frame avoids collisions of message frames if, after the reservation, message frames are only used by those stations whose reservation resulted in 1 as sum. What would happen in this example if the reservation frames were superimposed not modulo a greater number but modulo 2?

[Figure: stations T1 to T5 superimpose their reservation frames (e.g. 01000, 00000, 01010, ...); the sum 03110 shows how often each slot was reserved, and only stations whose slot sums to 1 use the subsequent message frames.]

b) Pairwise overlayed receiving
In a DC net in which pairwise overlayed receiving is not supported, two friends, Scrimpy and Scrapy, agree to literally double the bandwidth of their communication by pairwise overlayed receiving. What do they have to find out first? Perhaps the following example will help you: Scrimpy has sent 01101100 and receives 10100001 as sum. He asks himself: What has Scrapy sent to me? What is the answer if the overlayed sending takes place modulo 2, what if modulo 256? (We assume that the multiple-access method does not cause difficulties for


Scrimpy and Scrapy. For instance, one may allow that one of the two reserves a (simplex) channel which both use for pairwise overlayed sending.)
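The arithmetic behind pairwise overlayed receiving is just digit-wise subtraction of one's own contribution from the received sum; modulo 2 this is the bitwise XOR. A minimal Python sketch with the numbers of the example (our own illustration):

    def other_message(own, received_sum, mod):
        # subtract the own contribution digit-wise from the received sum;
        # for larger moduli the two first have to agree on who adds and who
        # subtracts (cp. the hint in exercise 5-6); modulo 2, + and - coincide
        return [(s - o) % mod for o, s in zip(own, received_sum)]

    own  = [0, 1, 1, 0, 1, 1, 0, 0]     # what Scrimpy sent
    recv = [1, 0, 1, 0, 0, 0, 0, 1]     # the sum he receives
    print(other_message(own, recv, 2))  # modulo 2: the bitwise XOR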

c) Global overlayed receiving: collision resolution algorithm with averaging
The participant stations 1 to 5 send the messages 5, 13, 9, 11, 3. How does collision resolution with averaging and overlayed receiving proceed? Let the local decision criterion be: information unit ≤ ⌊Ø⌋, with Ø denoting the average. Use the notation given in figure 5-14 of the script. How does it proceed if the participant stations send 5, 13, 11, 11, 3?

5-8 Key, overlaying, and transmission topology in case of overlayed sending

a) Why is it possible to specify the key topology independently of the overlaying and transmission topology?

b) Why should the overlaying and transmission topology be matched to each other?

5-9 Coordination of overlaying and transmission alphabets in case of overlayed sending
With which “trick” was it possible to achieve a huge overlaying alphabet (useful, for example, for the collision resolution algorithm with averaging) and a minimal time delay, i.e. a minimal transmission alphabet, at the same time?

5-10 Comparison of “Unobservability of adjacent wires and stations as well as digital signal regeneration” and “overlayed sending”

a) Compare the two concepts for sender anonymity, “Unobservability of adjacent wires and stations as well as digital signal regeneration” and “overlayed sending”, with regard to the strength of the attackers considered, effort, and flexibility. Which concept is the more elegant one? Why?

b) Do these concepts cover the receiver's anonymity as well? If applicable, what does the interplay of sender and receiver anonymity look like?

5-11 Comparison of “overlayed sending” and “MIXes that change the encoding”
Compare the concepts “overlayed sending” and “MIXes that change the encoding” with regard to security aims, strength of the attackers considered, effort, and flexibility. Which of these is the more practicable concept in your opinion? In which respect and why?

5-12 Order of the basic functions of MIXes
To what extent is the order of the basic functions of a MIX in figure 5-22 necessary?

5-13 Does the condition suffice that output messages of MIXes must be of the same length?


What do you think about the following suggestion (following the maxim of the retired chancellor Kohl: what counts is the result): MIXes may mix input messages of different lengths together. The most important thing is that all corresponding output messages have the same length.

5-14 No random numbers → MIXes can be bridged
Explain how an attacker could bridge a MIX if the direct scheme of changing the encoding to guarantee sender anonymity (cp. §5.4.6.2) is used without random numbers.

5-15 Individual choice of a symmetric system of secrecy at MIXes?
MIXes publish their public cipher keys and thereby specify the asymmetric system of secrecy to be used. In §5.4.6.1 (symmetric secrecy at the first and/or at the last MIX) and in §5.4.6.3 (indirect scheme of changing the encoding), two applications of symmetric secrecy at MIXes are introduced. Should each user of the MIX network be able to choose the symmetric system of secrecy used, or should it be determined by the MIX?

5-15a Less memory effort at the generator of anonymous return addresses
It is described in §5.4.6.3 how anonymous return addresses can be created. There it is costly for the generator that it has to memorize all symmetric keys under the unique name e of the return address until the return address is used. This can take a while and is therefore cumbersome. Consider how the generator of return addresses could avoid memorizing the symmetric keys and even the unique names.

5-16 Why change the encoding without changing the length of a message
Explain how an attacker could bridge the MIXes shown in the upper part of figure 5-26. (It is trivial, but try to explain it precisely.)

5-17 Minimal length expansion in a scheme for changing the encoding without changing the message length for MIXes

a) Describe briefly the communication and encryption effort of MIX networks.

b) You have at your disposal an asymmetric system of secrecy which is secure against adaptive active attacks, and an efficient symmetric system of secrecy. Both work (after keys have been chosen) on blocks of the same length. Design a scheme for changing the encoding for MIXes which expands the message minimally at the sender and which does not change the length of the message during the steps of changing the encoding.

For simplicity, derive this minimally expanding scheme for changing the encoding from the symmetric system of secrecy with the properties k^{-1}(k(x)) = x and k(k^{-1}(x)) = x, i.e. not only en- and decryption but also de- and encryption are inverse to each other, given in §5.4.6.5, cp. figure 5-31.

5-18 Breaking the direct RSA implementation of MIXes
What is the main idea of the attack on the direct RSA implementation of MIXes described in §5.4.6.8?


5-19 Wherefore MIX–channels?

a) Why and for what are MIX–channels necessary?

b) What limits their use, resp. what leads to the “development” of time-slice channels?

5-19a Free order of MIXes in case of a fixed number of used MIXes?
Ronny Freeroute and Andi Cascadius are arguing again about whether a free choice of MIXes and their order should be preferred, or whether so-called MIX cascades should be used instead, i.e. MIXes and their order are fixed. Contrary to the usual scheme of discussion (

Ronny Freeroute: A free choice of the MIX order would be better, because everyone is free in his choice that way (trust in, and usage of, the MIXes on the way), and, if only a few MIXes attack, larger anonymity groups arise.

Andi Cascadius: MIX cascades are to be preferred, because even with a maximal number of attacking MIXes the largest possible anonymity group arises, and intersections of anonymity groups are either empty or contain the whole anonymity group

) a pale-looking Ronny says one day: “What if the attacker controls a maximal number of MIXes and knows that, with a free choice of the order of MIXes, everyone uses a fixed number of MIXes? Then the attacker has a good chance to bridge the one MIX that is not attacking, too.” What does Ronny mean, i.e. what could a simple attack on a MIX network with free choice of route and a fixed route length chosen by the participants look like? And what would you do against it?

5-20 How to ensure that MIXes receive messages from enough different senders?
As explained at the beginning of §5.4.6.1, MIXes have to process information units from sufficiently many senders in one batch - if they come from too few senders, the danger is high that one of them can be observed by the few others despite the MIXes.

How can compliance with the requirement “information units from sufficiently many senders” be ensured

a) at the first MIX of a MIX cascade,

b) at a later MIX of a cascade, i.e. not the first one?

Otherwise a MIX could, for instance, substitute all information units in its output batch except one by information units produced by itself, and trace the non-substituted information unit despite all following MIXes this way.

5-21 Coordination protocol for MIXes: wherefore and where?
For which schemes of changing the encoding is a coordination protocol between MIXes necessary? Give reasons for your answer.


5-22 Limitation of responsibility areas - functionality of network terminals
Why should the necessary areas of trust between participant and network provider overlap as little as possible? Explain this for all security measures in communication networks introduced so far. (Figure 5-48 might be of help.)

5-23 Multilaterally secure Web access
Consider multilaterally secure Web access: Which security problems exist? Which participant has which protection interests? Then apply all the concepts you have learned to this new problem. (This larger-scale exercise is meant to arouse your curiosity and to practice what you have learned. It is therefore okay to look in the Web for existing concepts after having analyzed the problem. And I will even allow myself to point to literature when giving the solution.)

5-24 Firewalls
Many companies and authorities control the traffic between public networks, e.g. the Internet, and their in-house network, e.g. LAN or intranet, by so-called firewalls (the name comes from fire protection in buildings and refers to a wall which impedes, or at least delays, the spreading of fire). Do you think this is useful? What exactly does it achieve? Where are its limits? Can it substitute admission and access control on workplace computers?

6 Exercises for “value exchange and payment systems”

6-1 Electronic Banking – comfort version

a) What do you think about the following paper-banking comfort version: The customer receives pre-filled forms from his bank in which only the amount must be filled in. It especially saves time for the customer because he no longer has to sign the forms.

b) Does this paper-banking comfort version differ from the electronic banking version that is usual today?

c) Does this change if customers receive a chip card from their bank with pre-loaded keys for MACs or digital signatures?

6-2 Personal relation and linkability of pseudonyms
Why do role pseudonyms offer more anonymity than personal pseudonyms, and transaction pseudonyms more anonymity than business-relation pseudonyms?

6-3 Self-authentication vs. foreign authentication
Explain self-authentication and foreign authentication and the difference between the two. Give typical examples. Why is the distinction between self-authentication and foreign authentication important for the design of multilaterally secure IT systems?


6-4 Discussion of security properties of digital payment systems
The security properties described in §6.4 only consider availability and integrity of payment procedures, thus only the most essential ones. Create a list of the security aims that you consider necessary. Order them by confidentiality, integrity, and availability.

6-5 How far do concrete digital payment systems fulfill your security aims?
Check known digital payment systems against the security aims you elaborated in exercise 6-4. Examples could be

• tele-banking

• homebanking computer interface (HBCI); http://www.bdb.de/, search for HBCI

9 Exercises for “Regulation of security technologies”

9-1 Digital signatures in “Symmetric authentication allows concealment”
Is it possible to use digital signatures instead of symmetric authentication codes in the procedure “Symmetric authentication allows concealment” (§9.4)? If not, why not?

9-2 “Symmetric authentication allows concealment” vs. steganography
What do “Symmetric authentication allows concealment” (§9.4) and steganography (§4) have in common? What is the difference?

9-3 Crypto regulation by prohibition of freely programmable computers?
From the procedures in §9.1 - §9.6 it is clear that crypto regulation is not enforceable as long as freely programmable computers are available. Assume that all states agree to prohibit the production of freely programmable computers and that they could eventually succeed in enforcing this prohibition. Assume further that the decades in which already-sold freely programmable computers still worked have elapsed - thus there are no freely programmable computers anymore. Is it then possible to enforce a crypto regulation? Asked more precisely: Is it then possible to enforce a crypto regulation which allows decrypting the e-mail traffic of a suspect after a permission for observation has been given - but not before?


B Answers

1 Answers for “Introduction”

1-1 Connection between spatial distribution and distribution regarding the control and implementation structure

Spatially distributed IT systems usually allow input at more than one place. Those inputs (if not ignored) take effect on the system state. If you want to implement a central control structure through an instance with a global view, every incoming input needs to be communicated to it. Only after this may the state transition take place. Since information can be transported at most at the speed of light, and since today's computers execute more than 10^7 instructions per second, you have:

300,000 km/s ÷ 10^7 instructions/s = 30 m/instruction

You would consume more time for transmission than for processing.¹ While computation performance is steadily increasing (and the structural size is decreasing) but the speed of light remains constant, this argument becomes more and more of an issue. Physical maintenance in spatially distributed IT systems is quite costly, especially right after a failure. But IT systems having a distributed control and implementation structure allow postponing certain maintenance tasks (even after a serious failure), since a single failure won't bring down the whole system. Further arguments can be found in the next answer.

1-2 Advantages and disadvantages of a distributed system regarding security

a) Advantages: Confidentiality can be supported by everybody keeping their data under their own physical control → end of external control over information: it is rendered obsolete to give away your own data and hope for the best.

¹You could argue that you have to consider the instructions of the users but not of the computer, which would result in a completely different picture (since man acts much slower than any computer). But from my point of view this is not right: since it isn't about whether it is a distributed system from the user's point of view but about the implementation structure (and therefore at least one level deeper), machine operations must be regarded as the units.

Another objection could be that not machine instructions but hard disk accesses must be regarded as the operations, since they make the state transition persistent and they can be rolled back for fault tolerance reasons anyway. This is surely the right view for system design - which implicitly introduces a distributed control and implementation structure for computers and systems.


Integrity can be supported by everybody seeing their own data before somebody else accesses them and confirming whether they are valid or not. Of course this process can be automated by having a PC validate the data. Besides, the protocolation can be distributed to complicate faking or suppressing of the protocol records.

Availability can be supported by not having a single point of failure for attacks with explosives or logic bombs.

Disadvantages: Confidentiality can be endangered by the fact that data can be moved freely, which makes controls difficult. It is doubtful that there is always a legally responsible entity, named “speichernde Stelle” (storing place) in privacy laws. Integrity and availability can be endangered simply by having more possible points of attack. Just think of the many physical connections (especially wireless ones) that can hardly be secured, or the many computers standing around unsupervised.

b) Spatial distribution supports availability against physical threats. Distribution also of the control and implementation structure supports confidentiality and integrity.

1-3 Advantages and disadvantages of an open system regarding security

a) Advantages: Open systems can be connected more easily into cooperating distributed systems. Thereby the advantages of distributed systems mentioned above can be achieved more easily. The compatible interfaces of open (sub)systems allow an easier creation of overall systems with different designers, producers and maintainers. As seen in §1.2.2, diversity can heavily increase security.

Disadvantages: Open systems can also be easily connected into cooperating distributed systems. So the disadvantages of distributed systems mentioned above can occur more easily as well. In particular, a user of an open system usually doesn't know who has which rights and under which role name.

b) They therefore amplify each other, so anything said about distributed or open systems holds true for distributed open systems.

1-4 Advantages and disadvantages of a service-integrating system regarding security

a) Advantages: Service-integrating systems can be allowed to be more costly, since all efforts are shared by several services. Relatively seen, this allows more effort regarding security. On the one hand this allows a more careful design and its validation (like testing for Trojan horses). On the other hand you can employ more redundancy for increasing availability, like additional physical connections.


Maybe even the expensive measures for protecting the anonymity of sender and receiver (cp. §5) are more likely to be achieved.

Disadvantages: If a security property is violated, this surely has an impact on several services concurrently. This turns out to be catastrophic if no spare systems are at hand (this would be a totally service-integrated system [Pfit 89, pg. 213ff][RWHP 89]). Access must be granted even to those users that only make use of part of the services offered. For the unused services those participants are an unnecessary security risk.

b) Service integration amplifies everything said here (independent of the type of the system), but this applies especially to distributed open systems.

1-5 Advantages and disadvantages of a digital system regarding security

a) Digitally transmitted or stored information can be altered or copied perfectly (= without trace) if the attacker is willing to invest a certain minimal amount of effort. This amplifies the problems regarding confidentiality and integrity, unless the cryptographic techniques from §3 are used (which only become applicable by actually using digital media). In general, availability is supported by digital transmission and storage.

Computer-aided relaying and processing are vulnerable regarding confidentiality, integrity and availability. On the other hand, their tremendous capabilities allow protection measures that wouldn't be practical otherwise. With an appropriate system configuration this disadvantage can be more than balanced out. If you're confident about the correctness and completeness of the protection measures, they should rather be implemented in hardware than in software. Remember: software-implemented protection often turns out to be soft!

b) The arguments from 1-5a show that the property “digital” supports the construction of distributed systems.

As long as an open system is not complete (especially if it is incompletely specified), computer-aided relaying and processing (both freely programmable) are necessary to make the system adaptable to future developments.

Today it is inevitable that service-integrating systems are digital.

1-6 Requirement definitions of four example applications in the medical field

a) Availability: Medium-critical, as long as there are local originals printed on paper. In case of failure the situation remains as it is today.

Integrity: Critical, since nobody wants to do additional lookups in the paper backups, nor update them by hand.

Confidentiality against external doctors: critical; against third persons: highly critical. The patient may request access or deletion rights that are to be granted. An international harmonization would be difficult.


b) Availability: Uncritical, as long as the surgery team on site doesn't rely on the remote help. Otherwise: highly critical.

Integrity: Small integrity violations (e.g. some broken pixels) are uncritical. Larger violations (bad images) are critical. To enable the patient (or his relatives!) to prove professional blunder, it must be reproducible who saw what and who gave which advice (even after many years). In particular, a clear identification (cp. §2.2.1) of the external adviser is needed.

Confidentiality: Confidentiality of the images: less critical; if they are transmitted at the beginning of the case history, the content of course needs to be protected. The protection of the sender and receiver identity is redundant ([PfPf 92, §4.2]).

c) Availability: Medium-critical, since failure “only” leads to today's situation. As long as patients remain at home who otherwise would be in a hospital: critical.

Integrity: A false alarm is uncritical but shouldn't be triggered too often. The opposite case: see availability.

Confidentiality of an alarm is less critical; all other message contents as well as the patients as sender and receiver need to be protected ([PfPf 92, §4.3]).

d) Availability: Uncritical.

Integrity: Not very critical, since for critical cases a doctor is compulsory. Anyway, when accessing a medical database it must be clear who is responsible for its content. If doctors become dependent on such a database, the requirements towards availability and integrity increase heavily.

Confidentiality of the message content and of the request itself must be secured by anonymity ([PfPf 92, §4.1]).

Of course I’m not sure about the completeness of my protection goals.

1-7 Where and when are computers to be expected?

The numbers show the year in which an appreciable spread of computer-aided components (within the respective component class) took place or is predicted. In the near future you must consider switching centers, “harmless” end-user devices, or even the user's network connection units as potential attackers.


[Figure: example network with participants 1 and 2, their network connection units and participants' access lines, switching centers 1 and 2, an interactive videotext server and a bank; services: visual telephone, videotext, telephone, television, radio. Each component is annotated with the year of appreciable spread of computer-aided components and the parties who may attack through it: 1985/manufacturer and user, 1985/manufacturer and operator, 1995/manufacturer, 2000/manufacturer.]

1-8 Deficits of legal provisions

The answer consists of a combination of the arguments of §1.2.3 and §1.2.5.

A prohibition is only useful if its adherence can be verified with reasonable effort (principally impossible for observing attacks), if it is backed by criminal prosecution, and if it is possible to restore the original state. None of this holds for IT systems.

“Data theft” in general, and specifically the direct eavesdropping of connections or the copying of data from computers, is hardly detectable since the original data won't change. The same holds for the installation of Trojan horses or the unallowed processing of data you legally (or illegally) obtained.

Reconstructing the original state would mean deleting all data created. But you can never be sure that there aren't any further copies. Besides, data can remain inside the human brain, where deleting is not a worthwhile option.

Parallel to the problem of “data theft” there is the problem of data loss: maybe lost data can't be restored or reproduced, so that the original state can't be brought back.

So more effective measures are needed.

Annotation: For IT systems crossing national borders, an international agreement (expressed by laws) and its enforcement (by international courts) is needed. While there are many such IT systems, they are covered by few international agreements.


1-9 Alternative definitions/characterizations of multilateral security

“The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations. ... Individuals stand to gain in increased convenience and reliability; improved protection against abuses by other individuals and by organizations, a kind of equal parity with organizations, and, of course, monitorability and control over how information about themselves is used. ... the advantages to individuals considered above apply in part to organizations as well. ... Since mutually trusted and tamper-resistant equipment is not required with the systems described here, any entry point to a system can be freely used; users can even supply their own terminal equipment and take advantage of the latest technology.” from [Cha8 85]

“Meanwhile, in a system based on representatives and observers, organizations stand to gain competitive and political advantages from increased public confidence (in addition to the lower costs of pseudonymous record-keeping). And individuals, by maintaining their own cryptographically guaranteed records and making only necessary disclosures, will be able to protect their privacy without infringing on the legitimate needs of those with whom they do business.” from [Chau 92]

“Considering the growing importance of legal affairs conducted in open digital systems, there arises the need to make the usage of those systems unobservable for non-participants and anonymous for participants, while keeping up the necessary legal certainty.” from [PWP 90]

“The term multilateral security is therefore used here to describe an approach aiming at a balance between the different security requirements of different parties. In particular, respecting the different security requirements of the parties involved implies renouncing the commonly used precondition that the parties have to trust each other, and, especially, renouncing the precondition that the subscribers have to place complete trust into the service providers. Consequently, each party must be viewed as a potential attacker on the other and the safeguards have to be designed accordingly.” from [RDFR 97]

“Multilateral security means to respect all security needs of all participating parties.” from [Rann 97]

“Multilateral security means to include the security interests of all participants as well as to deal with the resulting conflicts during the establishment of a communication connection.” from [FePf 97]

“Multilateral security means to include the security interests of all participants as well as to deal with the resulting conflicts during the establishment of a communication connection. The creation of multilateral security does not inevitably lead to the fulfillment of all the participants' interests. Possibly it reveals incompatible interests


the participants were not aware of, since protection goals are formulated explicitly. But it inevitably leads to an interaction of the participants using multilateral communication systems to achieve a certain balance of power.” from [Fede 98]

Multilateral security is security with minimal assumptions about others:

• Every participant has security interests.

• Every participant can express his interests.

• Conflicts are to be recognized and solutions to them negotiated.

• Every participant can enforce his security interests within the negotiated solution.

from [FePf 99]

“... new developments, e.g. the rising need and demand for multilateral security, which cover the requirements of users and usees as well as those of IT system owners and operators. ... The TCSEC had a strong bias on the protection of system owners and operators only. This bias is slowly losing strength, but can still be seen in all criteria published afterwards. Security of users and usees, especially of users of telecommunication systems, is not considered. Therefore techniques providing bi- or multilateral security, e.g. those protecting users in a way privacy regulations demand it, cannot be described properly using the current criteria.” from [Rann 94]

“Multilateral security emphasizes the equal protection of all participants. The protection goals of privacy and liability are considered an important basis with regard to the personal freedom of man and obliging social behavior.” from [PoSc 97]

“While for classical security models the operator's view stands in the foreground, multilateral security respects the participating users' view. Beside the protection the networks offer to their users, the user needs mechanisms for self-protection to reduce the dependency on others. Not only are they protected from attacks from the network, but they also keep up their mutual interests, like buyer and seller exchanging money and receipts, sometimes even with notary supervision.” from [GGPS 97]

“Multilateral security is a user-centric feature. The protection demanded is finally for the people and organisations that employ telecommunication systems to do their tasks. However, security serves not only the communication partners but also all others that are somehow connected to the partners or the content transmitted ...” from [RoSc 96]

“Multilateral security: Whereas the introduction of new communication media and services primarily targeted the security of the provider (e.g. against unpermitted usage), now the security of the participant is to be emphasized.” from [BDFK 95]


“It seems quite legitimate to demand that users are to be treated equally regarding their security rights. If you grant every user that their security does not depend on the goodwill of others but only on their own behaviour, such a system would be truly multilateral.” from [PSWW 96]

“Multilateral security means the upholding of morals in an information-technical sense.” a student in winter term 99/00

2 Answers for “Security of single computers and its limits”

2-1 Attackers outside of the schoolbook’s wisdom

First of all, an attacker with unlimited capacity regarding computation power, time and memory comes to mind. Fortunately we will see that there are corresponding information-theoretic techniques to keep up security (confidentiality and integrity) against such a complexity-theoretically unlimited attacker.

However, it is harder to defend against attackers who possess unknown technical, physical or chemical innovations (like new measuring devices, explosives, ...).

And it becomes highly critical if they possess the ability to control new physical phenomena: new kinds of radiation, fields, etc.; or, applied to humans: supernatural senses (X-ray vision, clairvoyance) or abilities (walking through walls, invulnerability, telekinesis). This is not meant to start a parapsychological discussion but as a warning that perfect security can only be achieved in a closed world with no new perceptions. Luckily we live in an open world with new perceptions around every corner. Here are some examples of such attacks:

• Unpermitted information gaining: Against an attacker with clairvoyant abilities, neither shielding (cp. §2.1) nor cryptography (cp. §3) helps to protect confidentiality.

• Unpermitted modification of information: Against an attacker with telekinetic abilities, shielding (cp. §2.1) doesn't protect integrity. Against an attacker with telekinetic and clairvoyant abilities, cryptography (cp. §3) won't help either.

• Unpermitted affecting of functionality: Against an attacker with telekinetic abilities, neither shielding nor (finite) fault tolerance will help to maintain availability.

2-2 (In)dependence of confidentiality, integrity, availability, with identification as example

You can't expect any “secure” IT system if at least one of the three protection goals comes entirely without requirements:


• If there are no requirements regarding availability, then no IT system needs to be realized at all. With that, all of its (existing) information is fine regarding integrity and confidentiality.

• If there are no requirements regarding integrity, an IT system couldn't distinguish between different people, since the information needed for that could be modified without permission. The same applies to distinguishing different IT systems.

• If there are no requirements regarding confidentiality, an IT system couldn't identify itself towards others, since an attacker could gather all necessary information from the IT system. The other way round is not that easy: “Since an IT system can't keep any information confidential, it can't recognize other instances, since anybody can obtain all the information by which the IT system distinguishes other instances.” This argument seems consistent, but the IT system could apply a collision-resistant hash function (cp. §3.6.3.3, §3.8.3) to the information expected from the other instance and save only the result. Later on, this result is compared with the hash of the information actually presented (see the sketch below).
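A minimal sketch of this hash trick in Python (the names are ours; any collision-resistant hash function would do):

    import hashlib

    # store only the hash of the information expected from the other instance
    stored = hashlib.sha256(b"identification info of the other instance").digest()

    def recognize(presented: bytes) -> bool:
        # later: hash what is presented and compare with the stored digest
        return hashlib.sha256(presented).digest() == stored

    print(recognize(b"identification info of the other instance"))  # True
    print(recognize(b"something else"))                              # False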

2-3 Send random number, expect encryption - is this working conversely, too?

The difference is subtle:

With the normal version you only have to assume that random numbers won't repeat (with any relevant probability) during random number generation.

With the reversed version you additionally have to assume that the random number generation is unpredictable for the attacker. Actually this should be true for any random number generator (just as the name suggests), but safety first.

The difference becomes clear with an extreme example: the normal version stays secure even if the random number generator is a counter that is incremented after every “random number”. With this implementation of a random number generator, the reversed version would be completely insecure.
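A minimal sketch of the normal version in Python, with a keyed hash standing in for the encryption (our own illustration, not the script's protocol notation):

    import hashlib, secrets

    key = secrets.token_bytes(16)             # shared by verifier and prover

    def respond(k: bytes, challenge: bytes) -> bytes:
        # stands in for encrypting the challenge under the shared key
        return hashlib.sha256(k + challenge).digest()

    # normal version: the verifier picks the random number (challenge);
    # it must not repeat, but need not be unpredictable
    challenge = secrets.token_bytes(16)
    response = respond(key, challenge)          # computed by the prover
    print(response == respond(key, challenge))  # verifier's check: True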

For those interested further: in [NeKe 99] you'll find, next to a detailed explanation of this example, a worthwhile introduction to a formalism (BAN logic) with which you can examine such protocols.

2-4 Necessity of protocolation in case of access control

a) The confidentiality problem is reduced with every level of the “recursion”, since the amount of personally identifiable data is (hopefully) getting smaller and also less sensitive. Or the other way round: the problem is not really recursive, since access to the secondary data can be secured far more strictly than access to the primary data. For example, you could print out the secondary data and thereby prevent subsequent manipulation by spatially remote people. Furthermore,


it is not useful to create the secondary data too thoroughly; they should become less. Finally, you can reduce the circle of people that may access the secondary data to some very trustworthy ones. Hopefully this reduces the circle that actually accesses them. If you introduce tertiary data, you may have them evaluated by privacy commissioners other than those for the secondary data. Then the controllers control each other. Assuming non-alterable protocols are used, you could realize a time-staggered 2n-eye principle (corresponding to the known, not time-staggered, 4-eye principle) for n levels.

b) You should strive for a strict and detailed regulation of who may do what under which circumstances. Then the amount of protocolation can be reduced: as much protocolation as needed, but no more.

c) Here the “sensitivity” doesn't decrease with every level but increases: the regulation of who may grant rights is at least as security-critical as the enforcement of the right and its limits. As in a), the granting of rights may be subject to stronger and different security regulations than the usage of rights, the granting of rights to grant rights to others to even stronger ones, and so on.

2-5 Limiting the success of modifying attacks

Modifying attacks should be a) detected quickly, and all unpermitted modifications should be b) localized as precisely as possible. After that, c) the state that would exist without the modifying attack should be recovered as well as possible.

a) Beside the commonly applicable information-technical techniques like

• error-detecting encoding for transmission and storage,

• validation of the authenticity of all messages before their processing (authentication codes or digital signatures, cp. §3), so that faked commands in the messages won't be executed at all; the same for storage content that can be re-read from unprotected areas (like floppy discs, CDs, USB media devices),

there exist special plausibility tests for every single application. For example, a person can travel at most at about 1000 km/h. This can be used to discover that an attacker pretends to be somebody else who reported earlier from somewhere else.

b) Here protocolation of who accessed what and when helps (cp. §2.2.3).

c) Here you must distinguish between hardware and data (incl. programs). Hardware has to be repaired or replaced very quickly, or you need alternatives like backup computer centers and alternative communication networks. The usage of hardware is mostly a financial problem.

For data (incl. programs) you will need backups. Here the following dilemma exists: on the one hand, the most current state possible should be


recovered so that a minimum of work is lost. On the other hand, if unpermitted modifications took place, the state should be rolled back to a past state that was unmodified. What now? The backup system must allow setting the recovery distance dynamically. This can be done by conducting daily, weekly, monthly and yearly full backups. The daily backup may be overwritten weekly, the weekly one monthly, and the yearly backup never, please - your company should afford that many streamer tapes. An alternative would be to make a backup from time to time but also to keep a logfile in which all changes (incl. the exact values) are recorded. Using the current state and the logfile you can roll back to any previous state desired - just as you can go forward from an old backup with the logfile. Please remember to back up the logfile as well. And of course you can combine both approaches.
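A minimal sketch of the backup-plus-logfile idea in Python (values and structure are illustrative only): with a full backup and a log of all changes, you can roll forward to any later state or roll back to any earlier one.

    backup = {"a": 1, "b": 2}                 # full backup of some state
    log = [("a", 1, 5), ("b", 2, 7)]          # (key, old value, new value)

    def roll_forward(state, log):
        s = dict(state)
        for key, old, new in log:             # replay the recorded changes
            s[key] = new
        return s

    def roll_back(state, log):
        s = dict(state)
        for key, old, new in reversed(log):   # undo the changes in reverse
            s[key] = old
        return s

    current = roll_forward(backup, log)       # -> {'a': 5, 'b': 7}
    print(roll_back(current, log) == backup)  # True: any state is reachable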

2-6 No hidden channel anymore?

Unfortunately no! If our military really needs its computers, there is the following hidden channel: every time a computer fails, the military will order a new one. If they do so within the next 24 hours, and if you assume that computers fail every 10 years (= 3652.5 days, including 2.5 leap days), then there is a hidden channel of almost (assuming a perfect computer that won't fail except for the usage of the hidden channel: exactly)

ld 3652.5 bit

and therefore more than 11 bits in 10 years (ld = binary logarithm; ld 3652.5 ≈ 11.8).

Of course you can object that disloyal staff represents a far bigger hidden channel. But I'm only considering the information-technical point of view as well as the usually overlooked aspects.

By the way, the loyal staff may be a hidden channel of bigger bandwidth, too: consider a Trojan horse placed in a computer where it can influence the video output of the terminal by simply worsening the image quality. Then the attacker only has to observe whether the staff leaves the hermetically shielded rooms with reddened eyes.

2-7 Change of perspective: The diabolical terminal

The properties for a) come first, followed by the motives of the participants to use such a device. The consequences are stated after the implication arrow. Catchwords you can find in the PC handbook are marked in italics.

Using the device for as many purposes simultaneously as possible, to make its inner workings as inscrutable as possible. The motivation for the user to tolerate this is that he can use his data for many purposes, which makes the device look like it has a good price-performance ratio → service-integrating terminal device.


The device should count as useful, nifty and advanced, or even as indispensable → service-integrating device; useful, nifty, advanced, small, colorful, communicative ... given by framework agreements for big companies or public institutions.

The device should be modifiable → programmable device; dynamic loading of software: plug-ins.

The responsibility for changes should belong to someone who doesn't completely understand what he himself is doing, but doesn't realize that and has good reasons for doing these things. The user himself would be a perfect match. Then others can say: blame yourself ...

As much computation power and storage as possible, used for opaque purposes, so that the resource usage of (universal) Trojan horses won't attract attention.

Since Trojan horses won't work perfectly at the first go, system crashes should be considered normal.

Opaque and complex closed-source programs.

Proprietary, publicly undocumented data formats instead of openly documented or standardized ones. So Word instead of RTF.

No access control (so that every program can influence every other one), not to mention access control employing the least-privilege principle. → Windows 98 and Mac OS 9 don't have any access control. Windows 2000 and Linux don't have access control following the least-privilege principle.

The device should have bidirectional interfaces that are as universal as possible, so the user can connect to everything pretty easily. Especially good are public open networks → Internet. On the one hand for dynamically reloading software, on the other hand to enable attackers to access their Trojan horses as often as possible without notice.

Giving away user data (content, user ID, or the ID of the device) the user isn't aware of → transferring plain data over open networks like the Internet; adding the creating device of every object into the digital representation of the object (DCE UUID of OLE or the Microsoft GUID = Globally Unique Identifier). Device IDs like Ethernet addresses or processor IDs (Pentium III) undermine the anonymity of the user since they can be transferred without notice.

3 Answers for “Basics of Cryptology”

3-0 Terms


Cryptology comprises cryptography (the science of the algorithms of the “good” users) as well as cryptanalysis (the science of the algorithms used by attackers on cryptography). Knowledge in the area of cryptanalysis is necessary to evaluate the security of cryptographic systems and their applications.

Cryptographic systems are either concealation systems (protection goal: confidentiality) or authentication systems (protection goal: integrity). Authentication systems are either symmetric (and therefore not suitable as legal evidence) or asymmetric, and therefore suitable as legal evidence towards a third person. Because of this important property, asymmetric authentication systems are also called digital signature systems.

The advantage of placing terms like cryptosystem, encrypting and decrypting at the top of the term tree is that short common terms can be used almost everywhere. The disadvantage is that the distinction between concealation and authentication is pushed into the background, so I will use these terms only for cryptographic concealation systems - if used in the more general meaning, I will use quotation marks. My term tree looks like this:

[Term tree: Cryptology comprises cryptography and cryptanalysis. Under cryptography, the cryptographic system (“crypto-system”; “encrypt”, “decrypt”) divides into the cryptographic system of secrecy (concealation system) and the cryptographic authentication system; the latter divides into the symmetric authentication system and the digital signature system.]

3-1 Regions of trust for asymmetric concealation systems and digital signa-ture systems

The secret key can be protected (regarding confidentiality and integrity) by generating it inside the device (i.e. the region of trust) in which the key is used. The following pictures arise:


[Figures: asymmetric concealation system and digital signature system with trusted areas. In both cases, key generation takes place inside the secret zone of the device that uses the secret key; only the public key (encryption key c resp. key t for signature verification) leaves the trusted area, while the secret key (decryption key d resp. signing key s) never does. Concealation: plaintext x is encrypted to the ciphertext c(x), decryption yields x = d(c(x)). Signing: text x is signed to x, s(x); verification outputs “ok” or “false”. Random numbers parameterize the key generation.]

A backup copy of the signing key is not necessary, since signatures remain testable and therefore valid even if the key is destroyed. The signing key should never leave the signing device. If the key is not available anymore, you just create another key pair, get the public part certified - and there you go. The same applies to communication with asymmetric concealation: after generating and exchanging


the keys, the message is simply encrypted once again. If using asymmetric concealation for storing data, you either need plaintext backups of the data, or backups of the encrypted data and the decryption key.

3-2 Why keys?

a) You can do without keys, too: for symmetric cryptography, two users need to exchange a secret, exclusive cryptographic algorithm. For asymmetric cryptography, you'll have to publish a public algorithm from which it is impossible to derive the secret one.

b) The disadvantages are numerous and fatal:

– Where do users get good cryptographic algorithms from? Eventually non-cryptographers, too, want to communicate confidentially and with integrity, without anybody else (including the cryptographer) breaking their security. Finally, the cryptographer who created the algorithm would need to be “removed”, since he knows the secret. While this was done in ancient times, today all people (even cryptographers) are subject to the human rights.

– How to analyze the security of cryptographic algorithms? Anybody analyzing a symmetric algorithm is a security risk, just like the inventor. For asymmetric algorithms, anybody who knows the secret algorithm in order to validate its security is a security risk.

– Implementing the cryptographic systems in software, hardware or firmware is possible only if the user is able to do this on his own. Otherwise, the same applies to the implementor as to the algorithm inventor.

– In public systems where potentially everybody wants to communicate securely with anybody else, only software implementations with algorithm exchange instead of key exchange would be possible - firmware is machine-dependent, and hardware exchange would require physical transportation. But even software implementations have the disadvantage of requiring the transmission of a huge amount of bits (for symmetric systems confidential and integrity-protected, for asymmetric systems integrity-protected): algorithms are usually larger than keys.

– A standardization of cryptographic algorithms would be impossible due to technical reasons.

c) The main advantage of keys is: the cryptographic algorithms can be published. This allows:

– a broad and public validation of their security,

– their standardization,

– any implementation form desired,

– mass production,

– a validation of the implementation's security.


3-2a Key adjustment for user autonomous key generation?

The objections against participant-autonomous decentralized key generation are wrong (cp. above). Thus this proposal is unnecessary and, regarding the confidentiality of the secret keys - as you know - very dangerous.

Of course you can't totally rule out that the random number used for parameterizing the key generation occurs more than once, with the consequence that the secret part of the key may be generated more than once. But the conclusion that key generation therefore needs to be coordinated is pretty obscure. If the same random number occurs more than once, either the RNG is bad or the number is too short. In both cases it won't help to use the random number once and discard it if it occurs again, because nothing would prevent an attacker from creating equally long random numbers with the same generator. Since this can't be prevented, only one thing helps: carefully picking and validating the random number generator as well as making the random numbers long enough. Anything else is window dressing, which distracts from the real problem.

Besides: if keys are compared, it is sufficient to do so with the public keys. So key generation can stay participant-autonomous - the key comparison becomes part of the key certification.

3-3 Increasing security by using several cryptographic systems

For the protection goal of confidentiality you may use cryptographic systems sequentially. The plaintext is encrypted with concealation system 1, the result is encrypted with concealation system 2, and so on (cp. fig. 3.2, where k_i is the key of the i-th concealation system: S := k_n(...k_2(k_1(x))...)). Of course every system needs its own unique key, and the order of application must be the same on both sides.

Decryption works the other way round, ending with concealation system 1, which reveals the plaintext: x = k_1^{-1}(k_2^{-1}(...k_n^{-1}(S)...)).

[Figure: the plaintext x passes through encryption 1 (key k1), encryption 2 (key k2), ..., encryption n (key kn), yielding k_n(...k_2(k_1(x))...); decryption n down to decryption 1 applies the inverses in reverse order and recovers x.]

Of course, asymmetric systems can be used that way, too; even a mixed usage of symmetric and asymmetric concealation systems is possible. In the example you would, for every i, 1 ≤ i ≤ n, replace the k_i on the encryption side with c_i and on the decryption side with d_i.

This construction only works if the ciphertext space of concealation system i is a subspace of the plaintext space of concealation system i + 1. In practice this can be easily achieved (cp. §3.8.1, §3.8.2). If plaintext and ciphertext have the


same length in all n systems, this measure won't affect memory and transmission complexity (most of the time it will increase, though, due to the arrangement into blocks). The computation complexity is the sum of the computation complexities of all the single concealation systems.

Besides: the concealation system created by using several concealation systems sequentially is called the product cipher of the concealation systems used.
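A minimal sketch of a product cipher in Python, with two toy XOR “ciphers” standing in for real concealation systems (our own illustration; the point is only the independent keys and the reversed order on decryption):

    import secrets

    def enc(key: bytes, data: bytes) -> bytes:
        # toy concealation system: XOR with an equally long key
        return bytes(d ^ k for d, k in zip(data, key))

    dec = enc                                  # XOR is its own inverse

    x = b"plaintext!"
    k1 = secrets.token_bytes(len(x))           # independently chosen keys
    k2 = secrets.token_bytes(len(x))

    S = enc(k2, enc(k1, x))                    # S = k2(k1(x))
    print(dec(k1, dec(k2, S)) == x)            # decrypt in reverse order: True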

For the protection goal of authentication you can use the cryptographic systems in parallel: for every system the authenticator of the original message x is created, and all of them are appended to x one after the other (with separators; cp. fig. 3.6: x, k_1(x), ..., k_n(x)). The receiver checks every authenticator separately: they all have to be valid.

Of course you may use digital signature systems that way, too; even a mixed usage is possible. In the example you would, for every i, 1 ≤ i ≤ n, replace the k_i on the authentication side with s_i and on the testing side with t_i.

Here, too, the keys used must be picked independently.

Computation complexity as well as memory and transmission complexity for the authenticators are exactly the sum over all single systems. The computation can be done in parallel, so that the whole system (properly implemented) is as fast as the slowest authentication system used.

[Figure: parallel authentication - message x is authenticated with each of the n systems (keys k1, ..., kn) and transmitted as x, k1(x), ..., kn(x); the receiver tests each authenticator separately and outputs “ok” or “wrong”.]
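A minimal sketch of the parallel construction in Python, using HMAC instances as the n authentication systems (our own illustration):

    import hashlib, hmac, secrets

    keys = [secrets.token_bytes(16) for _ in range(3)]   # independent keys

    def authenticate(msg: bytes):
        # one authenticator per system, all appended to the message
        return [hmac.new(k, msg, hashlib.sha256).digest() for k in keys]

    def test(msg: bytes, tags) -> bool:
        # the receiver checks every authenticator; all must be valid
        return all(hmac.compare_digest(t, hmac.new(k, msg, hashlib.sha256).digest())
                   for k, t in zip(keys, tags))

    msg = b"message x"
    print(test(msg, authenticate(msg)))                  # True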

For the protection goal of concealation as well as for the protection goal of authentication, you'll need additional memory to house the software that implements the cryptographic systems.

About the provability of both constructions:


The parallel construction for authentication can be trivially proved secure, since nothing about the usage of the single systems changes, and especially not the distribution of messages and authenticators/MACs.

The sequential construction for concealation can't be proved secure at all. The distribution of the inputs of “encryption 2” up to “encryption n” is changed by the construction - instead of the plaintext, a ciphertext is encrypted. This may happen in such an unfortunate way that encryptions 2 up to n won't contribute to security at all, so that the sequential construction would be only as secure as “encryption 1”. In practice this would surely only occur if “encryption 2” up to “encryption n” were explicitly designed to be that “unfortunate” (in exercise 3-26 you will find a provably secure - in a certain sense - but more costly construction for using several concealation systems).

Conversely, the proof for the parallel authentication construction also shows that it is only as secure as it is hard to break all of the systems used. And conversely, the corresponding proof for the sequential concealation construction fails as well, mainly because the attacker doesn't have the intermediate results (the ciphertexts between the concealation systems used) - this is what may make his task considerably harder than merely breaking all the concealation systems used.

Discussion of wrong solutions:

To encrypt the plaintext with several concealation systems, some of which may be insecure, the message is split into as many parts as there are concealation systems, and every part is encrypted with a different system. Disadvantage: the probability that an attacker can decrypt at least some parts of the message is usually increased, not decreased.

To authenticate the message with several authentication systems, you create the 1st validation record with the first authentication system. The second system is used to authenticate the 1st validation record, creating the 2nd validation record, and so on. Disadvantage: if an attacker can break the first system by finding another message for the 1st validation record, all other authentication systems are without effect (because the 1st validation record isn't altered for the other, matching message, the remainder can stay untouched). Besides, the computation of the validation records wouldn't be as nicely parallelizable.

To save key exchange effort, the same key is used for several cryptographic systems (if possible at all). Disadvantage: in many cases the resulting system is not more secure, sometimes even weaker than the weakest of the systems used. This can easily be shown for authentication: assume all systems use the same key. Then it is clear that breaking one system results in the breaking of all systems. And if several systems allow effectively gaining information about the key, the sum of all this information may be sufficient for a successful attack.

Discussion of clumsy solutions:


You use authentication system i not only to create a validation record for messagex but also for the validation records 1 to i− 1. This solution is safe of course butunnecessarily expensive:

a) The calculation effort of the validation records is increasing proportionallywith the length of the string to be authenticated.

b) The calculation of the validation records is no longer parallelizable, since validation record i depends on validation record i − 1.

3-4 Key exchange protocols

a) The functionality of one key distribution center has to be provided by several ones. For this, either:

– all key distribution centers directly know each other's keys, or

– the key distribution centers require a meta key distribution center.

In the latter case two key distribution centers need to exchange two keys like two normal participants do.

From now on we will assume that the two key distribution centers ZA and ZB, with which the participants A and B have exchanged keys, have a common key.

Symmetric system: Now one of the two key distribution centers may generate a key and transmit it confidentially to the other (i.e. ZA generates key k and sends it to ZB encrypted with kZAZB, their common key). Then every key distribution center securely tells "its" participant the key (i.e. ZA sends kAZA(k) to A, and ZB sends kBZB(k) to B).

[Figure: ZA and ZB, participants A and B. ZA sends kZAZB(k) to ZB; ZA sends kAZA(k) to A; ZB sends kBZB(k) to B.]

Authentication of public keys: The public key of every participant is authenticated by "its own" center and sent to the other participant's center (ZA sends the key cA to ZB, signed with sZA). The other participant's center checks this authentication and then authenticates the key itself, so that the receiver can check this second authentication (ZB checks with tZA and signs cA with sZB).


[Figure: ZA sends cA, sZA(cA) to ZB, and ZB passes cA, sZB(cA) on to B; symmetrically, ZB sends cB, sZB(cB) to ZA, and ZA passes cB, sZA(cB) on to A.]

The resulting key exchange is of course only as confidential as the weakest key-exchange sub-protocol.

Annotation: If the authentication is done by a digital signature system, there is an alternative: the center of the other participant does not authenticate the public key of the first participant, but the public key of his center. Then the participant can validate the authentication of the other participant's public key on his own.

[Figure: ZA sends cA, sZA(cA) to ZB, which passes cA, sZA(cA) together with tZA, sZB(tZA) on to B; symmetrically, ZB sends cB, sZB(cB) to ZA, which passes cB, sZB(cB) together with tZB, sZA(tZB) on to A.]

The advantages of this alternative are

– Centers have to store less (in the previous alternative ZB has to store cA, sZA(cA), since only then can ZB justify having issued the certificate cA, sZB(cA)).

– It is clear who has checked what (and who is legally responsible).

– It is transparent to the participant how many steps the authentication of the public key took.

– This alternative can be applied more generally (cp. exercise part 3-4g).

The disadvantages are:

– The participants have to check several signatures to verify the authenticity of a key.

From my point of view the advantages clearly prevail - and they will grow with the participants' computation power.

b) Optional: Probably your solution consists of several key parts that are exchanged via two different key distribution centers (one per country), and the participants add them modulo 2. This is better than exchanging the key as a whole. However, not all key distribution centers of a country need to cooperate in order to obtain the key: it is sufficient if one of the two cooperates. This means that the participants' decision which centers in their country shall not cooperate at all is circumvented. Supposing that all key distribution centers have exchanged keys with each other, there is a better solution that respects the local decision of both participants:

In the first phase, one of the participants exchanges key parts via all of his key distribution centers with all of the other participant's centers. Then the participant and each of the other participant's key distribution centers add their parts modulo 2 to gain a secret key. If the participant has a key distribution centers and the other one has b, then b times a partial keys (so a · b in total) are exchanged to gain b keys.

In the second phase, those b secret keys (exchanged with the key distribution centers of the other participant), along with the keys exchanged between the other participant and his key distribution centers, are used to exchange b partial keys between the participants (which of course have nothing in common with the partial keys from phase 1) (cp. fig. 3.3).

c) Yes, parallelizing is useful for asymmetric systems too. For a single (or a few) key registers R there are known altering attacks: R could hand out wrong keys for which the key register knows the secret keys. As the following pictures show, it could on the one hand unnoticeably break the confidentiality of messages by re-encrypting them; on the other hand it could fake messages by re-signing or newly signing them.

Altering attack on concealation by reencryption of messages

Altering attack on authentication by resigning of messages

The parallelizing is done like this: every participant A2 publishes his own public key to several public key registers. Every participant B requests A's public key from these registers and compares them. If some of them are not equal, either the signature system was broken (modification on the way to participant B), or at least one of the key registers is cheating, or participant A didn't tell every register the same key. The latter case can be excluded by "honest" key registers synchronizing their key entries. Again you can't distinguish all cases clearly, so availability is not guaranteed.

To make at least some cases distinguishable, the participant who passes a public key to a key register should provide a written statement showing which key is his public key. The other way round, the participant should receive a statement from the register showing which key they have put into their database. This

2 This is pretty simple but not precise. It should be read as: every participant in a role that has been used by participant A before ...


second signature can also be a digital signature, since the register's public test key is known. After this exchange, any disputes between the two parties are as easy to solve as the weakest of the two systems is secure.

There are at least two variations to parallelize asymmetric cryptographic systems:

a) If public keys are to stay valid forever, you don't have to check for their expiration. Hence every key register may store the other key registers' signatures together with the keys. Then a participant may request the key of another participant, and all the signatures associated with it, from a single key register. This saves additional communication. Nothing changes for the signature validation.

b) If you have doubts about the security of the asymmetric cryptographic system, you may use several of them instead of one (cp. exercise 3-3). And instead of using a single, multiply authenticated key you would use multiple keys, corresponding to the number of systems used. For the concealation systems KSi, the ciphertext of KSj is encrypted with KSj+1. For the digital signature systems, the plain text is signed and the signatures are concatenated.

d) Yes. As the next figure shows, you conduct the known key distribution protocols, one for symmetric and one for asymmetric concealation, in parallel:3

A participant B then sends a key k4 of the symmetric concealation system to the other one. That key is concealed with the public key cA (known to B) of his partner A. Both partners use the sum k of all the symmetric keys (k1, k2, k3) they received from the key distribution protocols plus the additional key k4. To acquire that sum, an attacker would have to gather all summands, and this would require the passive help of all key distribution authorities and breaking the asymmetric concealation system.

After the key exchange you have to use a Vernam cipher (one-time pad) for concealation and an information-theoretically secure authentication code for authentication. If you suspect active attacks from key registers, the single key register should be extended by further ones (cp. part c).

Annotation: The following variation of the question can be relevant in practice: We do not assume that the attacker (while not receiving passive assistance from the key distribution authorities) is complexity-theoretically unlimited, but "only" able to break asymmetric cryptographic systems. Then you need neither a Vernam cipher nor information-theoretically secure authentication codes.

e) If the key is used within a symmetric concealation system, the partner can't decrypt correctly anymore. If the key is used in a symmetric authentication

3 If you want to resist altering attacks on the public key registers, you can and should use the parallelizing for the public key registers as well (not shown in the figure to keep it simple; cp. answer to exercise 3-4c).


system, messages that were "correctly" authenticated would be classified as "fake", too.

To prevent any hidden alteration of the keys, they should also be authenticated instead of only being concealed (in which order this is to be done is the subject of exercise 3-13). If the authentication of the messages used for the key distribution is valid, the circle of suspects is limited to the key distribution authorities and the two participants. The participants should make sure they have the same keys by running a testing protocol for key equality right after the key exchange. This can be done by each of them sending encrypted random numbers, which the partner returns in plain text. (This little crypto protocol is of course only safe if the symmetric concealation system used withstands at least a chosen ciphertext-plaintext attack (cp. §3.1.3.2) and if an altering attacker with knowledge of the inconsistent keys positioned between the participants can be excluded. Additionally you need to prevent the "mirror attack" from exercise 3-18 by having one participant send the random numbers and wait for the response before the other one does so. Finally, you can't use the Vernam cipher with this protocol, because then the attacker would learn the exact region of the key that is being tested for equality. In short: we need a better testing protocol or a very good block cipher. The former is shown next.)

First of all we will look at a wrong solution. However, it will lead us in the right direction:

Within the network it is known how to create checksums over strings (namely parity bits, Cyclic Redundancy Check (CRC), ...). The participants create such a checksum, and one of them sends the result to the other. If the checksums are equal, both participants may assume that (with very high probability) they have the same key. So where is the error in this reasoning? Simply said, checksums are made to detect stupid errors, not intelligent attacks. For the usual checksums it is easy to find many different values that have the same checksum, so an attacker can use this to distribute inconsistent keys; the two participants would then recognize this only when they use the key. A clever checksum would be a process whose outcome the attacker can't predict!

In [BeBE 92, page 30] (with reference to [BeBR 88]) you will find the following nice idea for an information-theoretically secure testing protocol for key equality: several times in a row, the participants pick randomly (and independently of the previous picks) half of all key bits and tell each other their sum mod 2 (if the keys don't match, this is discovered in each iteration with a probability of 1/2). So that the eavesdropping attacker learns nothing about the key bits that remain in use, one bit of the compared half is discarded afterwards (for example the last one); the key is thus shortened by one bit per iteration. This is no problem because you only need a few iterations: after n steps the probability that the result is right is 1 − 2^−n, and for n > 50 sufficiently high. (For this testing protocol for key equality, too, an altering attacker between the two participants who knows the inconsistent keys must be excluded: he could modify the sums sent so as to make them look consistent to the other participant.)
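
A minimal local simulation of this testing protocol in Python (our illustration; in practice each participant computes his parity himself and the subset choice is communicated between the two sides):

    import secrets

    def parity_round(key_a, key_b):
        # One round: both sides compare the parity of the same random
        # half of the key bits, then discard one of the compared bits
        # so that the announced parity reveals nothing about the rest.
        n = len(key_a)
        subset = secrets.SystemRandom().sample(range(n), n // 2)
        equal = sum(key_a[i] for i in subset) % 2 == sum(key_b[i] for i in subset) % 2
        drop = subset[-1]
        return equal, [b for i, b in enumerate(key_a) if i != drop], \
                      [b for i, b in enumerate(key_b) if i != drop]

    def keys_probably_equal(key_a, key_b, rounds=50):
        # Unequal keys survive each round with probability 1/2,
        # so the error probability after n rounds is 2**-n.
        for _ in range(rounds):
            equal, key_a, key_b = parity_round(key_a, key_b)
            if not equal:
                return False
        return True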

One key distribution authority: If only symmetric authentication is used for the key exchange (so neither digital signatures nor unconditionally secure pseudo-signatures; cp. §3.9.3), then from each participant's view either the other participant or the key distribution authority is lying (this has long been proven within the context of the Byzantine Agreement protocols [LaSP 82]). Either the participants give up, or they pick, if possible, another key distribution authority.

Several key distribution authorities in sequence: If the two participants detect inconsistencies in their keys, the key distribution authorities, too, should check the consistency with the same method and, if possible, circumvent attacking key distribution authorities. This corresponds to changing the key distribution authority in the case of "one key distribution authority".

Several key distribution authorities in parallel: If the participants detect inconsistencies in the sum of their keys, they should repeat the testing protocol for equality for every single one of the added keys. All inconsistent ones are omitted from the sum. If not enough keys are left to be added, the participants should use additional key distribution authorities or even abstain from communication under these circumstances. Otherwise an altering attacker between the participants could thwart the inclusion into the sum of exactly those keys that are unknown to him.

f) Ideas (maybe incomplete; further ones are welcome):

a) Keys contain a part representing their expiration time. Then the participants only need correctly working clocks for their security. Disadvantage: If a key is compromised before its expiration, its usage can't be stopped.

A related solution would be to limit the number of usages of the key. This only makes sense for symmetric cryptographic systems, where both participants can count. Of course they have to permanently keep track of the expired keys.

b) Explicit revocation of keys - either by the key owner or by the certification authority for the key. This can be realized by a digitally signed message that contains, along with the revoked public key or certificate, the exact point in time of the revocation and the revoker's identity. (If the key owner is the revoker, this authentication is the last action undertaken with the associated secret key.)

Explicit revocation of keys of course only works for participants who can't be cut off by an altering attacker. This additional limitation, in contrast to 1., is the price for higher flexibility.

However, there are protocols that enable the participants to detect their isolation very quickly: they send each other messages at intervals agreed on in advance. All messages are numbered and authenticated (including the numbering). If no messages have arrived, or if some of them are missing, the participants can conclude that somebody has tried to isolate them.

c) We combine 1. and 2. to have the advantages of both worlds.

d) Prior to the usage of the key, the owner is asked whether it's still valid. Disadvantage: If the participant isn't always online, large delays may occur when waiting for the answer without a timeout.

e) A variant of 4., waiting only a short amount of time. Since an altering attacker can suppress the answer for a short time without being noticed, this should be combined with variant 3.

g) What needs to be taken care of while exchanging public keys?
→ The public keys need to be authentic; therefore the receiver needs to be able to verify their authenticity.

Onto whom can the task of a trustworthy public key register be distributed?
→ The functionality of the trustworthy public key register can be distributed onto some of the other participants.

When does the key exchange take place?
→ Instead of letting public key registers (one or more; cp. part c) certify the association between your identity and your public key by their digital signatures, the participant has this association certified by all participants he knows (and with whom he can safely communicate, for example outside the system to be secured). Those certificates are passed along with the public key. The receiver of such a publicly certified key then decides whether he wants to trust its authenticity. For this he can use certificates with known and authentic testing keys. To use the other certificates, he needs certificates for their testing keys, and so on. Usually a public key t becomes the more trustworthy to participant T the more participants (called introducers in [Zimm 93]) have certified key t in a manner participant T can verify. On the other hand, a certificate will be the more trustworthy to T the fewer certificates T needs for its authentication.

What is to be done if not everybody trusts everybody equally?
→ Instead of considering all introducers equally trustworthy, you can assign them individual probabilities for their trustworthiness. Consequently you can also calculate a probability of trustworthiness for every certificate chain as well as for every multiply certified public key. The participant can then decide whether the probability of trustworthiness is sufficient for the application.

What is to be done if a participant discovers that his private key is known to somebody else?


→ If there is no central hierarchical key distribution, then there is no central hierarchical list of all invalid keys. So the poor participant now needs to send his signed message "My key is invalid from now on" to all other participants. If legal matters come into play, the receipt of this key-revocation message needs to be acknowledged by all, with date and time. (An obvious but wrong solution would be: everybody who wants to use a key must ask the owner whether the key is still valid. But then anybody who knows the secret key can give the correctly signed answer: "Of course my key is still valid. I guard my secrets!!!")

What shall a participant do if his secret key was destroyed?
→ Besides creating a new key pair and publishing the public key, he should tell everybody not to use the old key for new messages anymore. So that not just anybody can issue such a notice, this message should be signed - ideally with the very key that is now considered destroyed. Therefore, right after the generation of a key pair, the message "Please don't use my key t anymore!" should be the first thing to be signed. This message need not be kept as secret as the secret key, of which usually no copies exist; it may be stored in several copies. Often it is useful to distribute it with a threshold scheme (cp. §3.9.6) onto several friends.

What is to be done if a participant discovers that he certified a wrong association participant ↔ public key?
→ The participant revokes this certificate with a signed message. Regarding the distribution of this certificate-revocation message, the same applies as for the compromise of the secret key (cp. above).

How can the technique developed in this exercise part be combined with the one from exercise part c)?
→ This key exchange method can easily be combined with the methods from c) and d). Central hierarchical key registers and anarchic key distribution à la PGP/GPG support each other; the security is higher than with only one of the two systems.

3-4a Which kind of trust is needed for which function of a TrustCenter?

Blind trust in an entity is always necessary if it can carry out observing attacks. If an entity can only carry out altering attacks, you can shape the procedures so that the attacked participant notices the attack; conversely, if the participant recognizes no attack, no attack is currently taking place.

• Key generation requires blind trust, since the generator can always store a copy locally.

• Key certification only requires checkable trust, since with a wrong key certification the affected participant would notice it during the first conflict. With corresponding procedures (cp. §3.1.1.2) he can even prove this attack.


• Directory services only require checkable trust, since the providing of false keys can be proven if answers to requests to the directory service are digitally signed.

• Locking services only require checkable trust (cp. directory services).

• Time stamp services only require checkable trust if a suitably distributed log file exists, since that prevents the service from back-dating (cp. [BaHS 92, HaSt 91]).

These ruminations allow the conclusion that, if security functions are bundled into one entity, you should bundle only key certification, directory service, locking service and time stamp service, but never the key generation (cp. exercise 1-2).

3-5 Hybrid Encryption

Case 1:

a) Generation of a key k for a fast symmetric concealation system like AES. Encryption of k with the public cipher key cE of the receiver E. Encryption of the file with k. Transmission: cE(k), k(file).

The receiver decrypts the first part with his decipher key dE and gains k. He uses k to decrypt the file.
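
A minimal sketch of a) in Python using the pyca/cryptography package (our choice of concrete algorithms: RSA-OAEP plays the role of cE/dE, AES-GCM the role of k; note that AES-GCM also authenticates, which a) does not even require):

    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    dE = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # receiver E
    cE = dE.public_key()

    def send(file_bytes):
        k = AESGCM.generate_key(bit_length=128)          # fresh symmetric key k
        nonce = os.urandom(12)
        return cE.encrypt(k, OAEP), nonce, AESGCM(k).encrypt(nonce, file_bytes, None)

    def receive(enc_k, nonce, enc_file):
        k = dE.decrypt(enc_k, OAEP)                      # recover k with dE
        return AESGCM(k).decrypt(nonce, enc_file, None)

    assert receive(*send(b"some file")) == b"some file"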

b) Generation of a key k for a fast symmetric cryptographic system that provides concealation as well as authentication, like DES running in PCBC mode (cp. §3.8.2.5; if you'd like to use two pure cryptographic systems, you must replace k by k1, k2 and continue as in exercise 3-13). Signing of k (maybe with additional date and time, etc.) with your own secret signing key sS (S as in sender), and then encryption of the signed k with the public cipher key cE of the receiver E (to conceal first and then sign would work too). Encryption and authentication of the file with k. Transmission: cE(k, sS(k)), k(file). (If the symmetric concealation system is less complex when the signature sS(k) is not concealed with it, the message format cE(k), k(sS(k), file) can be better.)

Receiver E decrypts the first part with his decipher key dE, gains the signed k, and checks the signature with the testing key tS. If the signature is valid, E uses the key k to decrypt the file and to check its authenticity. (For pure cryptographic systems: let k1 be the key of a symmetric concealation system and k2 the key of a symmetric authentication system. Replacing k by k1, k2 you send: cE(k1, k2, sS(k1, k2)), k1(file, k2(file)). The authentication of k2 is not necessary: cE(k1, k2, sS(k1)), k1(file, k2(file)) will do fine too.)

c) Like b), but the symmetric cryptographic system only needs to perform the authentication.

Case 2:


Here S can proceed as in case 1, creating a key k for every file, encrypting it asymmetrically, and so on. E, too, must proceed as in case 1. Alternatively, S can reuse the key k created for the first file transmission - and tell E about it. This can save a lot of complexity: no additional key generation, key distribution, key transmission and key decryption - and for b) and c) no key signing, no signature transmission and no signature validation.

Case 3:

Here E can proceed as in case 1, creating a key k for every file, encrypting it asymmetrically for S, and so on. If E wants to reuse the key k from the first file transmission as S did in case 2, caution is needed: E didn't create k on his own and needs to make sure that k is only known to the intended receiver of his file. For b) and c) this is assured by the digital signature from S. For a), E can't be sure who generated k, since there k isn't authenticated at all, especially not by S. If in a) E didn't create a new symmetric key k′ but used the old k, an attacker A who had generated the original k and sent the first file to E could intercept the message k(file from E addressed to S) and decrypt it with k.

And what do we learn? A protocol that creates a random key can't be used (at least not without further deliberation) with an existing key. This accords with the fundamental proposition of cryptography: all keys and initial values should be picked randomly and independently of each other - unless they need to be picked dependently, or the picking is thoroughly analyzed and comes with a proof of security.

3-6 Real-world situations (for the classification of cryptographic systems)

a) A symmetric authentication system is sufficient for partners who trust each other to spot alterations. A key exchange between old friends is no problem. The system is neither very security-critical nor time-critical. To keep the rental options and preferences of the crypto folk secret from curious renters who are employed as communication technicians (or whose friends are), you can additionally use symmetric concealation. So picking a cryptographic system for the Pfitzmann family is like being spoilt for choice.

b) Symmetric concealation system, direct key exchange, security-critical (cryptographers!), time is uncritical. David will use a one-time pad or a pseudo one-time pad - if it's only about stealing ideas. If David is also interested in having his data transmitted unaltered to his trustworthy cryptographer, an additional authentication is necessary.

Since David has probably already met all cryptographers who are trustworthy to him, a direct key exchange should be no problem. But the question arises whether David wants to encrypt his messages for each cryptographer separately - which would be necessary with canonical key distribution (every cryptographer with another key). If David occasionally wants to send ideas to a certain group,


then he could give everybody in the group the same key - this would save quite some encryption effort. Regarding confidentiality this wouldn't be a problem, since the plain text would be known to everybody in the group anyway. However, if David wants to exclude somebody from the group, he must send everybody else in the group a new key. Such a group key is also problematic regarding authentication, because now everybody in the group could authenticate a new idea with the symmetric authentication system and therefore tarnish David's reputation of having new and good ideas. With pairwise keys this couldn't happen.

c) A symmetric authentication system is sufficient, with actually no need for a key exchange. But time and space are very critical. You could use DES (running in CBCauth mode, cp. §3.8.2.2).

d) The computer freak should use a digital signature system. The key exchange has to take place via a central authority or by phone book, so that the seller knows the right testing key of the buyer and the association buyer ↔ testing key is legally binding. Since the application is time-uncritical but security-critical, the computer freak should use GMR.

e) Concealation systems for keeping business secrets (maybe even authentication (cp. below), but symmetric authentication is sufficient because it's not a legally binding contract). The key exchange should run over a public key register, so an asymmetric concealation system is needed, like RSA (or a hybrid, cp. §3.1.4, like RSA combined with DES). If the request is done by FAX with 9600 bit/s or by ISDN with 64 kbit/s, the application is not time-critical. If the company uses broadband communication, then the usage of a hybrid concealation system is necessary (cp. §3.1.4). The security of RSA and DES should be more than sufficient for this not very security-critical application.

If no authentication is used at all, the following attack can thwart the secrecy of business secrets even if a perfect concealation system is used: To gain knowledge about the prices of the competitors, the attacker just sends them a corresponding request (including the key for the concealation system). If only network addresses are used as "sender" and therefore as authentication, the attacker just has to intercept the reply message so the real receiver doesn't become suspicious. If the sender address is generated by the network, the attacker has to adapt it to the connection of the attacked one.

For the reply you'll need asymmetric authentication besides the concealation, to make the offer legally binding (cp. d)).

3-7 Vernam’s Cipher

a) The key 00111001 may be exchanged. The plain text 10001110 is supposed to be encrypted. Using addition modulo 2, the encrypter obtains the cipher text 10110111. By adding key and cipher text modulo 2, the decrypter recovers the plain text. Written down as a calculation:


encrypter:
  plain text   10001110
⊕ key          00111001
  cipher text  10110111

decrypter:
  cipher text  10110111
⊕ key          00111001
  plain text   10001110
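
The same calculation as a short Python sketch (our illustration, not part of the script):

    def vernam(bits: str, key: str) -> str:
        # Bitwise addition modulo 2 (XOR); encryption and decryption
        # are the same operation.
        assert len(bits) == len(key)
        return ''.join(str(int(b) ^ int(k)) for b, k in zip(bits, key))

    assert vernam('10001110', '00111001') == '10110111'   # encrypt
    assert vernam('10110111', '00111001') == '10001110'   # decrypt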

b) Let plain text x, cipher text S and key k be elements of the group. According to the definition of the Vernam cipher the following holds: the encrypter calculates x + k =: S; the decrypter calculates S − k =: x (addition of the inverse element is written as subtraction here). If an attacker finds out S, then for each plain text x′ exactly one key k′ exists, because in each group the equation x′ + k′ = S can be solved for k′: k′ = −x′ + S. Since from the attacker's view all keys have the same probability, he learns nothing about the plain text when he knows the cipher text S.
Annotations:

The proof uses no commutativity, thus it is valid for arbitrary groups. Check whether yours is possibly valid only for Abelian groups.

The proof ignores that, according to the communication protocol, the attacker does get to know something about the plain text: usually its length, or at least its maximum length. The former can be avoided by padding messages up to a standard length and/or by adding meaningless messages, cp. §5.4.3. Best is to transmit a permanent stream of signs, to ensure that an attacker can't recognize when no messages are being transmitted, cp. §5.2.1.1.

c) The result is unambiguous only if the encoding operation is assumed to be agreed upon. (This agreement can be considered part of the definition of the concealation system or of the key.) If addition modulo 2 is assumed as the operation, the following plain text arises:

  cipher text  01011101
⊕ key          10011011
  plain text   11000110

If, however, addition modulo 16 is assumed as the operation, another plain text arises:

  cipher text  01011101
−(16) key      10011011
  plain text   11000010

if it is calculated character-wise (4-bit-wise) and written from left to right; if the string is read in the opposite direction instead, the following arises:

  cipher text  10111010
−(16) key      11011001
  plain text   11100001

written using our convention: 10000111.

d) Calculating modulo 2: Since each bit is en- and decrypted separately, the following holds: invert the third bit from the right, i.e. transmit 010111010101110101011001.


(Modulo any number, a change of the cipher text from S to S′ causes a change of the plain text by the same difference d := S′ − S, because S′ is decrypted as x′ = S′ − k = (S + d) − k = (S − k) + d = x + d.)

The following calculations assume that the string is written character-wise from left to right.

Calculation modulo 4: Since pairs of bits are en- resp. decrypted, it has to be decided what happens to the third-last and fourth-last bit. If 01 modulo 4 is added to them, the third-last bit is inverted for sure, but the fourth-last bit is inverted as well whenever a carry occurs. If the attacker does not know the third-last plain text bit, he cannot deterministically prevent the change of the fourth-last plain text bit. Because the exercise said nothing about the fourth-last bit, one can accept the following as a solution: the attacker transmits 010111010101110101010001. Should the change of the fourth-last bit have to be avoided deterministically with unknown third-last plain text bit, the exercise is unsolvable for modulo 4.

Calculation modulo 8: Since groups of 3 bits are en- resp. decrypted, it has to be decided what happens to the three last bits. If 100 modulo 8 is added to them, the third-last bit is inverted for sure. Thus the attacker transmits 010111010101110101010001.

Calculation modulo 16: Since groups of 4 bits are en- resp. decrypted, it has to be decided what happens to the four last bits. If 0100 modulo 16 is added to them, the third-last bit is inverted for sure, but the fourth-last as well whenever a carry occurs. If the attacker does not know the third-last plain text bit, he cannot deterministically prevent the change of the fourth-last plain text bit. Because the exercise said nothing about the fourth-last bit, one can accept the following as a solution: the attacker transmits 010111010101110101010001. Should the change of the fourth-last bit have to be avoided deterministically with unknown third-last plain text bit, the exercise is unsolvable for modulo 16.

e) Because every section of the key is used by the attacked person only once, and all sections are generated stochastically independently of each other, an attacker can learn nothing from earlier reactions about future reactions (in principle such an attack exists, of course, it is just of no use). The same holds for active (non-adaptive) attacks.
Note, however, that both sorts of active attacks regarding messages can be successful: by an active attack (chosen ciphertext-plaintext attack) on the receiver, an attacker finds out the value of the currently used key section. Then he is able to decrypt the next messages encrypted by the sender with it. The lesson: the partner who decrypts must not output decrypted messages and thereby enable such an active attack. One may deviate from this austerity if the received message is authenticated correctly (for consistency, with an information-theoretically secure authentication code).


An additional annotation: With the Vernam cipher, keys (or at least key sections) must be used by one partner only for encryption and by the other partner only for decryption. It is by no means enough that two partners have a common key and whoever wants to send a message to the other just uses the next key section as "key". Otherwise it can happen that both partners send messages at the same time with the same key section, so that an attacker finds out the difference of their plain texts.

f) Since this exercise is about properties of the en- and decryption function itself, i.e. independent of single keys, the abbreviated notation introduced in §3.1.1.1 is an awkward choice (the functions themselves are just not present in that notation), so we switch to the more detailed notation likewise described in the mentioned section. First, definition (1) is written down in the more detailed notation:

Definition of information-theoretical security for the case that all keys are chosen with the same probability:

∀S ∈ S ∃const ∈ N ∀x ∈ X : |{k ∈ K | enc(k, x) = S}| = const    (1)

We show: at least if, as with all Vernam ciphers, key space = plaintext space = ciphertext space, the following is necessary and sufficient: the function enc must run through the whole co-domain when one of its two input parameters is held fixed and the other is varied.

1 (holding k fixed) For fixed k, each ciphertext must be decryptable uniquely, thus occur for only one plaintext (i.e. enc(k, •) is injective). Since there are as many ciphertexts as plaintexts, this is equivalent to each ciphertext occurring at least once (i.e. enc(k, •) is surjective).

2a (holding x fixed; running through the co-domain is necessary) From definition (1) it follows that behind each actually appearing ciphertext S, each plaintext x is possible. From 1. it follows that each value S of the ciphertext space actually occurs. Formally: ∀S ∀x ∃k : enc(k, x) = S. This is equivalent to ∀x ∀S ∃k : enc(k, x) = S, i.e. for fixed x all S can occur.

2b (holding x fixed; running through is sufficient) If, vice versa, all S occur for fixed x, security follows: given S, for each x there exists a k such that enc(k, x) = S. To show that all x also have the same probability, we again need that, because the spaces have the same size, enc(k, x) = S holds for only one k.

For the binary case of the Vernam cipher we look for 2×2 function tables in which every row and every column contains a 0 and a 1. There are exactly two; they represent the functions XOR and NOT XOR. (That there are exactly two can be seen, for example, this way: if one specifies the value for one combination of the two input parameters, the function is determined, because the result of the function must change each time one of the parameter values is varied.


Since one has 2 possibilities for the first function value, it follows that there are exactly two such functions.)

General form (ȳ denotes the inverse of y):

    S = k(x)    x=0   x=1
    k = 0        y     ȳ
    k = 1        ȳ     y

For y = 0: XOR

    S = k(x)    x=0   x=1
    k = 0        0     1
    k = 1        1     0

For y = 1: NOT XOR

    S = k(x)    x=0   x=1
    k = 0        1     0
    k = 1        0     1

g) Dear younger friend! Firstly, if it is not intuitively clear to you, you can reread in historic sources [Sha1 49] that natural language consists of more than one half redundancy and is therefore no replacement for a randomly chosen key, i.e. a key of maximal entropy. Therefore, one can understand the sense of longer ciphertexts encrypted this way without knowing book, page and line. Indeed, one can even understand the sense of the book passage used for encryption and, if necessary, determine it. (An exact procedure for the entire breaking by means of a pure ciphertext attack is outlined in [Hors 85, page 107 f.]. Entire breaking by means of a plaintext-ciphertext attack is essentially easier.)

Secondly, if you have not done so recently, you should inform yourself about the current and expected future computing power and storage capacity of a PC. Then you will also fear that one will soon be able to store the whole world literature on some optical disks and then automatically decrypt, by exhaustive search in some hours, each ciphertext that has been encrypted using your procedure and contains mechanically recognizable redundancy. (A very defensive plausibility estimate: World literature may comprise 10^8 volumes averaging 1 MByte. Thus 10^8/600 disks are needed for the 5.25"-optical disks with 600 MByte each that are usual in 1990, i.e. circa 160,000 optical disks. To search them from each bit position needs 10^8 ∗ 2^20 ∗ 8 = 8 ∗ 10^14 steps, where each step might last 10^−6 s. Thus the entire search would last 8 ∗ 10^8 s, i.e. roughly 25 years, when using one computer with today's performance. If one starts with the most probable works of world literature, the search will end a lot faster.)

Finally a small comfort: You are not the first one to make such a mistake. Besides, by publishing your procedure you had the luck to find a friend who points out your mistake to you. Many of your colleagues, who either want to learn nothing more or whose principals want to protect copyrights, keep their procedures secret. You can imagine what often happens then.

h) No, because one can think up an arbitrary different plaintext of the same length and then calculate the key so that it fits this plaintext and the existing ciphertext. Thus the prosecution authority gets no meaningful information.
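
The argument in h) is a one-liner in code: for any claimed plaintext there is a key that "explains" the seized ciphertext (names and values ours, for illustration):

    def xor_bits(a: str, b: str) -> str:
        return ''.join(str(int(x) ^ int(y)) for x, y in zip(a, b))

    S = '10110111'                      # the seized ciphertext
    claimed = '01010101'                # any plaintext of the same length
    k_claimed = xor_bits(claimed, S)    # a key under which 'claimed' fits S
    assert xor_bits(claimed, k_claimed) == S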

i) The exchanged storage medium is used for two-way communication, so for each direction only half of the storage capacity can be used. This yields:


                                     text              speech              high-resolution
                                     communication     communication       full-video
                                     (typing speed     (normal ISDN        communication
                                     40 bit/s)         coding: 64 kbit/s)  (140 Mbit/s)

    3.5" floppy (1.4 MByte)            1.62 days          87.5 s              0.04 s
    CD-R (700 MByte)                    850 days          12.7 h                21 s
    DVD-RAM one-sided (4.7 GByte)      5440 days         3.4 days              134 s
    Dual-Layer DVD double-sided
    (17 GByte)                        19676 days        12.3 days              468 s
    Blu-ray ROM (50 GByte)            57870 days        36.2 days             1429 s

Exchanging a 1.4 MByte floppy suffices for text communication "for life" for most applications (since you will like only a few people so much that you type 1.62 days non-stop for them). Correspondingly, exchanging a 700 MByte optical disk suffices for many speech communication applications (even if many people talk more perseveringly than they type). For uncompressed high-resolution full-video communication the current mass storage media are still too small, but they are absolutely useful for compressed full-video communication of low resolution as intended for ISDN (an additional channel of 64 kbit/s for the picture): to talk with and watch one's telephone partner information-theoretically secure with the help of a DVD-RAM for 3.4 days might also suffice here "for life". For fleeting acquaintances a CD-R suffices - and for intense tele-relations a Dual-Layer DVD will suffice in future.
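
The table entries are simply the available key bits (half the medium) divided by the data rate; a Python spot check of the DVD-RAM row (decimal GBytes assumed, as the table's numbers indicate):

    GBYTE = 8 * 10**9                     # bits per (decimal) GByte
    half = 4.7 * GBYTE / 2                # one direction of a one-sided DVD-RAM
    print(half / 40 / 86400)              # ≈ 5440 days of text at 40 bit/s
    print(half / 64_000 / 86400)          # ≈ 3.4 days of ISDN speech at 64 kbit/s
    print(half / 140e6)                   # ≈ 134 s of full video at 140 Mbit/s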

Another question is whether reading access to the media is fast enough to access the key in real time. For text and speech communication this is no problem. For high-resolution full-video communication with 140 Mbit/s this is not yet possible for any of the media - another reason why full-video signals are compressed not only losslessly, but even lossily. The former gains ca. a factor of 4; with the latter it depends on how much quality loss one is willing to accept. If you still have your pocket calculator at hand, you can recalculate the numbers for high-resolution full-video communication with the typical values of 34 Mbit/s for lossless and 2 Mbit/s for lossy compression. What arises? DVD-RAM and DVD Dual Layer are absolutely suitable for high-resolution full-video communication - for the storage of the one-time pad as well as for the storage of the full-video


signal. No miracle: key, ciphertext and plaintext are of the same length - and the media were designed for the latter.

3-7a Symmetrically encrypted message digest = MAC?

a) The recipe is completely insecure, as the example mentioned in the exercise shows:

From the intercepted message everybody can calculate the check digit by means of the publicly known procedure. From the check digit and its likewise intercepted encryption with the Vernam cipher, everybody can determine the current key section by calculating the difference. For any message (freely choosable by the attacker) the check digit can then be calculated by means of the publicly known procedure and encrypted with the current key section to form an "authentic" MAC.

This passive attack achieves a selective breaking, which is not acceptable, as already mentioned in §3.1.3.1.

b) For the usual collision-resistant hash functions this is, by current knowledge, a secure construction. But these are not only collision-resistant; they additionally have a property that is essential for the security of the construction applied with TLS:

Their functional value betrays nothing about the input.

To make clear why this is essential, we assume the contrary: the collision-resistant hash function may output 80 bits of its input as part of its functional value. If the functional values as well as the inputs are long enough, such a hash function can still be perfectly collision-resistant. However, used within the construction applied with TLS, it shortens the effective key length by 80 bits if the key stands "unfortunately" at the corresponding place of the input. If a key length of 128 bits is used, which usually is more than enough, then an attacker has to try at most 2^48 times to determine the key. Today nearly every relevant attacker is able to do this.

c) Yes: hash (<key>, <byte number>, <byte>) and transfer the functional values (if necessary together with <byte number>, if the receiver does not count along or the communication medium is unreliable). To decrypt a functional value to the plaintext byte, the receiver merely has to try out all values of the byte, hashing (<key>, <byte number>, <byte value>) and comparing the result with the transmitted functional value. Nobody apart from sender and receiver can do this, because nobody else knows the key. The numbering of the bytes prevents equal bytes from resulting in equal functional values; a synchronous stream cipher is produced this way, cp. §3.8.2. In the example it is assumed that a byte is a meaningful compromise between the number of trials for decryption by the receiver and the message expansion caused by the essentially longer values of the hash function (typically 160 bits) and byte number (typically


64 bits). It is clear that anything from single bits up to long words of 4 bytes can be used instead of bytes.
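
A minimal sketch of this construction (SHA-256 stands in for the collision-resistant hash function; all names are ours):

    import hashlib

    def encrypt_byte(key: bytes, index: int, byte: int) -> bytes:
        # "Encrypt" one byte as hash(<key>, <byte number>, <byte>).
        return hashlib.sha256(key + index.to_bytes(8, 'big') + bytes([byte])).digest()

    def decrypt_byte(key: bytes, index: int, digest: bytes) -> int:
        # The receiver tries all 256 byte values and compares digests.
        for candidate in range(256):
            if encrypt_byte(key, index, candidate) == digest:
                return candidate
        raise ValueError("no byte matches - wrong key or corrupted value")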

3-8 Authentication codes

a) Let the key 00111001 be exchanged. The plaintext HTTT is to be secured. Using the authentication code, H,0 T,1 T,0 T,1 is generated as "ciphertext" and transmitted.

b) If an attacker intercepts a combination of H or T together with its MAC, two (of four) keys remain indistinguishable for him for good, because H resp. T occurs exactly twice in each column of the table in which it occurs at all. Exactly these two key values differ in whether the MAC has to have the value 0 or 1 for the other value of "throwing the coin". (This claim can easily be verified for this simple authentication code by considering all possibilities.) Therefore the attacker has no better strategy than guessing, so the receiver will recognize the falsification with a probability of 0.5. An easier representation: if k = k1k2, then k(H) = k1 and k(T) = k2. (This is illustrated in the first table below, taken from the exercise, where k1 is printed bold.) Thus the authentication code can also be pictured as in the second table, where the value of the MAC is given in dependence on key and text. Obviously the current MAC for H reveals nothing about the MAC for T.

Text with MAC:

                 H,0   H,1   T,0   T,1
    keys  0,0     H     -     T     -
          0,1     H     -     -     T
          1,0     -     H     T     -
          1,1     -     H     -     T

Value of the MAC:

                 Text
                  H     T
    keys  0,0     0     0
          0,1     0     1
          1,0     1     0
          1,1     1     1

c) No, because the receiver uses his key sections in a fixed order.

d) Example: Let the key 00111001 be exchanged. The plaintext HTTT is to be secured. Then the ciphertext 00110100 is generated and transmitted.
Regarding authentication, the argumentation above applies analogously: for both key values that are still possible after the attacker's observation, he would have to send different ciphertexts to achieve the intended change of the plaintext. Thus again he has no better chance of success than guessing right, with a probability of 0.5.
If the attacker executes an altering attack, however, he can in any case recognize from the reaction of the attacked person (accepts the faked message or not) which of the two key values still possible after his first observation is present, and thereby determine the key value exactly. From this value and the observed ciphertext, the plaintext follows.


This is explained with an example: The attacker intercepts the ciphertext 00. Then he guesses from the two possibilities the value 00, thus plaintext H. He changes the plaintext to T by forwarding the ciphertext 10. If the receiver accepts this, his guess was right, thus the original plaintext was really H. If the receiver does not accept, the key value was 01, and therewith the original plaintext was T.

e) An additional key bit is used. If it is 0, everything stays the same. If it is 1, all T are replaced by H in the table and vice versa. Even with the optimal information-gaining attack on the key described above, the attacker does not learn the value of the additional key bit. This key bit acts as a Vernam cipher, so the encryption is still perfect even after this attack.
Now, unfortunately, the combined system is no more efficient than an authentication code plus a Vernam cipher, provided the payload bit is encrypted first and authenticated afterwards. If one does it the other way round, one needs 4 key bits instead of 3.

f) It is to be shown that, given the observation (m, MAC), for each message m′ ≠ m and each possible MAC′ (i.e. MAC′ ∈ K) there is exactly one compatible key (a, b). The system of equations

MAC = a + b ∗ m
MAC′ = a + b ∗ m′

with the unknown quantities a, b has exactly one solution if and only if m ≠ m′; and m ≠ m′ is exactly the aim of the attacker.

For those who do not remember linear algebra, here is a somewhat more exact explanation: a system of equations C = M ∗ X with matrix M and vectors C and X is, for given C and M, uniquely solvable exactly if M is regular. M is regular if and only if it can be transformed by row and column operations into a triangular matrix (i.e. all diagonal elements are ≠ 0 and either all values beneath the diagonal or all values above the diagonal are 0). This is shown briefly for our matrix:

( 1  m  )
( 1  m′ )

Subtracting the first row from the second yields:

( 1  m      )
( 0  m′ − m )

This is a triangular matrix exactly if m ≠ m′. To exhibit a subtle proof error, an older proof is shown in the following:

MAC := a + b ∗ m  ⇔  a := MAC − b ∗ m


Thus b can be chosen arbitrarily, because the equation can then be satisfied by choosing a; thus the attacker learns nothing about b. To authenticate another message m′ he would have to know b in order to correct MAC to MAC′ {because b ∗ m changes its value arbitrarily and uniformly distributed when m is varied}.

Without the addition in braces, the "proof" would make the authentication code MAC := a + b ∗ m² a nice example to prove the opposite: the "proof" applies here too, but here the message m can be replaced by −m without having to change the MAC.
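
A sketch of such a one-time MAC over GF(P) in Python (parameters ours; the key (a, b) must be used for one message only):

    P = 2**61 - 1                        # public prime; messages live in GF(P)

    def one_time_mac(a: int, b: int, m: int) -> int:
        # MAC = a + b*m mod P. After one observation (m, MAC), every
        # forged pair (m', MAC') with m' != m is consistent with exactly
        # one key, so the attacker can only guess.
        return (a + b * m) % P

    tag = one_time_mac(17, 42, 5)        # authenticate message m = 5 once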

3-9 Mathematical secrets

a) It is not completely correct, because the prime numbers are no longer chosen with the same probability: those that come after long gaps in the sequence of prime numbers are chosen with higher probability, and those that are only two bigger than another prime number are chosen only if they were hit directly by the initial random number.

For specialists: At least under Cramér's conjecture the procedure is nevertheless secure. This conjecture (which I know from [GoKi 86]) implies π(x + log² x) − π(x) > 0. This means that one gets the prime numbers after the longest gaps only up to a factor of log² x ≈ l² more frequently than the above-mentioned prime twins. For the products n, probability differences up to l⁴ arise. If there were a successful factorization algorithm for this distribution, it would be less successful by a factor of at most l⁴ for the correct distribution. Therewith its success probability decreases no faster than polynomially, which contradicts the factorization assumption. (Without Cramér's conjecture one arrives at the same result artificially by not continuing the search in 2-steps arbitrarily long, but using a stopping criterion, e.g. stopping after l² 2-steps.)

b) One can expect that both prime numbers are very close together, thus both are about √n. Therefore, one can search the numbers in the vicinity of √n (i.e. divide n by each of them).
More exactly: In most cases, e.g., q ≤ p + 16l holds (prime number theorem with some leeway, because the distance to the next prime number averages l ∗ ln(2); the particularly beautiful number 16 was chosen to ease the following calculation).
In these cases p² < n < p(p + 16l) < (p + 8l)², i.e. p < √n < p + 8l, thus √n − 8l < p < √n. Thus one only has to search these 8l numbers and has then factorized with significant, even overwhelming probability.
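
A tiny Python check of this attack, under the stated assumption that p and q lie close to √n (names ours):

    from math import isqrt

    def factor_close_primes(n):
        # Trial division downward from isqrt(n) finds p quickly
        # when p and q are close together (cp. the 8l bound above).
        for p in range(isqrt(n), 1, -1):
            if n % p == 0:
                return p, n // p

    assert factor_close_primes(97 * 101) == (97, 101)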

c) 19 = (10011)2. Mod 101 the following holds:

3^(1)2 = 3;  3^(10)2 = 3² = 9;  3^(100)2 = 9² = 81 ≡ −20;  3^(1001)2 = (−20)² ∗ 3 = 1200 ≡ −12;  3^(10011)2 = (−12)² ∗ 3 = 144 ∗ 3 ≡ 43 ∗ 3 ≡ 129 ≡ 28.
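
The same computation as a generic square-and-multiply routine in Python (our illustration):

    def square_and_multiply(base, exponent, modulus):
        # Left-to-right square-and-multiply along the binary digits
        # of the exponent, exactly as in the hand calculation above.
        result = 1
        for bit in bin(exponent)[2:]:            # 19 -> '10011'
            result = (result * result) % modulus
            if bit == '1':
                result = (result * base) % modulus
        return result

    assert square_and_multiply(3, 19, 101) == 28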


d) 77 = 7 ∗ 11; φ(77) = 6 ∗ 10 = 60, and 3 ∈ Z*77 because gcd(3, 77) = 1. Thus the Euler-Fermat theorem (the generalization of Fermat's little theorem) says: 3^60 ≡ 1 mod 77.
Thus: 3^123 ≡ (3^60)² ∗ 3³ ≡ 1² ∗ 27 ≡ 27.

e) Euclid’s algorithm:

101 = 4 ∗ 24 + 5;

24 = 4 ∗ 5 + 4;

5 = 1 ∗ 4 + 1;

Backwards:

1 = 5− 1 ∗ 4 (Now dividing 4 by 5 and substituting 24)

= 5− 1 ∗ (24− 4 ∗ 5) = −24 + 5 ∗ 5 (now substituting 5 by 24 and 101)

= −24 + 5 ∗ (101− 4 ∗ 24) = 5 ∗ 101− 21 ∗ 24(Probe: 5 ∗ 101 = 505,−21 ∗ 24 = −504)

Thus: (24)−1 ≡ −21 ≡ 80.
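
The extended Euclidean algorithm in a few lines of Python (our illustration):

    def extended_gcd(a, b):
        # Returns (g, u, v) with u*a + v*b == g == gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, u, v = extended_gcd(b, a % b)
        return g, v, u - (a // b) * v

    g, u, v = extended_gcd(24, 101)
    assert g == 1 and u % 101 == 80      # 24^-1 ≡ 80 mod 101, as above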

f) There is no inverse for 21 mod 77, since gcd(21, 77) = 7 ≠ 1.

g) First we apply the extended Euclidean algorithm in order to find u and v with u ∗ 7 + v ∗ 11 = 1 (even if one perhaps sees this immediately: −21 + 22 = 1):

11 = 1 ∗ 7 + 4;
7 = 1 ∗ 4 + 3;
4 = 1 ∗ 3 + 1.

Backwards:

1 = 4 − 1 ∗ 3
  = 4 − 1 ∗ (7 − 1 ∗ 4) = −7 + 2 ∗ 4
  = −7 + 2 ∗ (11 − 1 ∗ 7) = 2 ∗ 11 − 3 ∗ 7.

Thus the "base vectors" for p = 7 and q = 11 are: u∗p = −21 ≡ 56 and v∗q = 22. With the formula y = u∗p ∗ yq + v∗q ∗ yp we get: y = −21 ∗ 3 + 22 ∗ 2 = (22 − 21) ∗ 2 − 21 = −19 ≡ 58.
(Check: 58 ≡ 2 mod 7 ∧ 58 ≡ 3 mod 11.)

For z we can insert into the same equation (u∗p and v∗q have to be calculated only once). We simplify first: 6 ≡ −1 mod 7 and 10 ≡ −1 mod 11. Thus: z = −21 ∗ (−1) + 22 ∗ (−1) = −1 ≡ 76 mod 77.
We could have had it even simpler: by formula (∗) from §3.4.1.3, z ≡ −1 mod 7 ∧ z ≡ −1 mod 11 ⇔ z ≡ −1 mod 77.
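
The same "base vector" construction in Python (our illustration; pow(p, -1, q) computes the modular inverse):

    def crt(yp, yq, p, q):
        # The unique y mod p*q with y ≡ yp (mod p) and y ≡ yq (mod q).
        # u*p is ≡ (0 mod p, 1 mod q) and v*q is ≡ (1 mod p, 0 mod q);
        # for p = 7, q = 11 these "base vectors" are 56 ≡ -21 and 22.
        up = p * pow(p, -1, q)
        vq = q * pow(q, -1, p)
        return (up * yq + vq * yp) % (p * q)

    assert crt(2, 3, 7, 11) == 58        # y from the calculation above
    assert crt(-1, -1, 7, 11) == 76      # z ≡ -1 mod 77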


h) Since everything is modulo prime numbers, one can apply Euler's criterion:

(13/19) ≡ 13^((19−1)/2) ≡ (−6)^9 mod 19; (−6)^2 ≡ 36 ≡ −2 ⇒ (−6)^4 ≡ 4; (−6)^8 ≡ 16 ≡ −3; (−6)^9 ≡ −3 ∗ (−6) ≡ 18 ≡ −1: no square.

(2/23) ≡ 2^11 ≡ 2048 ≡ 1 mod 23: square.

(−1/89) ≡ (−1)^44 ≡ 1 mod 89: square.

Among the squares, the simple case p ≡ 3 mod 4 occurs only for p = 23. There, one can extract the root with just our knowledge of the algorithm: a root is

w = 2^((23+1)/4) = 2^6 = 64 ≡ −5 ≡ 18 mod 23   (Check: (−5)² ≡ 25 ≡ 2.)

(Thus the other root is +5.)

i) It is 58 ≡ 2 mod 7 ∧ 58 ≡ 3 mod 11. For p = 7, 11 we have p ≡ 3 mod 4, and

2^((7−1)/2) ≡ 8 ≡ 1 mod 7  and  3^((11−1)/2) ≡ 9 ∗ 9 ∗ 3 ≡ (−2) ∗ (−2) ∗ 3 ≡ 1 mod 11.

Thus 2 and 3 are quadratic residues mod 7 resp. mod 11, so we can use the simple root extraction algorithm to obtain roots w7 mod 7 and w11 mod 11:

w7 = 2^((7+1)/4) = 2^2 = 4 mod 7.
w11 = 3^((11+1)/4) = 3^3 = 27 ≡ 5 mod 11.

With the Chinese Remainder Theorem and the already calculated "base vectors" we get:

w = −21 ∗ w11 + 22 ∗ w7 = −21 ∗ 5 + 22 ∗ 4 = (22 − 21) ∗ 4 − 21 = −17 ≡ 60 mod 77.

(Check: (−17)² = 289 = 3 ∗ 77 + 58.)
Inserting the respective "other" root mod 7 or mod 11 yields the three other roots, e.g. w′ = −21 ∗ 5 + 22 ∗ (−4) = −105 − 88 = −193 ≡ 38 mod 77.
(Check: 38² = 1444 = 18 ∗ 77 + 58.)
Instead of continuing to calculate this way, we simply take the negatives of the two roots already known: w′′ = −60 ≡ 17, w′′′ = −38 ≡ 39.
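
Putting i) together in Python (our illustration, reusing the CRT idea from above):

    def sqrt_3mod4(x, p):
        # For p ≡ 3 (mod 4) and x a square mod p, x^((p+1)/4) is a root.
        assert p % 4 == 3
        return pow(x, (p + 1) // 4, p)

    def crt(yp, yq, p, q):
        # Chinese remaindering as in the previous sketch.
        return (p * pow(p, -1, q) * yq + q * pow(q, -1, p) * yp) % (p * q)

    w7, w11 = sqrt_3mod4(58 % 7, 7), sqrt_3mod4(58 % 11, 11)       # 4 and 5
    roots = sorted({crt(a, b, 7, 11) for a in (w7, 7 - w7)
                                     for b in (w11, 11 - w11)})
    assert roots == [17, 38, 39, 60]     # the four square roots of 58 mod 77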

j) One knows another, essentially different root of 4, viz. 2. Thus one obtains a factor p as gcd(2035 + 2, 10379). Euclidean algorithm:

10379 = 5 ∗ 2037 + 194;
2037 = 10 ∗ 194 + 97;
194 = 2 ∗ 97 + 0.


Thus p = 97. One receives the complete factorization as q = 10379 div 97 = 107; and p and q are prime.
(For fans: One starts with the factors, here p = 97 and q = 107. The roots of 4 mod p and mod q are each ±2. To obtain one of the two essentially different roots mod p∗q for which +2 mod p and −2 mod q, or −2 mod p and +2 mod q holds, one computes CRA(+2, −2) or CRA(−2, +2).)
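
The same factorization from two essentially different roots, checked in Python (our illustration):

    from math import gcd

    n = 10379
    assert pow(2035, 2, n) == 4 == pow(2, 2, n)   # two essentially different roots of 4
    p = gcd(2035 + 2, n)
    q = n // p
    assert (p, q) == (97, 107) and p * q == n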

k) Let wp := x^((p+1)/4) and wq := x^((q+1)/4).

Following §3.4.1.8 it is clear that x mod n has 4 roots w, w′, w′′, w′′′, which can be calculated from wp and wq as follows:

w = CRA(wp, wq)
w′ = CRA(wp, −wq)
w′′ = CRA(−wp, wq)
w′′′ = CRA(−wp, −wq)

Following §3.4.1.6, wp ∈ QRp and wq ∈ QRq, but −wp ∉ QRp and −wq ∉ QRq. Following §3.4.1.7:

x ∈ QRn ⇔ x ∈ QRp ∧ x ∈ QRq.

Thus w ∈ QRn and w′, w′′, w′′′ ∉ QRn.

l) If the factorization assumption were wrong, an attacker would be able to factorize a non-negligible part of the moduli n = p ∗ q. For such moduli he can apply Euler's criterion and calculate

(x/p) ≡ x^((p−1)/2) mod p  and  (x/q) ≡ x^((q−1)/2) mod q

for the x for which it must be decided whether it is a square or not. x is a square iff (x/p) = (x/q) = +1. Since evaluating these formulas costs at most cubic effort (in the length of n), thus in particular polynomial effort in the length of n, the attacker is able to decide for a non-negligible part of all moduli whether x is a square or not. This contradicts the quadratic remainder assumption.

To practise dealing with the formal definitions of the factorization and quadratic remainder assumptions, the non-negligible part is now written down formally:

There is a (probabilistic) polynomial algorithm F such that, for some polynomial Q: for each L there exists an l ≥ L such that, if p, q are chosen as independent random prime numbers of length l and n := p ∗ q, then

W(F(n) = (p, q)) > 1/Q(l).

From this algorithm F we construct another, likewise (probabilistic) polynomial algorithm R. R calls F. If F factorizes, then R calculates, as just described, (x/p) and (x/q). If both are +1, R outputs "true", otherwise "false". R is successful if and only if F is successful.


Thus: there is a (probabilistic) polynomial algorithm R such that, for the above-mentioned polynomial Q: for each L there exists an l ≥ L such that, if p, q are chosen as random prime numbers of length l, n := p ∗ q, and x is chosen uniformly among the residue classes with Jacobi symbol +1, then

W(R(n, x) = qr(n, x)) > 1/2 + 1/Q(l).

This contradicts the quadratic remainder assumption.

3-10 Which attacks are to be expected on which cryptographic systems?

a) The notary must use a digital signature system. Because with it he signs arbitrary messages submitted to him, apart from the time stamp (i.e. the appending of date and time to the message), the digital signature system must resist adaptive active attacks. The message format could be as follows: <submitted message>, date, time, s_notary(<submitted message>, date, time).

b) Symmetric or asymmetric systems of secrecy can be used.

The first must resist at least a known-plaintext attack, because the attacker can soon read the plaintext of the observed article in the newspaper. If the attacker does sensational things, he can at least partially achieve a chosen-plaintext attack, e.g. he could spray the text he wants to see encrypted on the city hall. (Both attacks are irrelevant for the asymmetric system, because there the attacker can encrypt arbitrary texts himself.)

Even if the newspaper printed arbitrary senseless articles (perhaps the correspondent accepts box-number notifications, too), the system of secrecy (symmetric or asymmetric) must resist even a chosen-ciphertext attack, because the attacker can soon read in the newspaper the plaintext of a ciphertext he sent.

3-11 How long are data to be protected? Dimensioning of cryptographic systems; reaction to wrong dimensioning; publishing the public key of a signature system or of an asymmetric system of secrecy?

a) The probability that the cryptographic system is broken within this time must be sufficiently small. If one uses an information-theoretically secure system (with the one-time pad or suitable authentication codes), the following 3 points are irrelevant. Otherwise attention must be paid to the following:

• Trivially, with the length of the time within which the protection should be effective, the time available to the attacker for cryptanalysis grows as well.

• The computation power per EUR will probably continue to increase exponentially, with a doubling time of ca. one year, until technological limits are reached at some point. This increasing computer power will also be available to the attacker.

• Besides, more efficient algorithms can be discovered in this time. (Because for all systems that are not information-theoretically secure there are not yet any useful lower bounds for the complexity of breaking.)

In both examples one has to overdimension the systems so that they are hopefully secure for at least 80 years.

To get a feeling for what this means, an example: we assume that the growth of computer power lasts that long. (However, it might already be limited by physical borders, cp. [Paul 78, DaPr 89, page 236 f., page 41].) Then it is as if the attacker had approximately 2^81 years of current computer power at his disposal. If one could not break the system faster than by exhaustive search of the key space, the key space would have to be chosen larger by a factor of 2^81, thus the keys about 81 bits longer, if they are coded optimally. This would not even be a lot.

It is different with systems based on factorization. In order to illustrate the progress of the factorization algorithms on the one hand and of computer power on the other, two example calculations are shown. We assume the complexity is and will be

L1990(n) = exp(sqrt(ln(n) ∗ ln ln(n))),

L1995(n) = exp(1.923 ∗ (ln(n) ∗ (ln ln(n))^2)^(1/3));

these are the complexities of the best factorization algorithms known in 1990 and in 1995, cp. [Schn 96, page 256].

In 1995 one used n of length 512 bits, and this was justified by the fact that such numbers could not be factorized within one year (which is different today, cp. [Riel 99]). Thus we should, from the perspective of 1995, choose n∗ for the future with

L(n∗) = 2^81 ∗ L(n).

From the algorithmic view of 1990 this means

L1990(n∗) = 2^81 ∗ L1990(n)
⇔ exp(sqrt(ln(n∗) ∗ ln ln(n∗))) = 2^81 ∗ exp(sqrt(ln(n) ∗ ln ln(n)))
⇔ sqrt(ln(n∗) ∗ ln ln(n∗)) = 81 ∗ ln(2) + sqrt(ln(n) ∗ ln ln(n)).

Furthermore we know that ln(n) = 512 ∗ ln(2) ≈ 360 and ln ln(n) ≈ 6, as well as sqrt(360 ∗ 6) ≈ 44, thus approximately

sqrt(ln(n∗) ∗ ln ln(n∗)) = 56 + 44 = 100
⇔ ln(n∗) ∗ ln ln(n∗) = 100^2 = 10000.

A little calculator acrobatics shows that this equation is satisfied for ln(n∗) ≈ 1400. Thus log2(n∗) = 1400/ln(2) ≈ 2020.


One would have to choose the numbers approximately 3.95 times longer than before. The calculation expenditure thereby increases by the factor 3.95^3 ≈ 61.

From the algorithmic view of 1995 this means:

L1995(n∗) = 2^81 ∗ L1995(n)
⇔ exp(1.923 ∗ (ln(n∗) ∗ (ln ln(n∗))^2)^(1/3)) = 2^81 ∗ exp(1.923 ∗ (ln(n) ∗ (ln ln(n))^2)^(1/3))
⇔ 1.923 ∗ (ln(n∗) ∗ (ln ln(n∗))^2)^(1/3) = 81 ∗ ln(2) + 1.923 ∗ (ln(n) ∗ (ln ln(n))^2)^(1/3)
⇔ (ln(n∗) ∗ (ln ln(n∗))^2)^(1/3) = 42.12 ∗ ln(2) + (ln(n) ∗ (ln ln(n))^2)^(1/3).

Furthermore we know that ln(n) = 512 ∗ ln(2) ≈ 360 and (ln ln(n))^2 ≈ 36, as well as (360 ∗ 36)^(1/3) ≈ 23, thus approximately:

(ln(n∗) ∗ (ln ln(n∗))^2)^(1/3) = 29 + 23 = 52
⇔ ln(n∗) ∗ (ln ln(n∗))^2 = 52^3.

A little calculator acrobatics shows that this equation is satisfied for ln(n∗) ≈ 2400. Thus log2(n∗) ≈ 2400/ln(2) ≈ 3462. One would have to choose the numbers approximately 6.76 times longer than before. The calculation expenditure thereby increases by the factor 6.76^3 ≈ 309.

We see: the algorithmic progress within 5 years has "as much" influence on the necessary length of the numbers as the increase of computer power expected within 80 years. Each forces an extension by approximately 1,500 bits for the considered period.
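The two hand calculations can be reproduced numerically. Here is a small sketch in Python (function names are mine; the constants are those of the text, and bisection replaces the "calculator acrobatics"). Because the hand calculation rounds its intermediate values, the exact results differ slightly, about 2060 resp. 3410 instead of 2020 and 3462:

    from math import log, sqrt

    # log of the 1990 / 1995 factoring complexities, as functions of ln(n)
    def logL1990(ln_n): return sqrt(ln_n * log(ln_n))
    def logL1995(ln_n): return 1.923 * (ln_n * log(ln_n) ** 2) ** (1 / 3)

    def required_ln_n(logL, bits_now=512, log_factor=81 * log(2)):
        target = logL(bits_now * log(2)) + log_factor   # L(n*) = 2^81 * L(n)
        lo, hi = 1.0, 10_000.0                          # bisection over ln(n*)
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if logL(mid) < target else (lo, mid)
        return lo

    for name, f in (("1990", logL1990), ("1995", logL1995)):
        print(name, "log2(n*) =", round(required_ln_n(f) / log(2)))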

(Fortunately, however, there are many applications where signatures must be valid only for a short time, e.g. in a digital payment system where one receives a statement of account every quarter and has a right of objection for only a few months.)

b) It is clear that for both system types one uses only the more strongly dimensioned systems for all data that are to be newly encrypted or signed, that new keys must additionally be exchanged, etc. If one trusts that the soon-breakable digital signature system is not yet broken, one can use it for the exchange of public keys of the stronger asymmetric cryptographic systems. Otherwise a new original key exchange outside of the computer network that is to be protected is necessary, just as when symmetric cryptographic systems are used for key exchange. If one wants to regulate something legally binding with the digital signature system, the public test key should be newly registered. In every case it is advisable to let at least the key registers confirm the exchange of public keys. To prevent key registers from declaring arbitrary keys invalid, they should possess a digital signature (if necessary in the old cryptographic system) of the owner for a corresponding message (key invalid from: date, time).

The proceeding is different for messages that were encrypted or signed using the old system that will soon be too weak. Secret data must be encrypted with the stronger cryptographic system. Depending on the application, i.e. on whether or not there is a safe environment for this re-encryption in which the keys needed for decryption of the present ciphertexts are known, this can happen either by decrypting the present ciphertexts with the soon-too-weak system of secrecy and then encrypting the plaintext with the more strongly dimensioned system of secrecy, or by an additional encryption of the present ciphertexts. In the case of asymmetric systems the latter might be the most frequent case. Independently of the use of symmetric or asymmetric systems of secrecy, the latter proceeding means that the implementation of the soon-too-weak system of secrecy and the applied keys must remain available. Avoiding the risk of re-encryption (decrypting with the weak, followed by encrypting with a strong system, so that the plaintext is present for a short time) thus has its price. Unfortunately, one can never be sure that all copies of the ciphertexts are encrypted more strongly as described. Thus the outlined proceeding helps only against attackers that appear after the stronger encryption and that, moreover, have difficulties finding another copy of the only weakly encrypted ciphertext. The outlined proceeding is therefore no substitute for correct dimensioning from the beginning, but it is better, of course, than just waiting and trying to keep the emerging insecurity of the too weakly dimensioned system of secrecy secret.

Signed data must be provided with an additional signature in the more strongly dimensioned signature system, best by the original signer. To this end one could oblige everybody during a transition period: everybody publishes more strongly dimensioned test keys (each signed, for at least occasional authentication, with the not-yet-broken signature system), and everybody is obliged to re-sign messages that he signed using the old system with the stronger system. This transition period must end, of course, before breaking the old signature system becomes cheap enough. If the original signer is not able or not willing to sign additionally, a time stamp service can be tasked: everyone who holds a message signed by someone else and wants to prevent the decay of this evidence submits the message together with the signature (in the old signature system) to a time stamp service. The service signs, in the new, more strongly dimensioned signature system, "message, signature in the old signature system, date, time". If this is submitted later, the time-stamped signature is as authentic as the old signature system was at "date, time", provided the time stamp service is protected against corruption and the new signature system is cryptographically secure at the time of submission. The protection of the time stamp service against corruption can be increased by the usual technique: instead of one signature in the new signature system, a signature is applied (independently of the others) by each of several independently implemented and operated computers, in each case in the new signature system. A message is considered correctly time-stamped only if enough (independent) signatures in the new signature system are submitted. Because everyone can protect his own interests autonomously when re-signing, the proceeding outlined here is far more satisfactory for authentication than the corresponding one for secrecy. This applies particularly if the end of the cryptographic security of digital signatures (and therewith the end of every transition period) can be determined with certainty by the use of fail-stop signatures.

c) I transmit the test key of an asymmetric signature system, because my partner can then later verify with its help whether the public key of an asymmetric system of secrecy that I transmit to him afterwards is authentic: I simply sign this key myself.

d) Yes, because an "upgrade" of signature systems is easier to realize securely than an upgrade of systems of secrecy. Furthermore, the test key must be registered for legal liability anyway.

3-12 This proceeding is hopeless in principle:

Even if governments have clearly more money than big corporations (which is far from clear), corporations (as well as curious neighbors) can focus their cryptanalytic efforts very strongly on individual senders and receivers, while the fight against crime (and also espionage) requires cryptanalysis in a certain breadth, i.e. of a lot of senders and receivers. Per key to be broken, governments might thus have rather less than more computer power than big corporations.

Beside this fundamental argument there is, from my point of view, naturally a row of other arguments why such a "key length compromise" is unreasonable:

With the increase in generally available computer power, the key length would have to be adapted continually (cp. exercise 3-11 a)), which cannot be managed meaningfully with regard to the secrecy of dated-back messages (cp. exercise 3-11 b)).

Criminals will not simply trust in such weak cryptographic systems of secrecy, but will if necessary encrypt with stronger ones in addition, and if necessary additionally use steganographic systems of secrecy, cp. chapters 4 and 9.

3-13 Authentication and secrecy in which order?

Let x be the message:

If asymmetric authentication is wanted, thus a digital signature, one should sign first and then encrypt, so that the receiver has a signed plaintext message to store after decrypting. The signature must be encrypted along with the message, because in general it contains information about the message. Thus: the sender creates k(x, s(x)) resp. c(x, s(x)), depending on whether a symmetric or an asymmetric system of secrecy is used.


If symmetric authentication suffices, the order does not matter with regard to usefulness: one can do it as above, thus creating k(x, MAC) (in this case one won't have an asymmetric system of secrecy). Or one can authenticate the encrypted message, thus creating a MAC for k(x). The latter has 3 advantages regarding effort and performance:

• When a one-time pad is used for secrecy, it "uses" fewer key bits as well as less arithmetic expenditure, because secrecy is not supposed to increase the length of the message while authentication does, and the expenditure of the second operation grows with the length of its input, i.e. the output of the first operation.

• If one checks authentication first and this test fails, one can save oneself the decrypting (and therewith expenditure).

• Checking authentication and decrypting can take place in parallel, so that the response time can be shortened.

3-14 s^2-mod-n generator

a) For n = 7 ∗ 11 = 77, s = 2 the following sequences result:

sequence si: 4, 16, 25, 9, 4, ...
sequence bi: 0, 0, 1, 1, 0, ...

Thus for the plaintext 0101, addition modulo 2 yields the ciphertext 0110. The decrypter creates the sequence bi in the same way and, using addition modulo 2, calculates from the ciphertext 0110 the plaintext 0101.

And because it was so much fun, a second example: for n = 7 ∗ 11 = 77, s = 13 the following sequences result:

sequence si: 15, 71, 36, 64, 15, ...
sequence bi: 1, 1, 0, 0, 1, ...

Thus for the plaintext 0101, addition modulo 2 yields the ciphertext 1001.
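As a sketch (Python; names are mine), the generator and its use as a pseudo-one-time pad from this example fit in a few lines:

    # s^2-mod-n generator: square repeatedly, take the last bit of each inner
    # state, XOR the resulting bit stream onto the plaintext.
    def s2_mod_n_stream(s, n, length):
        bits = []
        for _ in range(length):
            s = s * s % n          # next inner state s_i
            bits.append(s & 1)     # b_i = last bit of s_i
        return bits

    n = 77
    key = s2_mod_n_stream(2, n, 4)              # [0, 0, 1, 1] (states 4, 16, 25, 9)
    plaintext = [0, 1, 0, 1]
    cipher = [p ^ k for p, k in zip(plaintext, key)]
    print(cipher)                               # [0, 1, 1, 0], as computed above
    assert [c ^ k for c, k in zip(cipher, key)] == plaintext   # decryption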

b) First we have to calculate the sequence of the other si backwards, using s4 = 25. We need the CRA, thus we first calculate the needed base vectors. Euclidean algorithm:

11 = 3 ∗ 3 + 2; 3 = 1 ∗ 2 + 1;

Backwards:

1 = 3− 1 ∗ 2 = 3− 1 ∗ (11− 3 ∗ 3) = −11 + 4 ∗ 3

Thus the base vectors are 12 (≡ 0 mod 3, ≡ 1 mod 11) and −11 ≡ 22 (≡ 1 mod 3, ≡ 0 mod 11), and we have the formula s = −11 ∗ sp + 12 ∗ sq = 11(sq − sp) + sq.

Furthermore we can pre-calculate (p + 1)/4 = 1 and (q + 1)/4 = 3.

s3: (Remark: here we can't just take 5, the root known from calculating with integers, because we need that one of the 4 roots of 25 mod 33 which itself is a quadratic remainder.)


Modulo 3: s4 ≡ 1 ⇒ s3 ≡ 1^1 ≡ 1
Modulo 11: s4 ≡ 3 ⇒ s3 ≡ 3^3 ≡ 5
⇒ s3 = 11(5 − 1) + 5 = 49 ≡ 16 mod 33.

s2:
Modulo 3: s3 ≡ 1 ⇒ s2 ≡ 1^1 ≡ 1 (one can see that this will stay the same)
Modulo 11: s3 ≡ 5 ⇒ s2 ≡ 5^3 ≡ 4
⇒ s2 = 11(4 − 1) + 4 = 37 ≡ 4 mod 33.

s1:
Modulo 11: s2 ≡ 4 ⇒ s1 ≡ 4^3 ≡ 16 ∗ 4 ≡ 9
⇒ s1 = 11(9 − 1) + 9 = 97 ≡ 31 mod 33.

s0:
Modulo 11: s1 ≡ 9 ≡ −2 ⇒ s0 ≡ (−2)^3 ≡ −8 ≡ 3
⇒ s0 = 11(3 − 1) + 3 = 25 mod 33.

The last bits of these are b0 = 1, b1 = 1, b2 = 0, b3 = 0. Thus the plaintext is 1001 ⊕ 1100 = 0101.
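A sketch of the backward calculation in Python (names are mine; base vectors 22 and 12 as above): each step extracts that square root mod 33 which is itself a quadratic remainder, possible because both prime factors are ≡ 3 mod 4.

    p, q, n = 3, 11, 33

    def crt(ap, aq):
        # 22 = 1 mod 3 and 0 mod 11; 12 = 0 mod 3 and 1 mod 11
        return (22 * ap + 12 * aq) % n

    def step_back(s):
        sp = pow(s % p, (p + 1) // 4, p)   # the QR square root mod p
        sq = pow(s % q, (q + 1) // 4, q)   # the QR square root mod q
        return crt(sp, sq)

    states = [25]                          # s4 = 25 is given
    for _ in range(4):
        states.append(step_back(states[-1]))
    print(states)                          # [25, 16, 4, 31, 25] = s4..s0

    b = [s & 1 for s in reversed(states[1:])]          # b0..b3 = 1, 1, 0, 0
    print([c ^ k for c, k in zip([1, 0, 0, 1], b)])    # plaintext [0, 1, 0, 1]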

c) What happens here does not depend on the s^2-mod-n generator in particular, but happens whenever one adds bitwise a one-time pad (cp. exercise 3-7 d)) or a more or less secure pseudo-one-time pad (stream cipher in the narrower sense, cp. 3.8.1):

If a bit of the ciphertext flips, then exactly this bit of the plaintext is false after decryption using addition modulo 2. But if a bit gets lost, then the whole plaintext is wrong from this position on:

Let the i-th bit of the plaintext be xi. The i-th ciphertext bit is then Si := xi ⊕ bi. If S∗i = Si ⊕ 1 arrives at the receiver instead of Si, he will decrypt it as S∗i ⊕ bi = xi ⊕ 1.

If Si is lost, however, he will decrypt each bit from the i-th position on with the wrong bi: he takes Si+1 for Si and decrypts it with bi, and so on.

d) The s^2-mod-n generator in its normal use as a symmetric (and even more so as an asymmetric) system of secrecy guarantees as little authentication as the Vernam cipher (cp. part c)).

Nevertheless the answer to the question is: yes. One may take an information-theoretically secure authentication code and use the sequence generated by the s^2-mod-n generator as the key of the authentication system. (Thus the key that must actually be exchanged is only the seed of the s^2-mod-n generator.)

The resulting authentication system could be broken if the attacker were able to break the authentication code or the s^2-mod-n generator. But the authentication code is information-theoretically secure, thus only the s^2-mod-n generator remains.

For specialists: one ought to prove that the attacker, in order to break the system, actually has to break the QRA (quadratic remainder assumption), i.e. that the resulting system is cryptographically secure. Sketch: if one can break the authentication code when its keys are chosen by the s^2-mod-n generator, while it is unbreakable with random keys, this is a measurable difference between pseudo-random sequences and real random sequences. But it is proven that, under the QRA, the sequences of the s^2-mod-n generator cannot be distinguished from real random sequences by any statistical test.

e) The key exchange remains the same. However, after the first message each of the two parties has to memorize not the seed s anymore but the current si. This requires that the receiver decrypts the ciphertexts exactly once and in the same order as the encrypter encrypted them. Otherwise the receiver would have to memorize more. Compare 3.8.2.

f) Following 3.4.1.7 and 3.4.1.8 as well as exercise 3-9 k), each square has exactly 4 roots mod n (with n = p ∗ q, p ≡ q ≡ 3 mod 4, p ≠ q), of which exactly one is a square again. The root of 1 which is itself a square again is 1. Thus: if s0 ≠ 1, all following inner states are ≠ 1, since 1 results only from the square 1. Hence the probability of reaching the inner state 1 after i steps is as high as the probability of randomly choosing one of the four roots of 1 as seed s, thus 4/|Z∗n|.

3-15 GMR

a) R = 28 is not within the domain of the permutation pair, because gcd(R, n) = gcd(28, 77) = 7. Thus one can't sign anything using it. Therefore the example is done for R = 17. (Since 17 ≡ 28 mod 11, somebody who calculates with 28 can recognize his calculations modulo 11 again.)

Since gcd(17, 77) = 1 and

(17/77) = (17/11) ∗ (17/7) = (6/11) ∗ (3/7) = (6^((11−1)/2) mod 11) ∗ (3^((7−1)/2) mod 7) = (−1) ∗ (−1) = 1,

R = 17 lies in the domain of the permutation pair.

Since the messages are always 2 bits long, one does not need a prefix-free transformation. Thus S := f_m^(−1)(R) is to be created directly. This is (cp. 3.5.3):

S = f_01^(−1)(17) = f_1^(−1)(f_0^(−1)(17)).

Step 1: First x := f_0^(−1)(17) is needed. Following 3.5.2 one first has to test whether 17 or −17 is a square (cp. 3.4.1.7).

Modulo p = 11: 17 ≡ 6; Euler criterion: 6^((11−1)/2) = 6^5; 6^2 ≡ 3; 6^4 ≡ 3^2 ≡ 9; 6^5 ≡ 9 ∗ 6 ≡ −1 ⇒ 17 ∉ QRp ⇒ 17 ∉ QRn.

Thus the root w of −17 ≡ 60 is to be extracted.
Modulo p = 11: 60 ≡ 5; 5^((11+1)/4) = 5^3 ≡ 4.
Modulo q = 7: 60 ≡ 4; 4^((7+1)/4) = 4^2 ≡ 2.

(One can shorten the following, but it is mentioned for completeness: applying the CRA with the "base vectors" from exercise 3-9 g) yields w = −21 ∗ 4 + 22 ∗ 2 = −40 ≡ 37. This w is the root which is a square again. x is supposed to be the one with Jacobi symbol +1 (thus one of ±w) which is < 77/2. This holds for x = 37.)


Step 2: Now S = f_1^(−1)(x) = f_1^(−1)(37) is calculated, thus S ∈ Dn with (2S)^2 ≡ ±37.

(First we would have to test whether x or −x is a square. But we know this from the first step: we calculated in any case the root w, which was a square. Thus one can proceed immediately with w. Furthermore, we now have to calculate mod p and q again anyway, solely for extracting the root of w. Thus we did not need to compose w using the CRA, i.e. we could have left out all parenthesized parts.)

Thus: root w∗ of the result of step 1:

Modulo 11: 4^((11+1)/4) = 4^3 ≡ 9.
Modulo 7: 2^((7+1)/4) = 2^2 ≡ 4.

Next one has to divide by 2. This as well can happen solely mod p and q (extended Euclidean algorithm for determining the multiplicative inverse of 2 and subsequent multiplication by it, or halving in Z of w∗ or of w∗ + p resp. w∗ + q):

Modulo 11: w∗/2 ≡ 10.
Modulo 7: w∗/2 ≡ 2.

Thus the demanded signature S is one of the 4 numbers CRA(±10, ±2). Before the CRA we have to ensure Jacobi symbol +1. Instead of the Euler criterion we can use (3.5.1) that

(2/p) = −1 and (2/q) = +1.

From this it follows, because the Legendre symbols of w∗ mod p and mod q were +1 each, that

((w∗/2)/p) = −1 and ((w∗/2)/q) = +1.

Thus we create CRA(−10, 2) (one could also take CRA(10, −2)):

CRA(−10, 2) = CRA(1, 2) = −21 ∗ 1 + 22 ∗ 2 = 23.

And again we take just that one of the numbers ±S which is < 77/2 ⇒ S = 23.

Annotations:

1. It doesn't matter whether one first divides by 2 and then applies the CRA, or the other way round.

2. If you receive 12 as signature, then you've forgotten to pay attention to the Jacobi symbols. 12 has the Jacobi symbol

(12/77) = (12/11) ∗ (12/7) = (1^5 mod 11) ∗ (12^3 mod 7) = (1^5 mod 11) ∗ ((−2)^3 mod 7) = 1 ∗ (−1) = −1.


Thus 12 is not in the domain of the permutations. The dodgy thing is that the rest of the signature test appears to work anyway: f0(f1(12)) = 17, because f1(12) = 4 ∗ 12^2 mod 77 = 4 ∗ 144 mod 77 = 4 ∗ (−10) mod 77 = 37 and f0(37) = −(37^2) mod 77 = 17. Thus the test whether the signature lies within the domain must be part of the signature test.
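The domain test this annotation calls for can be sketched as follows (Python; names are mine; the Jacobi symbol is computed via Euler's criterion with the known factors, which a verifier without p and q would replace by the standard Jacobi algorithm):

    from math import gcd

    n, p, q = 77, 7, 11

    def legendre(x, r):                  # Euler's criterion, result +1 or -1
        return 1 if pow(x % r, (r - 1) // 2, r) == 1 else -1

    def in_domain(S):                    # membership test for the domain D_n
        return gcd(S, n) == 1 and legendre(S, p) * legendre(S, q) == 1 and S < n / 2

    print(in_domain(23))   # True:  the signature computed above
    print(in_domain(12))   # False: Jacobi symbol is -1
    print(in_domain(28))   # False: gcd(28, 77) = 7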

b) Because 8 messages are to be signed, the reference tree has depth 3, as in the figures in 3.5.4.

Analogous to figure 3-26, the fourth signature, i.e. the one attached to R011, consists of the following parts ("parts" are here exactly the nodes: the black references and the gray signatures):

[Figure: the fourth signature, attached to R011, analogous to figure 3-26. Shown are the references r_ε, r_0, r_1, r_00, r_01, r_010, r_011 and R011, the K-signatures (e.g. Ksig = f_präf(r0 r1)^(−1)(r_ε)), the R-signature Rsig = f_präf(R011)^(−1)(r_011), and the N-signature Nsig = f_präf(m)^(−1)(R011).]

Comparing this with figure 3-26 one can see what must be chosen resp. calculated anew: the only new reference is R011. New signatures are that of R011 with respect to r_011 and (of course) that of the message m with respect to R011. If one considers, as in figure 3-27, in which order one best calculates, the result is:


[Figure: order of computation for the fourth signature, with legend: value already present; random choice of value; computation of value; direction of computation.]

Analogously, the fifth signature, i.e. the one attached to R100, consists of the references r_ε, r_0, r_1, r_10, r_11, r_100, r_101 and R100, the K-signatures (e.g. Ksig = f_präf(r0 r1)^(−1)(r_ε)), the R-signature Rsig = f_präf(R100)^(−1)(r_100), and the N-signature Nsig = f_präf(m)^(−1)(R100).

[Figure: the fifth signature, attached to R100.]

By comparing with the above fourth signature one can see that only the topmost three references (r_ε, r_0, r_1) are already there, as well as the topmost K-signature. (This is just the change from the left to the right half of the tree, thus, after the first signature, the most time-consuming operation of all.) For the order of computation it results:

[Figure: order of computation for the fifth signature, over the references r_ε, r_0, r_1.]

Of course one can send everything that was signed, but this is only necessary for parallel testing. If testing proceeds sequentially, one does not need to send the inner nodes of the signatures: the receiver of the signature has to recalculate all gray arrows against their direction and only compare whether the root of the tree r_ε that is known to him results.

c) It is to be shown: if there were an algorithm that finds collisions of the functions f_pref(m) efficiently, then there would be an algorithm that finds collisions for the pairs (f0, f1).

For this it is shown how to calculate directly from each collision of f_pref(m) a collision of (f0, f1). Let the given collision be m, m∗, S, S∗ with m ≠ m∗ and f_pref(m)(S) = f_pref(m∗)(S∗). Set v := pref(m) and w := pref(m∗). Let us call the bits of v = v0 v1 ... vk resp. w = w0 w1 ... wk′. Let vi and wi be the first bits from the left in which v and w differ. Since v and w are prefix-free codings of different messages m, m∗, it holds that i ≤ min(k, k′) and of course 0 ≤ i. Then:

fv(S) = fv0 · fv1 · ... · fvi · ... · fvk(S) = fw0 · fw1 · ... · fwi · ... · fwk′(S∗) = fw(S∗)

Since in both lines the same permutations are applied for all j with 0 ≤ j < i (there vj = wj), the equality of the images implies the equality of the preimages. Thus

fvi · ... · fvk(S) = fwi · ... · fwk′(S∗).

Consequently

fvi(fvi+1 · ... · fvk(S)) = fwi(fwi+1 · ... · fwk′(S∗))

is a collision of f0 and f1.

Prefix-free coding is needed to ensure that v and w differ in some bit position which exists in both (otherwise there would be either no vi or no wi).


3-16 RSA

a) Key generation: let p = 3, q = 11, therefore n = 33.

Therefore: Φ(n) = (p − 1) • (q − 1) = 20.

Let c = 3. Test whether gcd(3, 20) = 1 (Euclidean algorithm). That is the case.

In order to find the exponent for decryption, one can either use the conventional algorithm or calculate it mod p and q separately:

Conventional algorithm: we are looking for d = c^(−1) mod Φ(n), here d = 3^(−1) mod 20. The extended Euclidean algorithm gives us d = 7.

More efficient algorithm: dp := c^(−1) mod (p − 1), in our case dp := 3^(−1) ≡ 1 mod 2; dq := c^(−1) mod (q − 1), here dq := 3^(−1) mod 10. From the extended Euclidean algorithm we get dq = 7.

Encryption: take m = 31 for the plaintext. The corresponding ciphertext S is

S = 31^3 ≡ (−2)^3 ≡ −8 ≡ 25 mod 33.

Decryption:

Conventional algorithm: S^d = 25^7 ≡ (−8)^7 ≡ 64^3 • (−8) ≡ (−2)^3 • (−8) ≡ 64 ≡ 31 mod 33.

31 is indeed the plaintext.

More efficient algorithm:
Mod 3: S^dp = 1^1 = 1.
Mod 11: S^dq ≡ 3^7 = 9^3 • 3 ≡ (−2)^3 • 3 ≡ 3 • 3 = 9.

Using the CRA (as usual) we get m = 31.
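The whole toy example can be re-run as a sketch in Python (names are mine; pow(c, -1, phi) needs Python 3.8 or newer):

    p, q = 3, 11
    n, phi = p * q, (p - 1) * (q - 1)      # 33, 20
    c = 3                                  # public exponent, gcd(3, 20) = 1
    d = pow(c, -1, phi)                    # 7, via the extended Euclid

    m = 31
    S = pow(m, c, n)                       # 25

    assert pow(S, d, n) == m               # conventional decryption

    dp, dq = d % (p - 1), d % (q - 1)      # 1 and 7
    mp, mq = pow(S, dp, p), pow(S, dq, q)  # 1 and 9
    print((22 * mp + 12 * mq) % n)         # 31, using the base vectors 22 and 12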

b) To make things simple we can just modify the example used in a): let key generation be identical (only c is now called the test key and d the signing key).

If the plaintext just so happens to be m = 25, then the signature will be the 3rd root of m, which gives us Sig = 31 (see above), and testing will result in Sig^3 = 31^3 ≡ 25.

Note:

1. If you want to use concelation for some messages and authentication for others, you should choose different key pairs. Otherwise, in order to get the plaintext for an encrypted RSA block, one could just get the owner of the key to sign that block. This risk can be circumvented, though, by using a one-way hash function before signing a message, or by designing the cryptographic protocol in a way that prevents signing of arbitrary messages. But why take this risk in the first place just to save that little bit of effort in key generation and storage in RSA?

2. Neither redundancy nor a hash function was used in our example; this system is therefore weak against Davida attacks and should not be used unmodified.

c) They are encrypted to 0 and 1 respectively, independently of the chosen key pair. We see that these plaintexts should be avoided by using an appropriate encoding of the input to RSA.

If blanks (the characters that are by far most likely to concatenate to long sequences) are not encoded as 0...0, this will be the case automatically with reasonable probability. One can also take care of this by means of the redundancy and hash functions which should be used anyway.

d) One approach would be inserting 100 random bits at fixed positions in each RSA block.

We need enough random bits that it becomes impossible for an attacker who sees the ciphertext S to just guess a likely plaintext m and do a complete search to find out whether S contains m. To do this, he would just encrypt m using the public key c, iterating through all combinations of random bits. With just 8 random bits this would take only 256 iterations.

Timestamps, even if encoded in more than 8 bits, are not random enough. An attacker might just try out all probable timestamp values.

e) If the randomly chosen bits are part of the output of the concelation system, the proposed indeterministic flavor of RSA is not safe against active attacks: an attacker could just use the same approach as with deterministic RSA.

The answer becomes more complex if the random bits are not part of the output. But, as was pointed out in 5.4.6.8 and more elaborately in [PfPf 90], it will still be "no", at least if they are the first or last 100 bits.

Therefore, for a concrete implementation of RSA, for example 100 random bits should be chosen in combination with redundancy. The message format could then be m∗ = (m, z, h(m, z)) (where the ordering is irrelevant), m being the message itself, z the random bits and h a one-way hash function. The encrypted message is m∗^c mod n.

If the range of h is 160 bits (see 3.8.3), we need to subtract 260 bits from the overall RSA block size in order to get the net RSA block size. 740 bits remain if the modulus is 1000 bits long, which is 74 %.

f) Random bits can be inserted either into the to-be-signed message itself or into the block that is exponentiated for RSA (or both).

If they are inserted into the to-be-signed message, they will be hashed with it. It's crucial for the security of the signature that the hash function is collision-resistant for every choice of random bits. This should be the case for a good hash function; but why take the additional risk and the extra effort of hashing more bits?

If random bits are inserted into the block signed by RSA, the risk of multiplicative attacks is increased. Why take that risk?

All in all I’d always sign deterministically using RSA.

g) According to 3.4.1.1, the complexity of signing an RSA block of length 2l with an exponent of length |c| is proportional to (2l)^2 • |c|. The expense per bit when signing extremely short messages doesn't differ, because a whole block has to be processed anyway.

When encrypting very long messages, note that the number of bits that can be put into a single RSA block grows with l. Neglecting the random number and the hash value, which also take up space, we can encrypt 2l bits in each RSA block. Therefore the complexity per bit for very long messages is proportional to

(2l)^2 • |c| / (2l) = 2l • |c|.

Under the best of circumstances the complexity thus grows only proportionally to l. (In reality it's even less, at least for very large values of l, since the fast algorithms mentioned in 3.4.1.1 really pay off. On the other hand, for extremely long messages hybrid encryption (3.1.4) is used most of the time instead of plain RSA.)

h) Let l be the security parameter, i.e. the length of p and q.

Mod n: what we need is x^d mod n (d = c^(−1) mod Φ(n) has already been calculated while generating the key). The lengths of x and d are each about 2l.

Exponentiation using "square-and-multiply" therefore breaks down into (3/2) • 2l = 3l modular multiplications of numbers of length 2l (counting squaring as a multiplication).

Mod p, q: we need x^dp mod p, x^dq mod q and the CRA for those values (dp, dq, and the "base vectors" for the CRA have already been calculated during key generation). So we're left with two exponentiations, the arguments of which only have length l (x, because it can be reduced mod p and q resp., and dp and dq because they were calculated mod (p − 1) and mod (q − 1), respectively), plus the CRA. The latter operation consists of only two multiplications and is therefore negligible compared to the exponentiations.

That makes about 2 • (3/2) • l = 3l modular multiplications of numbers of length l.

So the ratio between the overall complexities of the two approaches is just the ratio between modular multiplications of numbers of size 2l and of size l. (Standard) modular multiplication consists of multiplication and division, and those operations both have complexity O(l^2). Doubling l therefore quadruples the expense.


i) A random exponent will have roughly size 2l − 1. Instead of 2l − 2 squaring operations and (2l − 2)/2 multiplications for a random exponent, only 16 squaring operations and one multiplication have to be performed for c = 2^16 + 1. Therefore we only have to do 17 operations instead of 3l − 3. With this, the public operation is sped up by the factor (3l − 3)/17. If l = 512, which is more realistic, the factor will be (3 • 512 − 3)/17 ≈ 90.

j) Great idea: since lcm(p − 1, q − 1) < Φ(n) = (p − 1)(q − 1), because p − 1 and q − 1 are at least both divisible by 2, we get, by substituting lcm((p − 1), (q − 1)) for Φ(n):

c will be chosen from a slightly smaller interval in step 3 this way, although the same numbers in that interval can be chosen. In step 4 the multiplicative inverse d of c will be smaller on average. We now get

c • d ≡ 1 mod lcm(p − 1, q − 1).

This yields

c • d ≡ 1 mod p − 1 as well as c • d ≡ 1 mod q − 1.

The validation of encryption and decryption in 3.6.4.1 still holds.

Does this modification interfere with the security of RSA? I believe that's very, very improbable, because only the choice of c, which has to be published anyway, changes slightly; and in practice c is very often chosen to be small anyway to make encryption less costly (see 3.6.1 and also part i) of these exercises).

Those who wish to be absolutely puristic about it may want to use this hybrid approach:

Step 3 is done as in 3.6.1 using Φ(n), and in step 4 first Φ(n) is used and then lcm(p − 1, q − 1), where the smaller result is selected.

This way nothing changes as seen from the outside; the security of RSA stays completely unchanged. We only save a bit of effort while decrypting, since d is a little smaller. Not much, though, because p − 1 and q − 1 won't have too many common prime factors if p and q are chosen randomly and stochastically independently.

k) We recycle the example from a), so n1 = 33.

We choose
n2 = 5 • 17 = 85
n3 = 23 • 29 = 667.

We get
Φ(n2) = 4 • 16 = 64
Φ(n3) = 22 • 28 = 616
gcd(3, 64) = 1
gcd(3, 616) = 1


(In case you chose 7 or 13 as prime factors of n, you probably found Φ(n) to be divisible by 3.)

For the plain text m = 31 we get

S1 = 31^3 ≡ (−2)^3 ≡ −8 ≡ 25 mod 33
S2 = 31^3 ≡ 29791 ≡ 41 mod 85
S3 = 31^3 ≡ 29791 ≡ 443 mod 667

The extended Euclidean algorithm gives us −18 • 33 + 7 • 85 = 1.

From the CRA we get −18 • 33 • 41 + 7 • 85 • 25 = −9479 ≡ 1741 mod 33 • 85.
(Check: 1741 mod 33 = 25, 1741 mod 85 = 41.)

Once more, this time with 33 • 85 = 2805 and 667:

The extended Euclidean algorithm⁴ yields: 778 • 667 − 185 • 2805 = 1.

The CRA gives us: 778 • 667 • 1741 − 185 • 2805 • 443 = 673566391 ≡ 29791 mod (2805 • 667).
(Check: 29791 mod 667 = 443, 29791 mod 2805 = 1741.)

The third root of 29791 is 31, which is exactly the plaintext m.
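What part k) demonstrates is a real attack on plain RSA with exponent 3 sent to three recipients. A sketch in Python (names are mine; pow(a, -1, m) needs Python 3.8+, and the float cube root suffices only for numbers this small):

    moduli = [33, 85, 667]
    cipher = [pow(31, 3, n) for n in moduli]       # 25, 41, 443 as above

    def crt_pair(a1, n1, a2, n2):
        # u*n1 = 1 mod n2 and v*n2 = 1 mod n1, via the extended Euclid in pow()
        u, v = pow(n1, -1, n2), pow(n2, -1, n1)
        return (a1 * v * n2 + a2 * u * n1) % (n1 * n2)

    x, N = cipher[0], moduli[0]
    for a, n in zip(cipher[1:], moduli[1:]):       # fold in 85, then 667
        x, N = crt_pair(x, N, a, n), N * n         # 1741 mod 2805, then 29791

    m = round(x ** (1 / 3))                        # integer cube root, no key needed
    print(x, m)                                    # 29791 31
    assert m ** 3 == x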

l) Breaking RSA completely means being able to calculate d for any given n and c. The proof mentioned in the exercise shows that knowing n, c, and d you can factor n; breaking RSA completely therefore is just as hard as factoring. Unfortunately, this proof does not even prove the impossibility of breaking RSA universally without knowledge of d, i.e. of finding a procedure that is equivalent to the key without being able to factorize n.

⁴ Since computation gets difficult with these numbers, I include it here.
Euclidean algorithm:
2805 = 4 • 667 + 137
667 = 4 • 137 + 119
137 = 1 • 119 + 18
119 = 6 • 18 + 11
18 = 1 • 11 + 7
11 = 1 • 7 + 4
7 = 1 • 4 + 3
4 = 1 • 3 + 1
Going back:
1 = 4 − 3 = 4 − (7 − 4) = −7 + 2(11 − 7) = 2 • 11 − 3(18 − 11) = −3 • 18 + 5(119 − 6 • 18) = 5 • 119 − 33(137 − 119) = −33 • 137 + 38(667 − 4 • 137) = 38 • 667 − 185(2805 − 4 • 667) = 778 • 667 − 185 • 2805.

3-17 DES

a) Encryption:

K1 = 011, K2 = 010
L0 = 000, R0 = 101
L1 = R0 = 101, R1 = L0 ⊕ f(R0, K1) = 000 ⊕ (R0 • K1 mod 8) = 101 • 011 mod 8 = 5 • 3 mod 8 = 111
L2 = R1 = 111, R2 = L1 ⊕ f(R1, K2) = 101 ⊕ (111 • 010 mod 8) = 101 ⊕ (7 • 2 mod 8) = 101 ⊕ 110 = 011

Exchanging both halves yields the ciphertext 011111.

Decryption:

We're using the variable names from the pictures in 3.7.3. Go there for explanation.

R2 = 011, L2 = 111 are given.
R1 = L2 = 111, L1 = R2 ⊕ f(R1, K2) = 011 ⊕ (111 • 010 mod 8) = 011 ⊕ (7 • 2 mod 8) = 011 ⊕ 110 = 101
R0 = L1 = 101, L0 = R1 ⊕ f(R0, K1) = 111 ⊕ (101 • 011 mod 8) = 111 ⊕ (5 • 3 mod 8) = 111 ⊕ 111 = 000

Since our choice of variable names includes an implicit exchange of values from left to right, we don't need to exchange both halves explicitly. The plaintext therefore reads 000101, which is exactly the one that was encrypted.
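A sketch of this toy cipher in Python (names are mine) makes the encrypt/decrypt symmetry explicit; decryption is the same loop with the round keys reversed:

    def f(r, k):
        return (r * k) % 8          # the toy round function f(R, K) = R*K mod 8

    def encrypt(l, r, keys):
        for k in keys:
            l, r = r, l ^ f(r, k)   # one Feistel round
        return r, l                 # final exchange of the two halves

    def decrypt(l, r, keys):
        for k in reversed(keys):
            l, r = r, l ^ f(r, k)
        return r, l

    keys = [0b011, 0b010]
    c = encrypt(0b000, 0b101, keys)
    print(format(c[0], '03b'), format(c[1], '03b'))   # 011 111
    assert decrypt(*c, keys) == (0b000, 0b101)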

b) Tables are created explicitly so that every result can be looked up immediately. Where possible, multiple S-boxes are integrated into one (in DES at least two: an array of 2^12 entries).

c) The trick in fast software implementations of these systems is to store permutations, which would otherwise take many bit operations, in lookup tables. A table with 2^32 entries would be too large, though. As an example we'll demonstrate how to get along with only 4 tables of 2^8 entries each:

Entries are 32 bits long, and the i-th table tells us where the i-th group of 8 input bits is permuted to.

Example: 2nd table, i.e. bits 9-16. Let W(9), ..., W(16) = 28, 13, 1, 5, 30, 14, 2, 19. In every entry, bit 9 will be at position 28, bit 10 at position 13, ..., bit 16 at position 19, and all other bits will be 0.

The final result of the permutation is achieved by indexing the i-th table with the i-th set of 8 bits from the input and taking the binary OR or XOR of all 4 results.
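A sketch of the table trick in Python (names are mine; 0-based bit numbering, and W is an arbitrary example permutation, not the one from the text):

    W = [(i + 13) % 32 for i in range(32)]      # W[i] = target position of bit i

    tables = []
    for t in range(4):                          # table t handles input bits 8t..8t+7
        tab = []
        for byte in range(256):
            out = 0
            for j in range(8):
                if byte >> j & 1:
                    out |= 1 << W[8 * t + j]    # place bit 8t+j at its target
            tab.append(out)
        tables.append(tab)

    def permute(x):                             # four lookups and three ORs
        return (tables[0][x & 0xFF] | tables[1][x >> 8 & 0xFF]
                | tables[2][x >> 16 & 0xFF] | tables[3][x >> 24 & 0xFF])

    print(hex(permute(0x00000001)))             # 0x2000: bit 0 moved to position 13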


d) For one pair of plaintext block and corresponding ciphertext block we try out different keys until we find one that matches. This takes 2^55 attempts on average. After that we check whether the key we found is also correct for other pairs of plaintext and ciphertext blocks, which is very likely. Thus the average expense is 2^55 DES encryptions.

e) As was laid out in 3.7.6, we choose the complement of one of the given plaintext blocks, i.e. we take the complement x̄ of the plaintext block x and ask for the corresponding ciphertext. With this approach it takes 2^54 attempts on average to obtain the key, which is half as many.

More formally:

The plaintext-block/ciphertext-block pair (x, DES(k, x)) is given.

We form x̄ and ask for DES(k, x̄).

Because of the complementarity property we have DES(k̄, x̄) = ¬DES(k, x), and likewise ¬DES(k, x̄) = DES(k̄, x). So we have the ciphertexts of x encrypted with k as well as with k̄ at our hands for the attack.

In order to find the key k we try out different keys k′ until encryption of x yields either DES(k, x) or ¬DES(k, x̄). k will be k′ or k̄′, respectively.

If we make sure that each of the 2^56 possible key values is tried out at most once, as k′ or as k̄′, the search terminates after at most 2^55, on average after 2^54 steps, where every step consists of one DES encryption and two comparison operations between the computed ciphertext and the two given ciphertexts. Since the comparison operations are much less costly than the encryption, the expense of this chosen-plaintext attack is only half as big as that of the known-plaintext attack described earlier.

In the same way as above we have to test whether k is the right key for other plaintext blocks as well.

f) There is a linear equation for every one of the 64 ciphertext bits, applied to plaintext bits according to the permutations and to key bits according to the permutations and the partial-key generation, with known coefficients, because the permutations and the partial-key generation are public and fixed. Since plaintext and ciphertext are known, we get 64 linear equations in 56 unknowns. Every software package for equation solving can figure those out.

It obviously doesn't matter whether the block length is 64 or 128 bits, or whether the key length is 56 or 256 bits. The algorithm evidently breaks every Feistel cipher with known, fixed permutations. It probably can be modified to work on arbitrary linear block ciphers.

3-18 Modes

a) There is no redundancy. More precisely: a given ciphertext can be decrypted into any arbitrary plaintext. When re-encrypted, the resulting ciphertext will of course be identical to the input, in particular its last block. If the messages don't contain any internal redundancy, e.g. in files that only contain numbers, the attacker might just send arbitrary ciphertexts.

b) (This scheme is a bit better: in general the last plaintext block of some random ciphertext won't be all zeros, so the attack will be spotted. But:)

If the ciphertext is at least three blocks long, the attacker can modify all but the last two. Since the mode CBC synchronizes after two blocks, the last block will still be decrypted correctly to a block of zeros and the attack won't be detected automatically.

c) Now, if some ciphertext block is altered, decryption will result in something completely different. Thus there will be a wrong block in the plaintext block buffer when decrypting the next block. In general the resulting error won't be eliminated by synchronization, because a function of the last block is added to the next block after decryption, so decryption will fail again and result in a wrong plaintext buffer all over again.

{Arguing that PCBC is a synchronous cipher, which would render the attack ineffective, would be wrong though. For this the definition of "synchronous" in 3.8.1 is too weak, because even dependency on the bit position in the message satisfies it. Regard the following example for clarification: OFB is a synchronous stream cipher, but the ciphertext can be modified at will, as long as no ciphertext bits of the authenticator are changed and no ciphertext is omitted.}

d) ECB: The attacker can't achieve much, because the block cipher is assumed to be secure and it is used for nothing more than as a block cipher. So only the disadvantages remain that were mentioned along with the definition of block ciphers and their respective attacks: deterministic block ciphers can be attacked actively by having likely plaintexts encrypted. The resulting ciphertexts can then be compared to intercepted ciphertexts. With a bit of luck some of the intercepted ciphertexts can be "decrypted" this way, and even without luck certain plaintexts can be ruled out.

CBC for concelation: The attacker can attack the system actively by inserting values into the buffers. If these values are simply recycled, i.e. unless every message begins with a value that is chosen by the sender, then after an active attack on the first block the same attack as on ECB can be used with the same probability of success. If the values in the buffers are reset after active attacks, the attacker can still scan the ciphertext for blocks he knows (i.e. from an earlier attack on ECB or CBC). For these blocks he can then proceed to calculate the respective plaintext in CBC: he just needs to subtract the preceding ciphertext block. But this kind of search is harder, because the attacker can't predict the encrypted values as well (and have them encrypted during his attack). His chances of success are slimmer than with ECB if the buffers are reset appropriately.


CBC for authentication: If single blocks are authenticated, everything holds that was said for concelation. If many blocks are authenticated at a time, then even if the buffers are not reset, the attacker has to guess their precise values, which is infeasible if reasonable block lengths are chosen. This assumes that the attacker can't just "ask for" the authenticated values already during his attack, because no cryptographic system is secure against that; the embedding protocol is responsible for taking care of this. The simplest approach is to have the partner create part of the message that is to be authenticated.

CFB for concelation: The attacker succeeds for the next output block precisely if he knows the encryption of the value in the shift register. If the shift register isn't reset for every message, he can in an active attack ask for the encryption of a chosen plaintext (this works for ECB, CBC and CFB), leave that plaintext in the shift register, have a cup of coffee, wait for the next output unit, subtract the known value, and celebrate the successful attack...

CFB for authentication: ditto.

OFB: The attacker either needs to predict the value of the shift register or insert it himself. The latter can't be done by manipulating the plaintext stream, because it's not fed back. Success of an active attack depends solely on the initialization of the shift register.

PCBC: For the advantages mentioned for CBC to be achieved, i.e. the same probability of success as for ECB, the attacker can try to actively set the values in the two buffers in order to insert a value of h that suits his purposes. Apart from initializations he faces more obstacles than with CBC, which is illustrated by the following attempt to set the buffer values in PCBC: Send plaintext P1. Now we know the contents of the plaintext block buffer (P1), and we can find out the contents of the ciphertext block buffer by examining the output (C1). Thus h(P1, C1) can be calculated. Still, it's hard to set both values. A plaintext block P2 could be chosen to set P2 + h(P1, C1) to some arbitrary value and this way produce the ciphertext C2. But then P2 has to be modified, and the plaintext block buffer changes with it. The same is true for choosing the plaintext block P2 directly, because adding h(P1, C1) produces unwanted results in C2.

For a known function h it remains difficult to produce a certain output value from a small set of input values. Thus even a known and irregular distribution of plaintexts probably can't be used in favour of the attacker, provided initialization of the buffers is done in a wise manner; see "CBC for authentication".

OCFB: As opposed to PCBC, in OCFB the shift register value can be set by the attacker even without initialization, as long as:

• h is not a one-way function

• he has the chance to see the result Res of the encryption before the corresponding unit in the plaintext stream has to be handed over.

498

3 Answers for “Basics of Cryptology”

With Res the attacker has the value of the first input parameter of h and, unless h is a one-way function, is able to calculate an appropriate second input parameter for many output values of h. From this value he subtracts the desired plaintext and receives the plaintext unit that he has to insert to get his choice of the result of h. If this scheme is repeated b/r times (b the block length, r the length of an output unit), there will be a chosen value in the shift register. Now what was said under "CFB for concelation" applies.

r timesthere will be a chosen value in the shift register. Now what was said in ”CFBfor concelation” applies.

e) Coding will be done like this: append a bit of given value (say 1) to the plaintext. If the text length isn't a multiple of the block length, pad with bits of the other value (say 0).

1. is satisfied, because we pad up to the next multiple of the block length.

2. is satisfied, because all bits from the last 1 to the end of the text can be cut off to decode the text.

3. needs a definition of "minimal". Sensible definitions might be:

• Expansion over all plaintexts is minimal.

• Expansion over all plaintexts, weighted by their respective probability, is minimal. This is the same as above in case of the uniform distribution.

• Worst-case expansion (expansion of the worst-case plaintext) is minimal.

The given encoding satisfies the last definition, because every encoding has to add at least one bit to at least one text (in order to distinguish between "padded" and "not padded"), and because this encoding adds exactly one bit where possible. The encoding probably also satisfies the first definition (why don't you try to prove it?). For every given encoding you can find a distribution of plaintexts that makes its length expansion not minimal according to the second definition.

f) Shift register and feedback are substituted by a counter that starts at some initialization value and is increased for every output unit. In order to access the i-th unit, i − 1 is added to the initialization value, and off we go. This mode is called counter mode for obvious reasons. [Schn 96, page 205]
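A sketch of counter mode in Python (names are mine; block_encrypt is a toy stand-in built from a hash, not a real block cipher):

    import hashlib

    def block_encrypt(key: bytes, block: bytes) -> bytes:
        # toy placeholder for any secure block cipher
        return hashlib.sha256(key + block).digest()[:16]

    def ctr_unit(key: bytes, iv: int, i: int, unit: bytes) -> bytes:
        # unit i is processed independently: counter value = iv + i - 1
        keystream = block_encrypt(key, (iv + i - 1).to_bytes(16, 'big'))
        return bytes(a ^ b for a, b in zip(unit, keystream))

    key, iv = b'k' * 16, 42
    c = ctr_unit(key, iv, 3, b'third unit!')           # random access to unit 3
    assert ctr_unit(key, iv, 3, c) == b'third unit!'   # decryption = encryption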

3-18a Mirror attack: Let the main frame implement the same secure block cipher (amongst other helpful features). Also assume that independently chosen keys have been exchanged between the main frame and each user (more accurately: between the main frame and each "calculator"). Login will now be done according to the following protocol: send a random number, expect its encryption, decrypt, compare. This is done in both directions, i.e. the client authenticates itself to the main frame and the main frame authenticates itself to the client (for simplicity, the user ID, which also has to be submitted so that the main frame knows which key to use, is not included in the following picture).


Calculator (knows k)                Main frame (knows k)
calculator → main frame:  z
main frame → calculator:  k(z), z′
calculator → main frame:  k(z′)

If main frame and "calculator" behave completely symmetrically (as suggested by the picture and by our discussion so far), the protocol is insecure regardless of the strength of the block cipher. If the main frame supports more than one session at a time, the second random number it generated in the first session may be used as the first random number in a second session. If the main frame doesn't detect this (for example by enforcing one login per user at any one time) and responds correctly in session two, its correct response may be fed back to it in session one. The main frame then doesn't authenticate the "calculator" but its own "reflection". We're out of luck!

Minor mistake, major security hole. (The same goes for the "calculator", if it accepts playing the role of the second partner during authentication, so that the roles are symmetrical, too. This could be the case if clients can log into many main frames at a time.)

1st session (attacker, not knowing k, talks to the main frame):
attacker → main frame:  z
main frame → attacker:  k(z), z′

2nd session:
attacker → main frame:  z′
main frame → attacker:  k(z′), z″
(attacker repeats the attack for z″ or cancels this login)

1st session, continued:
attacker → main frame:  k(z′)

Correct protocols can be designed by following one of two schemes: either instances have some memory and accept only random numbers that satisfy certain conditions (which is kind of contrary to the idea of a random number), or they behave asymmetrically. The latter could be done by

• attaching their identity (e.g. 'G' for the main frame and 'T' for the "calculator", as in the following picture) to the random number;

T (knows k)                         G (knows k)
T → G:  z, T
G → T:  k(z, T), z′, G
T → G:  k(z′, G)


• having only one of the parties decrypt and the other encrypt;

• having two keys instead of one per pair of main frame and client: one for the main frame to encrypt and the client to decrypt, and vice versa.

Calculator (knows k, k′^(−1))       Main frame (knows k′, k^(−1))
calculator → main frame:  z
main frame → calculator:  k′(z), z′
calculator → main frame:  k(z′)

It would also be secure (though not very elegant) to react to a random number only after having received replies to all random numbers that were sent, as in ??. This way the main frame could not handle logins in parallel.

With precise enough clocks in place in both main frame and "calculator", copying from the terminal to the "calculator" can be omitted once by using the time instead of the random number (note the remark in ?? about making the weakest possible assumptions on random number generation).

This still does not stop attackers hijacking sessions after login. To account for this scenario, identity must be authenticated continuously.

(Note that we need a block cipher with strong security properties in this exercise, because it is used more in the manner of an authentication system than of a concelation system, and concelation systems don't generally make good authentication systems; see exercise ??. If time stamps are used, for example, an attacker shouldn't be able to derive from an encrypted message he has seen the encryptions of similar, forged messages.)

3-19 End-to-end encryption: No matter whether symmetric or asymmetric cryptography is used, every user needs at least one secret key. Thus, where a computer is used by multiple users, the main problem is where to keep secret keys out of reach of the other legitimate users of the system.

If the system is trusted enough that we can safely assume it is able to protect one user's data from other users, and if its hardware security shields off physical attacks, then the problem is solved inherently.

If the system is completely untrusted, then not only storing the key, but also encryption, decryption, testing signatures and most of all signing needs to take place on a different, personal computer. It should have at least a small keypad and display to communicate with its owner directly. Incarnations of this concept range from so-called "super smart cards" and "smart floppy disks" (computers resembling floppy disks that connect to others via the floppy disk drive, which don't need an extra reader and provide more space than a wallet-sized chip card with its standardized dimensions) up to notebooks.

As a compromise between effort and security, every user can enter a password of sufficient length (actually: entropy) to create a key for a symmetric concelation system. Using this key, the actual keys for end-to-end encryption are decrypted at the beginning of the session and encrypted again at its end. This procedure may of course be broken if the passwords don't contain enough entropy after all, or if the symmetric concelation system can be broken, or if other users can get their hands on the password by simulating a password prompt so that the user thinks he's communicating with the operating system itself. The latter can be prevented by always checking the computer for hardware manipulations and booting the authentic operating system before logging in. The two things that are needed for this approach are intrusion-detecting casings, see 2.1, and authentic booting, see [Cles 88, ClPf 91, ?].

3-20 End-to-end encryption without prior key exchange

a) To send a message to others, a set of one-time pads is generated. Those are sent out separately over separate routes (e.g. via a friend and, where possible, from separate places like home, office, public phones, cellular phones) and media (e.g. phone, fax, e-mail) and possibly at different times (provided the service supports this, i.e. it doesn't have real-time requirements). The sum of all those pads is used to en-/decrypt and/or authenticate the message. An attacker would have to be at many different places at many different times in order to eavesdrop on all the pads and read or manipulate the encrypted/authenticated message. This method trades off security in availability for security in confidentiality and integrity: only one pad (or, of course, the message itself) needs to be intercepted in order to prevent the message from being read.

The one-time pad and the information-theoretically secure authentication codes may serve as cryptographic systems, the former being as simple as can be and the latter certainly doable with reasonable effort without a computer.
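A sketch of the pad-splitting idea in Python (names are mine): the receiver must XOR all pads together before use, so any single intercepted pad reveals nothing about the message.

    import secrets
    from functools import reduce

    msg = b'meet at noon'
    pads = [secrets.token_bytes(len(msg)) for _ in range(3)]   # sent separately

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    effective_pad = reduce(xor, pads)          # the "sum" of all pads
    cipher = xor(msg, effective_pad)
    assert xor(cipher, effective_pad) == msg   # receiver, knowing all pads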

b) Once they get their crypto devices back, they can use asymmetric crypto systems. For this, the crypto devices, which have deleted all their secret keys, need to create new key pairs. In addition to a), Bob tells Alice his public key (or vice versa), for example via telephone; Alice's crypto device has of course also deleted Bob's public keys, because it couldn't tell whether only its secret keys were going to be extracted or whether the public keys were going to be manipulated, too. (These days telephone conversations are authenticated by the speaker's voice. As soon as speech synthesizers work sufficiently well, that will be history; we shouldn't rely on this authentication alone.) After that, the public keys are used to exchange one or more pads or to authenticate the message directly.

If Bob and Alice took precautions in case they lost their keys, by agreeing on a common secret they memorized (and thus always carry around with them), they can of course use that as a starting value for symmetric key generation. This applies most to b). Using a mnemonic passphrase both will be able to remember (a fairly long string with little entropy per character) and a cryptographic hash function (which may be publicly known), a shorter value can be generated that contains more entropy per character. This value can then be used as a starting value for key generation or even directly as a symmetric key.

3-21 Identification and authentication by an asymmetric system of secrecy

a) U creates a random number z and encrypts it so that only T can read it: cT(z). U sends cT(z) to the instance that's supposed to be T and expects T, depending on the protocol, either to send back z or to use z in some other cryptographic operation and send back the result. If that happens, that instance is in fact T (with a certain probability that is determined by the length of z, and given that cT is authentic, that the asymmetric concelation system is secure, and that z really was random). Now U has authenticated T. But beware: this naive approach is prone to the attack laid out in ??. If some attacker A wants to be authenticated as T, he just passes cT(z) on to T and relays the reply to U. We see that this protocol doesn't authenticate T as a direct communication partner, but only makes sure that T is involved in the communication somewhere and, more specifically, that T receives z. So if U inserts its message to T into z, i.e. what is sent is cT(message), the protocol nevertheless does its job: when z is returned, U knows that the message has reached T.

(Warning: if you came up with an essentially different solution, check whether you made T use his private key dT on something that had not been encrypted using cT. This kind of operation — using dT first, for signing, then cT, for testing the signature — is only possible in certain concelation systems, like RSA, but not in every concelation system. Where this is possible, the concelation system can be used as an authentication system as well, and the solution of the exercise is trivial.)

b) The asymmetric concelation system needs to be secure against adaptive chosen-ciphertext attacks. Otherwise an entity's responses during authentication may be used to break its concelation system.

c) Construction of a symmetric authentication protocol from an asymmetric concelation system:

1. T sends message Mi to U.

2. U generates the random number z, encrypts Mi and z, and sends cT(Mi, z) to T.

3. T decrypts using dT and checks whether Mi from step 1 is identical to the one from step 2. If so, T sends z to U — otherwise T never returns z.

If U receives the same value for z that he created, Mi is an authentic message from T.

This authentication system is only symmetric, because z is useless for everyone else and therefore doesn't prove anything: it wasn't created by T, but by U, so U could just as well send z to himself.


The disadvantage this protocol has compared with other authentication protocols is that it needs three messages instead of one. It therefore consumes more time and network resources.

Tabular representation of a symmetric authentication system on top of an asymmetric concelation system. Given: U knows cT, T's public key.

T → Mi → U
T ← cT(Mi, z) ← U
T → z → U

Result: if, in this authentication system, every effectively possible modification of cT(Mi, z) changes z — which in general should be the case with a block cipher — then U knows after the third message that Mi was really sent by T.

Correspondingly, the tabular representation of a symmetric authentication system realized by a symmetric authentication system could look like this. Given: U and T both know k, k being a key for a symmetric authentication system.

T → Mi, k(Mi) → U

In the first approach T produces Mi and U only supplies z. Also, T doesn't react to U's reply to his message if Mi is not part of it. Thus, the asymmetric concelation system doesn't have to withstand an adaptive active attack in its nastiest form.
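To make the message flow concrete, here is a toy sketch of the three-message protocol from c), using the small RSA parameters of exercise 3-24 as the asymmetric concelation system and, for simplicity, encrypting Mi and z separately; real use needs large moduli and an indeterministic, chosen-ciphertext-secure system.

    import secrets

    n, t, s = 33, 3, 7                  # toy RSA: public key cT = (n, 3), dT = 7

    def cT(x): return pow(x, t, n)      # encrypt with T's public key
    def dT(y): return pow(y, s, n)      # decrypt with T's private key

    Mi = 5                              # step 1: T sends Mi in the clear

    z = secrets.randbelow(n)            # step 2: U picks z, returns cT(Mi), cT(z)
    c1, c2 = cT(Mi), cT(z)

    # Step 3: T decrypts, compares the Mi from step 1 with the one from
    # step 2, and reveals z only if they match.
    assert dT(c1) == Mi
    reply = dT(c2)

    assert reply == z                   # U concludes: Mi really came from T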

3-22 Secrecy by symmetric authentication systems

Instead of every single bit of the message, only an appropriate MAC is sent. The recipient then figures out which bit corresponds to each MAC.


Sender (knows kAB):
Let the message M be b1, . . . , bn, bi ∈ {0, 1}.
Calculates MAC1 := code(kAB, b1), . . . , MACn := code(kAB, bn).
Sends MAC1, . . . , MACn.

Recipient (knows kAB):
Checks whether MAC1 = code(kAB, 1) or MAC1 = code(kAB, 0);
thus receives the matching value for b1.
. . .
Checks whether MACn = code(kAB, 1) or MACn = code(kAB, 0);
thus receives the matching value for bn.

The MACs need to be long enough so the recipient can check which bit matches the MAC. The small authentication code in figure 3-15, for example, is insufficient: with a key value of 00, both H and T are authenticated with 0, so the recipient is unable to decide whether H or T is supposed to be transmitted. The same holds for a key value of 11 and a MAC of 1. Making the MACs long enough renders the probability of this happening negligible.

The message is concealed perfectly by the above protocol, since no one but the sender and recipient can test which bits match the MACs. If someone could guess those bits with a probability greater than 0.5, this would be an approach for breaking the authentication system.

With long enough MACs the message is also actually authenticated: after changing a MAC, it will, with overwhelming probability, either match neither 0 nor 1, or it will match the same value as before. Otherwise the authentication system could be broken with non-negligible probability: the attacker could change the MAC and flip the corresponding message bit that is sent in the regular authentication system.

Instead of authenticating a single bit, it can be more efficient to authenticate more than one bit using a single MAC. This saves bandwidth (more bits per MAC), but with a given number l of authenticated bits per MAC it also forces the recipient to do a maximum of 2^l and an average of 2^(l-1) test steps to decrypt. If the l bits are sent separately, then 2 × l steps are necessary for a complete search, and only l steps if 0 is assumed whenever the test for 1 fails. Comparing the expense of sending l bits together and separately, we find that for l = 2 sending the bits together has only advantages, because 2^l = l × 2, but the expense grows very fast for greater l. l = 8 might be a sensible compromise for many applications.
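A sketch of the bit-by-bit protocol in Python, with HMAC-SHA-256 standing in for the authentication code; mixing the bit position into each MAC is an addition to the scheme above, so that equal bits don't produce equal, and thus linkable, MACs.

    import hashlib, hmac, secrets

    k_AB = secrets.token_bytes(16)                  # shared key

    def code(k: bytes, i: int, bit: int) -> bytes:
        return hmac.new(k, bytes([i, bit]), hashlib.sha256).digest()

    bits = [1, 0, 1, 1]

    # Sender: transmit only the MACs, never the bits themselves.
    macs = [code(k_AB, i, b) for i, b in enumerate(bits)]

    # Recipient: test for each MAC which of the two possible bits fits.
    recovered = []
    for i, mac in enumerate(macs):
        if hmac.compare_digest(mac, code(k_AB, i, 0)):
            recovered.append(0)
        elif hmac.compare_digest(mac, code(k_AB, i, 1)):
            recovered.append(1)
        else:
            raise ValueError("MAC fits neither bit: message manipulated")

    assert recovered == bits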

3-23 Diffie-Hellman key exchange

a) Basic idea:

We're using the fact that exponentiation is commutative, i.e. it doesn't matter in what order exponentiation is done.

This, put into a formula:

(g^x)^y = g^(xy) = g^(yx) = (g^y)^x

And, graphically:

[Diagram: starting from g, raising to the power x and then to the power y — or the other way round — leads to the same value g^(xy).]
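In Python the commutativity is immediately visible; the prime p = 2^127 - 1 and base g = 3 are demo values for illustration only, not a parameter recommendation.

    import secrets

    p, g = 2**127 - 1, 3                   # demo parameters only

    x = secrets.randbelow(p - 2) + 1       # Alice's secret exponent
    y = secrets.randbelow(p - 2) + 1       # Bob's secret exponent

    gx, gy = pow(g, x, p), pow(g, y, p)    # exchanged in the clear

    # Both sides compute the same key, in either order of exponentiation:
    assert pow(gy, x, p) == pow(gx, y, p)  # (g^y)^x = g^(xy) = (g^x)^y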

b) Anonymity:

Solution on the organisational level: there's no reason why public keys should have to be linked to real identities — they could belong to pseudonyms just as well.

Solution on the protocol level: instead of using his commonly known public key, the sender chooses a new key pair for every encrypted message that is sent out. He uses the new private key to calculate a common secret key for communication with the recipient and encrypts accordingly. After that he sends out the message along with the new public key [5] to his partner. On receipt, the public key can be used together with the recipient's private key to get the common secret key and decrypt the message. (If you've ever heard of asymmetric crypto using the ElGamal concelation system: that's a special case of what we just discussed. ElGamal uses modular multiplication as the symmetric encryption routine in the hybrid concelation system laid out here. [?][Schn 96, p. 478])

[5] That's why this modification of Diffie-Hellman key exchange can't be used for steganography with public keys, see 4.1.2.


Now, what if the sender wants an anonymous reply? Simple: the common secret key can just be re-used.

So asymmetric concelation systems and Diffie-Hellman key exchange are on par with respect to anonymity, too.
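A sketch of this ephemeral-key variant in Python; here a hash-derived XOR pad replaces ElGamal's modular multiplication as the symmetric step, and all names and parameters are illustrative.

    import hashlib, secrets

    p, g = 2**127 - 1, 3                       # demo parameters only

    y = secrets.randbelow(p - 2) + 1           # recipient's long-term key pair
    gy = pow(g, y, p)                          # commonly known public key

    def send(plaintext: bytes):
        x = secrets.randbelow(p - 2) + 1       # fresh key pair, this message only
        pad = hashlib.sha256(str(pow(gy, x, p)).encode()).digest()
        body = bytes(a ^ b for a, b in zip(plaintext, pad))   # demo: <= 32 bytes
        return pow(g, x, p), body              # new public key travels along

    def receive(gx: int, body: bytes) -> bytes:
        pad = hashlib.sha256(str(pow(gx, y, p)).encode()).digest()
        return bytes(a ^ b for a, b in zip(body, pad))

    assert receive(*send(b"anonymous hello")) == b"anonymous hello"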

c) Individual choice of parameters

Every participant chooses their own p and g and publishes them as part of their public key.

Everyone can exchange secret keys with everybody else by using their p and g and proceeding as laid out in 3.9.1. After that, the new, pair-specific public keys can be told to the other participants.

Because of that last step, this modification of the Diffie-Hellman protocol is not adequate for steganography with public keys, see 4.1.2. This modification is used in PGP 5 and above, unless "Faster Key Generation" is selected. It is de facto the protocol variant from exercise part b), except that every instance also chooses and publishes their own p and g.

Even though public keys are chosen per pair of participants this way, it's important to make sure public keys are authentic. This can be ensured through certificates (see 3.1.1.2) and testing of certificates by the recipient. Wherever the sender can create signatures the recipient can validate, it's best to have the sender sign his pair-specific public keys himself (see also 9.2).

d) DSA (DSS) as a basis

On this basis Diffie-Hellman key exchange works flawlessly, because all parameters provide great enough security. If p and g are not the same for all participants, one proceeds according to b). DSA (DSS) can — contrary to its original purpose — be used as a basis for cryptographic concelation; if p and g are the same for all participants, even for steganography with public keys, see 4.1.2.

It's a lot harder to argue whether Diffie-Hellman key exchange carried out this way is as secure as the method in 3.9.1 and b), because there's a subtle difference in the distribution of g and a great difference in the length of x. To my knowledge, though, this procedure can be regarded as secure.

3-24 Blindly achieved signatures with RSA

Let 2 be the text (plus the result from the collision-resistant hash function, see 3.6.3.5) that is to be signed blindly, and 17 the masking factor. Thus we get for the masked text

2 × 17^t mod 33 = 2 × 17^3 mod 33 = 2 × 29 mod 33 = 58 mod 33 = 25

(When making up such an example, of course, you go the other way around: 25 is given as the to-be-signed message from exercise 3-16 b). Therefore we're looking for two numbers whose product (mod 33) is 25. After choosing 2 and 29, the third root of 29 (mod 33) has to be calculated. We're using the inverse exponent:

29^7 ≡ 29^2 × 29^2 × 29^2 × 29 mod 33
    ≡ (−4)^2 × (−4)^2 × (−4)^2 × (−4) mod 33
    ≡ 16 × 16 × 16 × (−4) mod 33
    ≡ 256 × (−64) mod 33
    ≡ 25 × 2 mod 33
    ≡ 17 mod 33

From exercise 3-16 b) we now take the masked text together with its blind signature: 25, 31

(since 31^3 ≡ (−2)^3 ≡ −8 ≡ 25 mod 33)

The recipient now divides the second component, the blind signature, by the masking factor 17.

Euclidean algorithm: calculate 17^(−1) mod 33:

33 = 1 × 17 + 16
17 = 1 × 16 + 1

And back again:

1 = 17 − 1 × 16 = 17 − 1 × (33 − 1 × 17) = 2 × 17 − 33

Therefore 17^(−1) mod 33 = 2.

So the signature is:

31 × 17^(−1) mod 33 = 31 × 2 mod 33 = 62 mod 33 = 29

Double-checking our result for correctness — this is how the recipient of the blind signature can test whether the signer is trying to shortchange him:

signature^t ≡ 29^3 ≡ (−4)^3 ≡ −64 ≡ 2 mod 33

2 =? text

With relief we note that this is correct and thus our calculation probably is, too.
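The whole calculation can be replayed in a few lines of Python (toy parameters only; pow(mask, -1, n) computes the modular inverse that the extended Euclidean algorithm produced above):

    n, t, s = 33, 3, 7           # toy RSA: public exponent t, secret exponent s
    text, mask = 2, 17

    masked = (text * pow(mask, t, n)) % n            # 2 * 17^3 = 25 (mod 33)
    blind_sig = pow(masked, s, n)                    # 25^7 = 31 (mod 33)
    signature = (blind_sig * pow(mask, -1, n)) % n   # 31 * 2 = 29 (mod 33)

    assert (masked, blind_sig, signature) == (25, 31, 29)
    assert pow(signature, t, n) == text   # 29^3 = 2 (mod 33): signer was honest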

3-25 Threshold scheme

The smallest adequate prime number p is 17, because the numbers between 13 and 17 are not prime.

The polynomial is q(x) = 2x^2 + 10x + 13 mod 17.


Substitution yields (i, q(i)):

q(1) ≡ 2 + 10 + 13 ≡ 25 mod 17 = 8
q(2) ≡ 8 + 20 + 13 ≡ 41 mod 17 = 7
q(3) ≡ 18 + 30 + 13 ≡ 61 mod 17 = 10
q(4) ≡ 32 + 40 + 13 ≡ 85 mod 17 = 0
q(5) ≡ 50 + 50 + 13 ≡ 113 mod 17 = 11

The formula from 3.9.6, applied to the shares (1, 8), (3, 10) and (5, 11), gives us:

q(x) = 8 × (x−3)/(1−3) × (x−5)/(1−5) + 10 × (x−1)/(3−1) × (x−5)/(3−5) + 11 × (x−1)/(5−1) × (x−3)/(5−3) mod 17

     = 8/8 × (x−3) × (x−5) + 10/(−4) × (x−1) × (x−5) + 11/8 × (x−1) × (x−3) mod 17

     = (x−3) × (x−5) + 10 × 4 × (x−1) × (x−5) + 11 × 15 × (x−1) × (x−3) mod 17
       (since (−4)^(−1) ≡ 4 and 8^(−1) ≡ 15 mod 17)

     = (x−3) × (x−5) + 6 × (x−1) × (x−5) + 12 × (x−1) × (x−3) mod 17

     = 2x^2 + 10x + 13

q(0) of course evaluates to the constant term, which in this case is 13.

Alternatively, it might be more convenient to substitute 0 for x from the start, instead of working out the whole polynomial q(x):

8 × (−3)/(1−3) × (−5)/(1−5) + 10 × (−1)/(3−1) × (−5)/(3−5) + 11 × (−1)/(5−1) × (−3)/(5−3) mod 17
= 120/8 + 50/(−4) + 33/8 mod 17
= 15 + 50 × 4 + 33 × 15 mod 17
= 15 + (−1) × 4 + (−1) × 15 mod 17
= 13
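The same reconstruction, done by Lagrange interpolation at x = 0 in a short Python sketch, using the shares (1, 8), (3, 10), (5, 11) over GF(17) as above:

    p = 17
    shares = [(1, 8), (3, 10), (5, 11)]

    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % p          # factor (0 - xj)
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p

    assert secret == 13                        # the constant term of q(x)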

3-26 Provably secure usage of several systems of secrecy?

Having n concelation systems, we divide our plaintext into n pieces using an information-theoretically secure threshold scheme, in such a way that all n pieces are necessary to reconstruct the plaintext. We then encrypt every single piece using a different concelation system.

We get additional expense from the application of the threshold scheme, as well as from having to transmit all n pieces to the recipient (instead of just the result of the product cipher). The overall expense is about n times as big.

Neither of the two combinations beats the other with regard to security:

• It is true that it's impossible to prove anything in a product cipher about the effectiveness of ciphers 2 to n. But since the attacker never gets to know the intermediate results, a product cipher generally is a lot more secure than the most secure cipher in the chain, and even more secure than the sum of all used concelation systems. What this says is that the effort necessary to break the product cipher is substantially higher than the sum of the efforts needed to break each individual cipher.

• In order to be able to prove that this combination is as hard to break as the most secure cipher used (and even as the sum of all concelation systems), we need to make use of the strong assumptions that were given in the exercise. In general they should hold. But just as in the discussion of the security of the product cipher, we're now assuming things that are only true "in general". That's why, from my point of view, the product cipher outperforms this combination not only in terms of efficiency, but also in terms of security.

A very simple implementation of an information-theoretically secure threshold scheme in which all n parts are needed in order to reconstruct the secret is applying the Vernam cipher n−1 times: the n−1 keys together with the ciphertext make up the n necessary parts.
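A sketch of this Vernam-based n-out-of-n splitting (Python; function and variable names are illustrative):

    import secrets

    def split(plaintext: bytes, n: int) -> list:
        # n-1 random keys; the text, XORed with all of them, is the n-th piece
        keys = [secrets.token_bytes(len(plaintext)) for _ in range(n - 1)]
        cipher = plaintext
        for k in keys:
            cipher = bytes(a ^ b for a, b in zip(cipher, k))
        return keys + [cipher]

    def combine(pieces: list) -> bytes:
        out = pieces[0]
        for piece in pieces[1:]:
            out = bytes(a ^ b for a, b in zip(out, piece))
        return out

    pieces = split(b"top secret", 4)   # each piece gets its own concelation system
    assert combine(pieces) == b"top secret"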

4 Answers for “Basics of Steganography”

4-1 Terms

Steganology integrates steganography (the knowledge of the "good people"'s algorithms) and steganalysis (the knowledge of the attacker's algorithms against a steganographic system). It is crucial to know about steganalysis to be able to estimate the security of steganographic systems.

Steganographic systems are either concelation systems (security aim: confidentiality) or authentication systems (security aim: embedding protected author information). The next criterion for differentiation is whether only author information is embedded (watermarking) or also user-related information (fingerprinting).


The advantage of introducing the term "stegosystem" as the root of the right concept tree is that we then have a short, popular term at our disposal. The disadvantage, however, lies in the loss of the differentiation between concelation and authentication. So I use this term for steganographic concelation systems only — when using it in a broader sense I'll always denote that with quotation marks. My concept tree looks like this:

steganology
├─ steganography
└─ steganalysis

steganographic system  ["stegosystem"]
├─ steganographic concelation system  [stegosystem]
└─ steganographic authentication system
   ├─ watermarking
   └─ fingerprinting

4-2 Properties of in- and output of cryptographic and steganographic systems

a) Below you'll find the figures copied from 3.1.1.1, which are precisely the block diagrams for cryptographic systems.

[Block diagram: key generation derives the secret key k from a random number; encryption turns plaintext x into ciphertext k(x); decryption recovers x = k^(−1)(k(x)); key generation, the key and both end points lie inside the secret zone.]

Figure 3.2: symmetric cryptographic concelation system.


[Block diagram: key generation derives from a random number the public encryption key c and the secret decryption key d; encryption turns plaintext x, using a second random number, into ciphertext c(x); decryption recovers x = d(c(x)); only d lies inside the secret zone.]

Figure 3.4: asymmetric cryptographic concelation system.

When, for each diagram, we consider all three algorithms as one system, the difference in in- and output between symmetric and asymmetric concelation systems is only whether or not encryption needs an extra random number. Another difference lies in the requirements on the channel for key distribution. When looking at the interfaces of the three algorithms separately, the difference between symmetric and asymmetric concelation is whether the same key is used for en- and decryption (so it has to be kept secret), or whether the keys for en- and decryption differ (so the encryption key can be public).

A cryptographic system has to (and may) assume that random numbers it gets as input are really random — where necessary, random number generation should be part of the system that uses it. A good cryptographic system shouldn't make any assumptions about the plaintext — it's input from outside the system (and therefore a given). Therefore, when the key is published, encryption has to be carried out indeterministically, so "simple" plaintexts can't be guessed, encrypted and compared to the ciphertext. On the other hand, there are no restrictions on the resulting ciphertext — other than that it shouldn't be excessively long, maybe.

b) Below you'll find the figure copied from 4.1.1.1, which is precisely the block diagram for steganographic concelation according to the embedding model.


[Block diagram: key generation derives the secret key k from a random number; embed hides content x in wrap H, producing the stegotext k(H, x) with modified wrap H*; extract recovers content x from the stegotext using k.]

Figure 4.4: steganographic concelation system according to the embedding model.

There's only one block diagram, because there are no known asymmetric steganographic concelation systems.

Ideally a steganographic concelation system wouldn't have to make any assumptions about the wrap, because in this model it is input (and therefore a given). (If we created a wrap inside our steganographic system — synthetic steganographic concelation — we'd have a plausibility issue in our application context.) Unfortunately no stegosystem will be able to work with arbitrary wraps, so certain assumptions have to be made, and every stegosystem has to rely on them being adhered to. (The stegosystem should check adherence to those assumptions to a certain extent and refuse to embed the content into an inappropriate wrap.)
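As an illustration only (not from the script): a hypothetical least-significant-bit embedding in Python, including the check-and-refuse behaviour just described.

    def embed(wrap: list, content_bits: list) -> list:
        # refuse inappropriate wraps instead of embedding anyway
        if len(content_bits) > len(wrap):
            raise ValueError("wrap too small: rejected")
        stego = wrap.copy()
        for i, bit in enumerate(content_bits):
            stego[i] = (stego[i] & ~1) | bit   # overwrite least significant bit
        return stego

    def extract(stego: list, n: int) -> list:
        return [sample & 1 for sample in stego[:n]]

    wrap = [204, 55, 130, 97, 18, 241]         # e.g. image samples
    bits = [1, 0, 1, 1]
    assert extract(embed(wrap, bits), len(bits)) == bits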

For all other pieces of input the same holds as in cryptography. Additionally, the output has to be exactly as plausible as the input wrap.

c) In cryptographic concelation systems there are no assumptions the implementation is unable to assure. This is different in steganographic concelation (according to the embedding model — not according to the synthetic model): assumptions about the wrap can't be assured by the implementation.

4-3 Is encryption superfluous when good steganography is used?

If it is impossible for an attacker to tell whether a secret message is embedded, he can't possibly understand it.

I'd still always use encryption in practice. The first reason is to make sure the secret message at least stays confidential, even if the steganographic concelation system isn't as secure as was hoped for. The second — and at least equally important — reason is to make the content indistinguishable from white noise. This makes it possible for the steganographic concelation system — more accurately: for its embedding function — to make strong, yet realistic assumptions about the input content.


4-4 Examples for steganographic systems

1. From my collection of about 90 recent holiday pictures I choose the one that by coincidence contains the four bits I'd like to send. Security would be excellent — provided there is such a picture, which is very likely.

2. I choose one picture and scan it over and over again until the result contains the four bits I want to send. Security would be excellent.

3. In order to send 40 bits I choose 10 pictures as I did in 1. (which will still work out with fair probability). Every once in a while I'll have to send the same picture twice. This will only provide reasonable security if the sender is known to be a bit absent-minded at times. So this system would be primarily for notoriously absent-minded professors.

4. In order to send 40 bits I choose 10 of my favourite pictures and scan them as in 2. Security would be good, although the time the scanning would take could be a concern.

a) Modelling steganographic systems

Making up models is more an art than a deterministic process — there are no right (or wrong) models, only models that are more (or less) appropriate for a given purpose than others. A smart person once said that you're not done constructing a machine when there's nothing left you could add, but when there's nothing left you could take out. This certainly is a good creed to work by in the art of model building.

Applied to the context of steganographic systems, this means that Anja, Berta and Caecilie all make good points about certain steganographic systems. If you can be sure, for example, that wrap material never repeats bit by bit, you don't need a database, etc. So, since analysis, database and rating are not part of every steganographic system, and since they can also be thought of as part of the implementation of the model given by figure ?? (see the following figure), I tend to agree with Doerthe . . . to a certain extent. This is because Anja's idea, that at least in some steganographic systems there should be some error output that rejects inappropriate wrap material, is actually a new one. There could be a discussion in a gentlemen's round some time about whether this should be included in the model explicitly, or whether it's implied because every operation may fail and there needs to be error output that the embedding system has to react to. Anyway: our ladies' round's decision is that models of steganographic systems should include error output.


[Block diagram: the embedding model of figure 4.4, extended by an analysis component (with database and evaluation) that inspects the wrap before embedding and may answer with a rejection; its interface is the same as embed's.]

4-5 Steganographic secrecy with public keys?

To start with, I wouldn't call it steganographic secrecy with public keys, because embedding and extraction are always done using the same information. I'd say this is bootstrapping a keyed steganographic system using a keyless steganographic system. It can be useful, for example, where keyless steganography provides less net bandwidth, requires more computing power, or is less secure.

4-6 Enhancement of security by using several steganographic systems

4-7 Crypto– and steganographic secrecy combined?

4-8 Crypto– and steganographic authentication in which order?

4-9 Stego–capacity of compressed and non–compressed signals

5 Answers for chapter "Security in communication networks"

5-0 Quantification of unobservability, anonymity and unchainability

5-1 End–to–End or connection encryption


5-2 First encrypting and then coding fault-tolerant or the other way round?

5-3 Treatment of address– and error–recognizing fields in case of encryption

5-4 Encryption in case of connection-oriented and connection-less communication services

5-4a Requesting and Overlaying

5-4b Comparison of “Distribution” and “Requesting and Overlaying”

5-5 Overlayed Sending: an example

5-6 Overlayed sending: Appointing a fitting key combination for an alternative message combination

5-7 Several–access–methods for additive channels, e.g. overlayed sending

5-8 Key-, overlaying-, and transmission-topology in case of overlayed sending

5-9 Coordination of overlaying- and transmission-alphabets in case of overlayed sending

5-10 Comparison of "Unobservability of adjacent wires and stations as well as digital signal generation" and "overlayed sending"

5-11 Comparison of "overlayed sending" and "MIXes that change the encoding"

5-12 Order of basic functions of MIXes

5-13 Does the condition suffice that output messages of MIXes must be of the same length?

5-14 No random numbers → MIX’s are bridgeable

5-15 Individual choice of a symmetric system of secrecy by MIX’s?

5-15a Less memory effort at the generator of anonymous return–addresses

5-16 Why changing the encoding without changing the length of a message

5-17 Minimal expanding of length in a scheme for changing the encoding without changing the length of a message for MIXes

5-18 Breaking the direct RSA–implementation of MIX’s

5-19 Wherefore MIX–channels?

5-19a Free order of MIXes in case of a fixed number of used MIXes?


5-20 How to ensure that MIXes receive messages from enough different senders?

5-21 Coordination protocol for MIXes: Wherefore and where?

5-22 Limitation of responsibility areas – Functionality of network terminals

5-23 Several sided secure Web access

5-24 Firewalls

6 Answers for "Value exchange and payment systems"

6-1 Electronic Banking – comfort version

6-2 Personal relation and chainability of pseudonyms

6-3 Self–authentication vs. foreign–authentication

6-4 Discussion about security properties of digital payment systems

6-5 How far do concrete digital payment systems fulfill your security aims?

9 Answers for “Regulation of security technologies”

9-1 Digital signatures at “Symmetric authentication facilitates secrecy”

9-2 “Symmetric authentication facilitates secrecy” vs. Steganography

9-3 Crypto-regulation by prohibition of freely programmable computers?


C Index
