
CHARISMA – D4.3 Page 1 of 124

Converged Heterogeneous Advanced 5G Cloud-RAN Architecture for Intelligent and Secure Media Access

Project no. 671704

Research and Innovation Action

Co-funded by the Horizon 2020 Framework Programme of the European Union

Call identifier: H2020-ICT-2014-1

Topic: ICT-14-2014 - Advanced 5G Network Infrastructure for the Future Internet

Start date of project: July 1st, 2015

Deliverable D4.3

Demonstrators evaluation and validation

Due date: 31/12/2017

Submission date: 26/01/2018

Deliverable leader: Pavel Kralj (Telekom Slovenije)

Dissemination Level

PU: Public

PP: Restricted to other programme participants (including the Commission Services)

RE: Restricted to a group specified by the consortium (including the Commission Services)

CO: Confidential, only for members of the consortium (including the Commission Services)


Executive Summary

This deliverable D4.3 is a technical report of the development work carried out by all partners of CHARISMA WP4 at the end of the project, as the final 5G field-trial demonstrators were designed, set up and finally validated. It summarizes the work done during the final half-year of the project, building upon all the work done in the previous tasks T4.1 and T4.2 and reported in the earlier deliverables D4.1 [4] and D4.2 [5], and presents the results attained in validating and evaluating the deployed field trials and lab demonstrators designed during the project. It provides the final results analysis of the test-beds and field trials, and the conclusions with respect to validation against the relevant 5G-PPP KPIs [6].

The document first summarizes the setup of the two field trials at the Telekom Slovenije and APFutura premises, and of the demonstrator at the NCSRD premises; detailed descriptions of the field trials and demonstrator were already provided in deliverable D4.2 [5]. This document D4.3 also describes the differences between the planned and final hardware used, as well as explaining the software employed and the test-bed integration.

In all three field-trial and laboratory environments, the demonstrators focus on the three main features of the CHARISMA project:

• Low latency,

• Open Access (slicing), and

• Security aspects.

The various sub-system components developed in CHARISMA have been integrated in the field-trial and demonstration environments, and the results of the required integration testing are also reported in this document.

This deliverable provides the validation of the specified CHARISMA use cases and of the overall secure, converged and virtualised distributed CHARISMA 5G architecture, including the virtualised security aspects and the network management system solution developed in the project. Results of the validation tests are reported and analysed by evaluating them against the KPIs defined in deliverable D1.2 [1], which are based on the quantifiable 5G-PPP KPIs [6].

The final goal of this document is to provide analysis and validation of the results emerging from the CHARISMA 5G field trials and use-case scenarios. Together with the previous deliverables D4.1 [4] and D4.2 [5], it presents the work done in preparing and executing the 5G field trials that showcase the features and achievements of the CHARISMA project.


List of Contributors

Participant Short Name Contributor

Fundacio privada I2CAT I2CAT Shuaib Siddiqui, Eduard Escalona, Alber Viñes, Javier Fernandez Hidalgo

Heinrich Hertz Institute HHI Kai Habel, Matthias Koepp

National Center for Scientific Research “Demokritos” NCSRD Eleni Trouva, Yanos Angelopoulos

APFutura Internacional Soluciones APFUTURA Enrique García

InnoRoute GmbH INNO Marian Ulbricht, Andreas Foglar

JCP-Connect JCP-C Yaning Liu, Omar Fall, Jean-Charles Point

Cosmote COSMOTE Costas Filis

Intracom Telecom ICOM Konstantinos Katsaros, Vasileios Glykatzis, Konstantinos Chartsias, Dimitrios Kritharidis

Telekom Slovenije TS Pavel Kralj

Ethernity Networks ETH Eugene Zetserov

Altice Labs ALTICE Victor Marques

University of Essex USSEX Mike Parker

Ericsson Spain ERICSSON Carolina Canales


Table of Contents

1 Introduction .....................................................................................................................14

2 Final Pilot Sites Setup ........................................................................................................15

2.1 Telekom Slovenije Field Trial ........................................................................................................... 15

2.1.1 Physical Level Architecture ...................................................................................................... 15

2.1.2 Logical Level Architecture (Software) ...................................................................................... 17

2.2 NCSRD demonstrator....................................................................................................................... 18

2.3 APFutura field trial ........................................................................................................................... 21

2.3.1 Physical Level Architecture ...................................................................................................... 21

2.3.2 Logical Level Architecture ........................................................................................................ 22

3 Demonstrations ................................................................................................................23

3.1 Telekom Slovenije Field Trial ........................................................................................................... 23

3.1.1 Security Demonstration – Smart Grid and Meters .................................................................. 23

3.1.1.1 Scenario motivation ............................................................................................................. 23

3.1.1.2 Description ........................................................................................................................... 23

3.1.1.3 Scenario ............................................................................................................................... 24

3.1.1.4 Test setup ............................................................................................................................ 24

3.1.2 Low Latency Demonstration .................................................................................................... 25

3.2 NCSRD Demonstrator ...................................................................................................................... 27

3.2.1 Security Demonstration ........................................................................................................... 27

3.2.1.1 Scenario motivation ............................................................................................................. 27

3.2.1.2 Overview and advances to Y1 demonstration ..................................................................... 27

3.2.1.3 Scenario description ............................................................................................................ 27

3.2.2 Multi-Tenancy Demonstration ................................................................................................ 37

3.2.2.1 Overview .............................................................................................................................. 37

3.2.2.2 Phase-1: Single-tenant service deployment (slice creation) ............................................... 38

3.2.2.3 Phase-2: Multi-tenant service deployment ......................................................................... 38

3.2.2.4 Phase-3: Cross-tenant communication ............................................................................... 39

3.2.2.5 Wireless Backhaul network slicing and traffic policing ....................................................... 41

3.2.2.6 Intelligent traffic handling for vCaching .............................................................................. 46

3.3 APFutura Field Trial ......................................................................................................................... 47

3.3.1 Low Latency Demonstration .................................................................................................... 47

3.3.2 Multi-Tenancy Demonstration ................................................................................................ 48

3.3.3 BUS Use Case Demonstration .................................................................................................. 49

3.4 Fronthaul testing ............................................................................................................................. 50

3.4.1 Introduction ............................................................................................................................. 50

3.4.2 Frequency synchronization ...................................................................................................... 51


3.4.3 Latency measurements ........................................................................................................... 53

3.4.4 Application testing ................................................................................................................... 54

3.4.5 Outlook .................................................................................................................................... 57

4 Integration Testing............................................................................................................58

4.1 Telekom Slovenije Field Trial ........................................................................................................... 58

4.1.1 T1: Integration for security service deployment ..................................................................... 58

4.1.1.1 T1.1 - OAM to deploy vIDS in CAL0 ...................................................................................... 58

4.1.1.2 T1.2 - SmartNIC configuration ............................................................................................. 58

4.1.1.3 T1.3 – OpenWRT configuration ........................................................................................... 59

4.1.2 T2: Integration for security policy definition ........................................................................... 59

4.1.2.1 T2.1: Security policy definition (CHARISMA GUI to SPM) .................................................... 59

4.1.2.2 T2.2: M&A alert definition (CHARISMA GUI to M&A) ......................................................... 60

4.1.3 T3: Integration for attack identification and mitigation (T3.1 and T3.2)................................. 60

4.1.3.1 T3.1 - Attack identification .................................................................................................. 60

4.1.3.2 T3.2: Attack mitigation ........................................................................................................ 61

4.1.4 T4 Integration for low latency demonstration ........................................................................ 62

4.1.4.1 T4.1 TrustNode low latency integration .............................................................................. 62

4.1.4.2 T4.2: OFDM PON .................................................................................................................. 65

4.2 NCSRD Demonstrator ...................................................................................................................... 66

4.2.1 T1.1 – Slice creation ................................................................................................................. 66

4.2.1.1 OAM to OpenWRT router CAL0 ........................................................................................... 66

4.2.1.2 OAM to SDN switch CAL1 .................................................................................................... 66

4.2.1.3 OAM to OVS Integration Bridge in compute node CAL1 ..................................................... 67

4.2.1.4 OAM to BH ........................................................................................................................... 67

4.2.1.5 Other devices ....................................................................................................................... 68

4.2.2 T1.2 - Security service deployment ......................................................................................... 68

4.2.2.1 IDS VNF deployment ............................................................................................................ 68

4.2.2.2 FW VNF deployment ............................................................................................................ 69

4.2.3 T2: Integration for security policy definition (T2.1, T2.2 and T2.3) ......................................... 69

4.2.3.1 T2.1: Security policy definition (CHARISMA GUI to SPM) .................................................... 69

4.2.3.2 T2.2: MA alert definition (CHARISMA GUI to MA)............................................................... 69

4.2.3.3 T2.3: IDS alert definition (IDS EMS to IDS) ........................................................................... 70

4.2.3.4 SPM – Orchestrator (Security Manager) interface .............................................................. 70

4.2.3.5 SPM – OAM (CHARISMA GUI) .............................................................................................. 70

4.2.4 T3: Integration for attack identification and mitigation (T3.1 and T3.2)................................. 72

4.2.4.1 T3.1: Attack identification ................................................................................................... 72

4.2.4.2 T3.2: Attack mitigation ........................................................................................................ 74

4.3 APFutura Field Trial ......................................................................................................................... 75


4.3.1 Integration for low latency demonstration ............................................................................. 75

4.3.1.1 T1.1 OAM to SmartNIC ........................................................................................................ 75

4.3.1.2 T1.2 OAM to TrustNode....................................................................................................... 78

4.3.1.3 OAM to OLT PON ................................................................................................................. 78

4.3.2 Integration for Bus Use Case demonstration ......................................................................... 78

4.3.2.1 T2.1 OAM to Trustnode ....................................................................................................... 78

4.3.2.2 T2.2 OAM to OLT PON ......................................................................................................... 79

4.3.2.3 T2.3 OAM to mobCaches ..................................................................................................... 79

5 Validation, Evaluation of results and lessons learnt ...........................................................83

5.1 CHARISMA mandatory requirements and KPIs ............................................................................... 83

5.2 Validation ......................................................................................................................................... 84

5.2.1 Telekom Slovenije Field Trial ................................................................................................... 84

5.2.2 NCSRD Demonstrator .............................................................................................................. 85

5.2.3 APFutura Field Trial ................................................................................................................. 86

5.3 Evaluation of results ........................................................................................................................ 87

5.3.1 Telekom Slovenije Field Trial ................................................................................................... 87

5.3.1.1 Security demonstration ....................................................................................................... 87

5.3.1.2 Security – attack identification and mitigation ................................................................... 91

5.3.1.3 Low latency validation – OFDM PON & TN .......................................................................... 92

5.3.2 NCSRD Demonstrator .............................................................................................................. 98

5.3.2.1 Performance characterisation of the firewall VSF ............................................................... 98

5.3.2.2 Latency introduced due to virtualisation .......................................................................... 100

5.3.2.3 Security attack detection and mitigation time .................................................................. 102

5.3.2.4 Multi-tenancy – vCache peering ........................................................................................ 103

5.3.2.5 Intelligent Traffic Handling ................................................................................................ 106

5.3.3 APFutura Field Trial ............................................................................................................... 107

5.3.3.1 Low Latency Results. .......................................................................................................... 107

5.3.3.2 Multi-tenancy Results ........................................................................................................ 111

5.3.3.3 Bus use case ....................................................................................................................... 113

5.4 Best Practices and Lessons Learned .............................................................................................. 114

5.4.1 Telekom Slovenije Field Trial ................................................................................................. 114

5.4.2 NCSRD Demonstrator ............................................................................................................ 114

5.4.3 APFutura Field Trial ............................................................................................... 116

6 Conclusions .................................................................................................................... 117

References .......................................................................................................................... 118

Acronyms ............................................................................................................................ 119

7 Appendix ........................................................................................................................ 121

7.1 Definitions ..................................................................................................................................... 121


7.1.1 Network latency sources ............................................................................................. 121

7.2 Integration testing ......................................................................................................................... 121

7.2.1 APF SmartNIC ......................................................................................................................... 121

7.3 D4.2 Addendum ............................................................................................................................. 122

7.3.1 D4.2 section 4.2.6 update - optical wireless link testing ....................................................... 122

7.4 6Tree routing algorithm ................................................................................................................ 123

7.4.1 Downlink ................................................................................................................................ 124

7.4.2 Uplink ..................................................................................................................................... 124


List of Figures

Figure 1: TS Physical Layer Architecture .......................................................................................... 16
Figure 2: TS C-RAN setup ................................................................................................................. 16
Figure 3: Logical level architecture of the Telekom Slovenije field trial .......................................... 18
Figure 4: Physical-level architecture of the NCSRD demonstrator (data plane) .............................. 18
Figure 5: Logical-level architecture of the NCSRD demonstrator .................................................... 20
Figure 6: (a) Integrated infrastructure at NCSRD demonstrator and (b) CMO infrastructure ......... 20
Figure 7: APFutura Physical Level Architecture ............................................................................... 21
Figure 8: APFutura Logical Level Architecture ................................................................................. 22
Figure 9: TS security demonstrator setup ....................................................................................... 25
Figure 10: TS Low latency demonstration setup ............................................................................. 26
Figure 11: Mobile sensor platform architecture [10] ...................................................................... 26
Figure 12: Slice creation over the NCSRD infrastructure ................................................................. 28
Figure 13: Security service deployment .......................................................................................... 29
Figure 14: Time-series data in MA dashboard ................................................................................ 29
Figure 15: Grafana dashboard showing the generic metrics .......................................................... 30
Figure 16: Grafana dashboard showing the service-specific metrics .............................................. 30
Figure 17: Virtual IDS (Snort) policy configuration .......................................................................... 31
Figure 18: View showing the defined alert rules ............................................................................. 31
Figure 19: Resources to be monitored ............................................................................................ 32
Figure 20: Continuous policy evaluation ......................................................................................... 32
Figure 21: Kratos C&C dashboard .................................................................................................... 33
Figure 22: TCP flooding attack results in Kratos app ....................................................................... 33
Figure 23: MA dashboard with metric originating from IDS ........................................................... 34
Figure 24: Grafana dashboard when the attack is identified .......................................................... 34
Figure 25: Alert received from the M&A module ............................................................................ 35
Figure 26: Detecting applicable policy ............................................................................................ 36
Figure 27: Recommending next best action to the Security Manager ............................................ 36
Figure 28: Firewall configuration with attacker's IP addresses ....................................................... 37
Figure 29: Create new Shared Network .......................................................................................... 39
Figure 30: Granting access to a shared network ............................................................................. 40
Figure 31: Attaching to a shared network ....................................................................................... 40
Figure 32: Running a script on a VNF .............................................................................................. 41
Figure 33: CHARISMA OAM GUI showing wireless backhaul as sliceable resource ........................ 42
Figure 34: SDN backhaul configuration through the OAM .............................................................. 42
Figure 35: H-QoS in SDN backhaul ................................................................................................... 43
Figure 36: Example of OpenFlow meter format .............................................................................. 43
Figure 37: Traffic policing using OpenFlow meters at NCSRD demonstration ................................ 44
Figure 38: Part of NCSRD testbed .................................................................................................... 45
Figure 39: Backhaul devices at NCSRD testbed connected through RF cable ................................. 45
Figure 40: Backhaul slicing using OpenFlow meters ....................................................................... 46
Figure 41: iPerf Clients ..................................................................................................................... 46
Figure 42: iPerf Servers .................................................................................................................... 46
Figure 43: Retrieval of Intelligent Traffic Handling rules ................................................................. 47
Figure 44: APFutura Low Latency Demo Setup ............................................................................... 48
Figure 45: APFutura Field Trial Slicing ............................................................................................. 49
Figure 46: APFutura Field Trial Bus Use Case Scenario ................................................................... 50
Figure 47: Fronthaul test: experimental setup ................................................................................ 51
Figure 48: Clock distribution in fronthaul system ........................................................................... 51
Figure 49: Frequency offset at IXIA ................................................................................................. 52


Figure 50: Frequency offset at DU ................................................................................................... 52
Figure 51: Frequency offset of 8 Hz for reference clock at DU, when master clock from IXIA is passed via CU and Switch to DU ................................................................................................................... 53
Figure 52: Latency vs. frame length (one way – entire path) .......................................................... 53
Figure 53: CU FPGA board with a 10G fibre link to IXIA and a 10G copper link to the switch ........ 54
Figure 54: Transmitter board (UEssex) with horn antenna for 60 GHz fronthaul measurements and a DU board in background ................................................................................................................. 54
Figure 55: Mobile rack with devices and components for end user device .................................... 54
Figure 56: Screenshot of information displayed on laptop connected to end user device (see Figure 55) ................................................................................................................................................... 54
Figure 57: Omnidirectional antenna (connected to user device receiver) ..................................... 56
Figure 58: SmartNIC Configuration .................................................................................................. 76
Figure 59: SmartNIC measurement procedure ............................................................................... 77
Figure 60: Caching Nodes ................................................................................................................ 79
Figure 61: Bus use case workflow ................................................................................................... 81
Figure 62: TS – network topology dashboard, view of available physical infrastructure ............... 87
Figure 63: TS – Physical infrastructure management, adding OFDM-PON device to network topology .......................................................................................................................................... 88
Figure 64: TS – Physical infrastructure management, adding SmartNIC device to network topology .......................................................................................................................................... 88
Figure 65: TS – Physical infrastructure management, adding TrustNode device to network topology .......................................................................................................................................... 89
Figure 66: TS – Open Access dashboard, creating a new network slice on physical infrastructure ........................................................................................................................................................ 89
Figure 67: TS – Open Access dashboard, view of the new slice ...................................................... 90
Figure 68: TS – Open Access dashboard, list of virtual slices on same physical infrastructure ...... 90
Figure 69: TS – M&A: Monitoring resources ................................................................................... 91
Figure 70: TS – M&A: List of alert rules ........................................................................................... 91
Figure 71: OFDM-PON ..................................................................................................................... 92
Figure 72: Integration setup for TrustNode router and OFDM-PON .............................................. 93
Figure 73: Latency measurements for OFDM-PON (back-to-back) ................................................. 94
Figure 74: Message flow between controller and mobile sensor platform [10] ............................. 97
Figure 75: Set-up for testing firewall VNF bandwidth within the same server ............................... 99
Figure 76: Variation of CPU usage as traffic rate changes .............................................................. 99
Figure 77: Change of traffic rate with variable packet size ........................................................... 100
Figure 78: Set-up A - No virtualized firewall ................................................................................. 100
Figure 79: Set-up B - Virtualized firewall ....................................................................................... 101
Figure 80: Latency variation for the different RAM setups of the firewall VSF ............................. 101
Figure 81: Latency variation for the different vCPU allocations of the firewall VSF ..................... 102
Figure 82: DDoS attack detection and mitigation interactions between CHARISMA components ...................................................................................................................................................... 102
Figure 83: Simplified vCache peering setup. The vCC is omitted for simplicity. ........................... 104
Figure 84: vCache peering setup. The vCC is omitted for simplicity. ............................................ 104
Figure 85: Effect of cache size on cache hit ratio (CHR) ................................................................ 105
Figure 86: Effect of cache size on average download time (ADT) ................................................. 105
Figure 87: Bandwidth test APFutura Low Latency Demo .............................................................. 108
Figure 88: 1st Latency Test APFutura ............................................................................................ 109
Figure 89: Bandwidth test without CHARISMA devices ................................................................ 109
Figure 90: Latency test without CHARISMA devices and low CPU use ......................................... 110
Figure 91: Latency test with CPU overloaded at 99.7% ................................................................ 111
Figure 92: VNO dashboard expanded ........................................................................................... 112
Figure 93: VNO view ...................................................................................................................... 112
Figure 94: MobCache deployment ................................................................................................ 113
Figure 95: Network delay visualisation [11] .................................................................................. 121
Figure 96: Installation of VLC link for long-term measurement .................................................... 123
Figure 97: Initial debugging information from VLC link at Aveiro ................................................. 123
Figure 98: SNR over spectrum for VLC link installed in Aveiro ...................................................... 123


Figure 99: 6tree routing algorithm – downlink .......... 124
Figure 100: 6tree routing algorithm – uplink .......... 124


List of Tables

Table 1: Telekom Slovenije Field trial hardware inventory .......... 17
Table 2: Specifications of the physical devices used in the NCSRD demonstrator .......... 19
Table 3: APFutura field trial hardware inventory .......... 22
Table 4: OpenFlow rules regarding metering functionality .......... 44
Table 5: TS - T1.1 - OAM Security service deployment in TS field trial .......... 58
Table 6: T1.2 - OAM SmartNIC configuration for “drop packages” functionality test .......... 59
Table 7: T1.3 - OAM to OpenWRT .......... 59
Table 8: T2.1 security policy definition .......... 59
Table 9: T2.2: M&A alert definition (GUI to M&A) .......... 60
Table 10: T3.1 – Attack identification .......... 60
Table 11: T3.2 – IDS Alert to M&A .......... 61
Table 12: T3.3 – M&A alert to SPM .......... 61
Table 13: T3.2 – Attack mitigation – SPM policy lookup .......... 61
Table 14: T4.1: TrustNode 6Tree speed test .......... 62
Table 15: TrustNode configuration .......... 63
Table 16: TrustNode connectivity .......... 63
Table 17: TrustNode throughput .......... 64
Table 18: TrustNode security .......... 64
Table 19: T4.2 Robot Demo functionality test .......... 65
Table 20: OFDM-PON latency test .......... 65
Table 21: T1.1 - OAM to OpenWRT router CAL0 .......... 66
Table 22: T1.1 - OAM to SDN switch CAL1 .......... 66
Table 23: T1.1 – OAM to OVS Integration Bridge in compute node CAL1 .......... 67
Table 24: T1.1 – OAM to BH .......... 67
Table 25: T1.2 - IDS VNF deployment .......... 68
Table 26: T1.2 - FW VNF deployment .......... 69
Table 27: T2.3: IDS alert definition (IDS EMS to IDS) .......... 70
Table 28: SPM – Orchestrator (Security Manager) interface .......... 70
Table 29: SPM – OAM (CHARISMA GUI): Request to create a security policy .......... 70
Table 30: SPM – OAM (CHARISMA GUI): Request to read security policies .......... 71
Table 31: SPM – OAM (CHARISMA GUI): Request to update security policies .......... 71
Table 32: SPM – OAM (CHARISMA GUI): Request to delete security policies .......... 72
Table 33: IDS alert to MA .......... 73
Table 34: SPM – M&A interface: Reception of properly built Alert Notification message .......... 73
Table 35: SPM – M&A interface: Reception of badly built Alert Notification message .......... 73
Table 36: SPM – Security Manager send mitigation action .......... 74
Table 37: SPM – Security Manager send mitigation action .......... 75
Table 38: SmartNIC installation .......... 76
Table 39: APF2, SmartNIC processor reduction load test .......... 76
Table 40: APF3, acceleration of L2/L3 forwarding test .......... 77
Table 41: APF 1 .......... 78
Table 42: APF 2 .......... 78
Table 43: MoBcache test APFutura field trial .......... 82
Table 44: Updated mandatory CHARISMA requirements and KPIs .......... 83
Table 45: Validation – Telekom Slovenije field trial .......... 84
Table 46: Validation - NCSRD demonstrator .......... 85
Table 47: Validation – APFutura field trial .......... 86
Table 48: Assigned bits per sub- and MAC-carrier and its addressing .......... 94


Table 49: Latency measurement results, measured with IXIA and latency-robot between robot controller and robot base station location .......... 97
Table 50: CPU usage over different values of traffic rate .......... 99
Table 51: Traffic rate measurements over different packet sizes .......... 100
Table 52: Measurements of latency with different setups of RAM for the firewall VSF .......... 101
Table 53: Measurements of latency with different setups of the RAM for the firewall VSF .......... 102
Table 54: Measurement of the time required to complete the several procedures that are part of attack mitigation .......... 103
Table 55: Average Download Time analysis in vCache peering scenarios .......... 106
Table 56: Effect of Intelligent Traffic Handling on Average Download time .......... 107
Table 57: D4.3 section 4.2.6 update – optical wireless link testing .......... 122


List of Equations

Equation 1: Possible bit rate per ONU channel .......... 95
Equation 2: Propagation delay for 10 km of optical fibre .......... 96
Equation 3: ONU gross data rate for QPSK modulated carriers .......... 96
Equation 4: ONU layer 1 Ethernet data rate .......... 96


1 Introduction

WP4 is the showcase work package for the technologies and technical solutions developed in the parallel technical work packages WP1, WP2, and WP3 of the CHARISMA project. It provides the validation of the end-to-end secure demonstrators and field trials, with the validations based on the Use Cases and Scenarios as previously defined in WP1. The specific objectives of WP4 are:

• Laboratory prototype validation of the specified scenarios from work package WP1.

• Two small-scale test-bed validations of selected solutions in 5G laboratory setups.

• 5G field-trial pilots of CHARISMA low-latency security converged cellular network architecture.

• Results evaluation of prototypes and pilot validators.

This deliverable D4.3 provides the testing and validation results of the specified CHARISMA use-cases and the overall secure, converged & virtualised distributed CHARISMA 5G architecture, including the virtualised security and the network management system security solutions that have been developed. Analysis of the CHARISMA results compares them to the 5G-PPP quantifiable KPIs, as well as the indicators and use-case specifications from work package WP1 and task T1.2. The steps to be taken are:

• Technical evaluation of the two field-trials and test-bed demonstrators

• Fine-tuning and optimisation

• Implementation of the use-case scenarios as defined in D1.2: Use Cases [1]

• Technical (objective) assessment of the field-trials and lab demonstrator results

• Statistical processing of results and derivation of overall comments for platform operation

The key organisational aspect of this document D4.3 focuses on the two field trials and the one demonstrator located in:

1. Field trial at Telekom Slovenije premises, Ljubljana, Slovenia
2. Field trial at APFutura premises, Centelles, Spain
3. Demonstrator at NCSRD premises, Athens, Greece

The document D4.3 is structured as follows: Chapter 2 provides an updated overview of the CHARISMA field trials, with a description of the differences between the planned and actual final setup. It provides a physical and logical level architecture for each field trial and demonstrator. Additionally, there is a dedicated subchapter describing updates to the integration and configuration of the CHARISMA hardware and software components in each testbed. Chapter 3 is dedicated to the demonstration scenario descriptions. Demonstration scenarios are described per field trial, and according to the main CHARISMA project features of low latency, open access (slicing), and security aspects. Chapter 4 is dedicated to the integration testing and any potential component interoperability issues. It describes the testing performed on the integrated CHARISMA hardware and software components according to each test bed setup and demonstration flow. Chapter 5 discusses the validation and evaluation of the field trial and lab demonstrations. Here, demonstrations are validated and finally evaluated according to the relevant 5G-PPP KPIs as well as the KPIs defined in the deliverable D1.2 [1].

The aim of this deliverable is to perform the final validation and evaluation of the CHARISMA project objectives and the technical developments produced in work packages WP1-WP3. It represents the final six months of work done in WP4 at the conclusion of the project.


2 Final Pilot Sites Setup

This chapter provides an overview of the CHARISMA field trials and laboratory demonstrator: trials at the Telekom Slovenije and APFutura premises, and a demonstrator at the NCSRD premises. Each field trial and demonstrator has already been described in detail in the deliverable D4.2 [5]. The following subchapters describe the physical level architecture, the logical level architecture, and the planned integrations at each of the field trials and the lab demonstrator. Ultimately, the main purpose of this chapter is to describe the differences between the planned setup (as described in D4.2 [5]) and the final sites setup.

2.1 Telekom Slovenije Field Trial

The following sub-sections contain descriptions of the final physical level architecture, and the logical level architecture, as well as the planned integrations required for the Telekom Slovenije field trial.

2.1.1 Physical Level Architecture

The field trial setup hosted at the TS premises in Ljubljana has been described in detail in section 2.3 of the deliverable D4.2 [5]. This section now offers a brief overview of the differences between the initially planned field trial setup and the actual final setup.

Telekom Slovenije has set up a dedicated physical infrastructure at its laboratory in Ljubljana for the purposes of the CHARISMA project field trial; it also serves as a future 5G laboratory environment for Telekom Slovenije, enhanced with CHARISMA concepts. The field trial therefore enables the project consortium to test and validate the 5G concepts and products developed in this 5G-PPP Phase 1 project. The infrastructure consists of a cloud and virtualisation environment, network connectivity and a 4G LTE radio access network, with the radio access network (RAN) set up as a Cloud-RAN (C-RAN) environment.

The main part of the TS field trial is the cloud environment comprising an OpenStack and Ericsson HDS8000 cloud / virtualisation infrastructure. OpenStack is used to demonstrate the CHARISMA features and architecture, while the HDS platform offers a virtualized network infrastructure hosting the virtualized functions of the 4G LTE network(s).

The OpenStack installation is spread across the CHARISMA CAL0 and CAL3 layers. It serves as a container for virtual network functions (VNFs) and for the CHARISMA control, management and orchestration component (CMO). The cloud infrastructure may be spread over vast geographical areas and managed centrally, enabling a telecom operator to distribute the cloud environment and the CHARISMA intelligent management units (IMUs) across different converged aggregation levels (CALs). In the current setup, CAL3 represents the central office of the telecom operator, while CAL0 is a geographically distant remote access network. Deploying VNFs and business applications at the edge of the network (CAL0) enables the telecom operator to guarantee low-latency services to its customers by shortening the path to the service.
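The path-shortening gain can be illustrated with a back-of-envelope estimate based on fibre propagation delay alone, roughly 5 µs per km of fibre (consistent with Equation 2 later in this document). The distances below are purely illustrative assumptions, not measured values from the trial:

```python
# Sketch only: one-way fibre propagation delay, assuming ~5 us/km
# (refractive index ~1.5). Distances are hypothetical examples.

def fibre_delay_us(distance_km: float, us_per_km: float = 5.0) -> float:
    """One-way propagation delay in microseconds for a fibre span."""
    return distance_km * us_per_km

# Serving a VNF from a nearby CAL0 node vs. a distant CAL3 central office:
edge_km, central_km = 2.0, 100.0
saving_us = fibre_delay_us(central_km) - fibre_delay_us(edge_km)
print(f"edge: {fibre_delay_us(edge_km):.0f} us, "
      f"central: {fibre_delay_us(central_km):.0f} us, "
      f"one-way saving: {saving_us:.0f} us")
```

Even before queuing and processing delays are considered, moving the serving function 98 km closer in this example removes roughly half a millisecond of round-trip time.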



Figure 1: TS Physical Layer Architecture

As shown in Figure 1, the radio access network (RAN) is connected to the packet core network, residing in the HDS8000 cloud / virtualisation environment, via a high capacity orthogonal frequency division multiplexing passive optical network (OFDM-PON). The commercial equipment represents contributions from CHARISMA consortium partners, with the OFDM-PON having been developed by HHI within CHARISMA.

At CAL0, CPE devices are connected via Open vSwitch to the OpenStack compute node. The Open vSwitch has been added to the topology to provide network programmability at the edge. The additional OpenStack node at CAL0 acts as an IMU, providing the ability to offload VNFs and application logic to the edge of the network, thereby lowering latency by means of the “shortest nearest path”.

Figure 2: TS C-RAN setup

The TS field trial access network consists of two cloud radio access networks (C-RANs) connected via the TrustNode router. A C-RAN comprises a baseband unit (BBU) and multiple remote radio units (RRUs). Additionally, the access network is extended via LTE-enabled customer premises equipment (CPE) providing WiFi access over the LTE network. The TS C-RAN access network is depicted in Figure 2.


Finally, the CAL0 layer also contains the TrustNode router developed by InnoRoute within CHARISMA, to demonstrate the low-latency features provided by this device.

The physical architecture of the Telekom Slovenije field trial is depicted in Figure 1, whilst the detailed hardware inventory is given in Table 1.

Table 1: Telekom Slovenije Field trial hardware inventory

ID | Role | Vendor | CPU Model | CPU cores | RAM | Storage | Other Features
1 | HDS 8000 | Ericsson | - | - | 1024 GB | 4 TB | -
- | CAL0 OpenStack | HP | E5-2630 | - | - | - | -
- | CAL3 OpenStack | HP | E5420 | - | - | - | -
- | CAL0 Open vSwitch | HP | E5420 | - | - | - | -
- | CAL3 Open vSwitch | HP | E5420 | - | - | - | -
- | CAL0 Robot Controller | HP | E5420 | - | - | - | -
- | Robot | - | - | - | - | - | -
2 | OFDM PON | HHI | - | - | - | - | -
3 | TrustNode Router | InnoRoute | Intel E3800 | 4 | 4 GB | - | external FPGA acceleration
4 | Baseband unit | Ericsson | - | - | - | - | -
5 | Remote radio unit | Ericsson | - | - | - | - | -
6 | Linksys 1200 ACS | Linksys | Marvell Armada 385 88F6820 | 2 x 1.3 GHz | 512 MB | 128 MB | OpenWRT-enabled router

The packet core network is deployed on top of the HDS8000 virtualisation platform, with all virtual packet core functions being Ericsson’s commercially available VNFs (vMME, vPGWs and vSGW). The packet core network is integrated with the test and production Home Subscriber Server (HSS).

2.1.2 Logical Level Architecture (Software)

From the logical architecture perspective, the Ljubljana TS field trial comprises the CHARISMA CMO and elements at the CAL0, CAL1, CAL2 and CAL3 levels. Figure 3 describes the logical level architecture, focusing on end-to-end slicing, and control and management of the converged layers, with the CAL delineations clearly shown.


Figure 3: Logical level architecture of the Telekom Slovenije field trial

2.2 NCSRD demonstrator

The laboratory demonstrator setup hosted at the NCSRD premises has been described in detail in section 2.1 of the deliverable D4.2 [5]. As already explained, the purpose of the NCSRD lab demonstrator is the demonstration, validation and evaluation of the multi-tenancy and security features of CHARISMA. Here, for reasons of completeness, we include the diagrams that depict the physical and logical level architectures of the demonstrator (Figure 4 and Figure 5). No changes were made to the NCSRD demonstrator architecture presented in D4.2.

Figure 4: Physical-level architecture of the NCSRD demonstrator (data plane)


The specifications of the physical devices used are enumerated in the table below:

Table 2: Specifications of the physical devices used in the NCSRD demonstrator

ID | Role | Vendor | CPU Model | CPU cores | RAM | Storage | Other Features
1 | User Equipment B (laptop) | SONY | Intel Centrino U7600 | 2 x 1.2GHz | 2 GB | 120 GB | -
1 | User Equipment A (laptop) | Toshiba | Intel i5-4200M | 4 x 2.5GHz | 4 GB | 360 GB | -
1 | User Equipment C (mobile phone) | Samsung (Galaxy S4) | Exynos 5 Octa 5410 | 4 x 1.6GHz & 4 x 1.2GHz | 2 GB | 16 GB | -
1 | 4G LTE USB adapter | HUAWEI | - | - | - | - | HUAWEI E3372h
2 | WIFI CPE | Linksys | - | 2 x 1.30GHz | 256 MB | 128 MB | WRT120AC
3 | eNodeB | HP | Intel i7-4790 | 8 x 3.6GHz | 8 GB | 500 GB | USRP B210 external RF interface
4 | SDN switch CAL1 | Turbo-X | Intel Core 2 Duo E6750 | 2 x 2.66GHz | 4 GB | 300 GB | 5 network interface cards
5 | SDN switch CAL3 | Turbo-X | Intel Pentium 4 | 2 x 2.60GHz | 2 GB | 300 GB | 5 network interface cards
6 | Backhaul node | Intracom Telecom | - | - | - | - | SDN-enabled
7 | Backhaul node | Intracom Telecom | - | - | - | - | SDN-enabled
8 | GTP encapsulation/decapsulation server | Turbo-X | Intel i5-4460 | 4 x 3.2GHz | 16 GB | 1 TB | 5 network interface cards
9 | Cloud Compute Node 1 | Turbo-X | Intel Core i7-7700 | 4 x 3.60GHz | 32 GB | 256 GB SSD | -
10 | Cloud Compute Node 2 | Turbo-X | Intel Core i7-7700 | 4 x 3.60GHz | 32 GB | 256 GB SSD | -
11 | Cloud Controller Node | Turbo-X | Intel Core i5-2500K | 4 x 3.30GHz | 16 GB | 1 TB | -
12 | EPC | HP | Intel i5-2400 | 4 x 3.1GHz | 16 GB | 500 GB | -
13 | Managed Switch 1 | Netgear | - | - | - | - | GS716T
14 | VNO application server | DELL | Intel i7-2600 | 4 x 3.8GHz | 16 GB | 320 GB | -
15 | Managed Switch 2 – for management | HP | - | - | - | - | 2510-48 J9020A
16 | CMO Cloud Infrastructure | Fujitsu | Intel Xeon X5677 | 8 x 3.47GHz | 96 GB | - | -


Figure 5: Logical-level architecture of the NCSRD demonstrator

The following figure shows an overview of the final integrated infrastructure at NCSRD:

Figure 6: (a)Integrated infrastructure at NCSRD demonstrator and (b) CMO infrastructure


2.3 APFutura field trial

The following chapter describes the final hardware and software setup of the APFutura field trial. Since the writing of deliverable D4.2 [5], the field trial has been updated at the physical level by adding one physical server and moving the TrustNode from CAL3 to CAL2. Another modification with respect to the plans in D4.2 [5] relates to the robot use case: following the internal decision to have one use case with moving vehicles, the robot use case was transferred to the TS field trial. The low-latency demo instead shows how variation of the latency can affect end devices. The latency measurements in the APFutura field trial have been performed using ping clients and jPerf.
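As a minimal sketch of how such ping-based measurements can be post-processed, the snippet below extracts round-trip times from standard Linux ping output and summarises them; the sample output values are illustrative, not results from the trial:

```python
# Sketch: parse RTT values out of Linux `ping` output and summarise them.
import re
from statistics import mean

def parse_ping_rtts(ping_output: str) -> list:
    """Return the RTT values (in ms) found in ping output lines."""
    return [float(m) for m in re.findall(r"time=([\d.]+) ms", ping_output)]

# Illustrative sample of ping output (not a trial measurement):
sample = """\
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.24 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.98 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=1.10 ms
"""
rtts = parse_ping_rtts(sample)
print(f"min/avg/max = {min(rtts)}/{mean(rtts):.2f}/{max(rtts)} ms")
```

The same parsing approach can be applied to ping logs collected with and without the CHARISMA devices in the path, to compare latency distributions rather than single values.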

2.3.1 Physical Level Architecture

The main differences from the physical level architecture described in deliverable D4.2 [5] are found at the CAL2 and CAL3 nodes.

At the time of writing deliverable D4.2, CAL3 had only one server and the TrustNode router. In the final setup the number of servers has been increased to two, one at CAL2 and one at CAL3, while the TrustNode router has been moved to CAL2. Following the addition of this new server, all devices are now connected to the TrustNode router. In this way we can demonstrate the concept of devolved offload, with the shortest path nearest to end-users, developed in CHARISMA.

The SmartNIC developed by Ethernity is installed in the CAL3 server, where all the OpenStack instances are running and where one of the VNFs is deployed. The CAL2 server uses a regular NIC and also has one VNF deployed.

The next figure shows the physical-level placement of all CHARISMA devices used in the APFutura field trial.

Figure 7: APFutura Physical Level Architecture

The above-mentioned hardware acceleration devices, SmartNIC and TrustNode, are placed at the CAL2 and CAL3 nodes. The connection to the fronthaul (CAL1) is established via the optical components OLT (4) and ONT (5). The mobile caching devices, MobCache (7), are located at CAL1 and CAL0.

Table 3 shows a detailed inventory for all CHARISMA devices used in the APFutura field trial.


Table 3: APFutura field trial hardware inventory

2.3.2 Logical Level Architecture

From the logical level architecture perspective, the only change from the deliverable D4.2 relates to the low latency scenario.

The low-latency demonstration is based on the two CHARISMA devices specially designed for low-latency operation: the SmartNIC and the TrustNode, which reduce the forwarding delay of network traffic; in contrast, the MobCache devices decrease the network start-up latency for multimedia content. Figure 8 shows the logical connections between the components.

Figure 8: APFutura Logical Level Architecture

ID | Role | Vendor | CPU Model | CPU cores | RAM | Storage | Other Features
1 | Server (CAL3) | DELL | Intel Xeon E5620 | 4 | 32GB DDR3 | 126GB | -
1 | Server (CAL2) | SuperMicro | Intel Xeon E5-2640 | 4 | 32GB DDR3 | - | -
2 | SmartNIC | Ethernity | - | - | - | - | -
3 | TrustNode | InnoRoute | Intel E3800 | 4 | 4GB | - | external FPGA acceleration
4 | OLT | Altice Labs | - | - | - | - | -
5 | ONT | Altice Labs | - | - | - | - | -
6 | PCs | HP & Apple | - | - | - | - | -
7 | MoBcache | JCP-C | Freescale quad-core 1.2 GHz | 4 | 2GB DDR3 | 512MB NAND flash, 8GB eMMC, 1 mSSD up to 256GB | -
8 | Laptop and Mobile | Apple & Samsung | - | - | - | - | -
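The start-up latency benefit of the MobCache devices described above can be sketched with a simple expected-value model: requests served from a nearby cache complete quickly, while cache misses pay the full path to the origin. This is an illustrative model only, not the project's measured average download time (ADT) figures:

```python
# Sketch: expected fetch time as a function of cache hit ratio.
# All timing values below are hypothetical examples.

def average_download_time(hit_ratio: float, t_cache_s: float, t_origin_s: float) -> float:
    """Expected download time when a fraction of requests hit the edge cache."""
    assert 0.0 <= hit_ratio <= 1.0
    return hit_ratio * t_cache_s + (1.0 - hit_ratio) * t_origin_s

# Illustrative numbers: 0.2 s from the cache, 2.0 s from the origin.
for hr in (0.0, 0.4, 0.8):
    print(f"hit ratio {hr:.1f} -> ADT {average_download_time(hr, 0.2, 2.0):.2f} s")
```

The model makes explicit why a larger cache (higher hit ratio) lowers the average download time, the trend measured in the vCache experiments reported later in this deliverable.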


3 Demonstrations

This chapter defines the refined demonstration scenarios that are being demonstrated in the two field trials and one lab demonstrator. Each of the field trials and lab demonstrator has been targeted to demonstrate the three main features of the CHARISMA project:

• Low Latency

• Multi-tenancy (slicing)

• Security aspects

Each field trial and demonstrator addresses these features in a unique manner; nevertheless, feature portability is also considered an additional requirement to be shown as part of the demonstrations. This is demonstrated by implementing the same or similar features on three different network and hardware architecture setups.

3.1 Telekom Slovenije Field Trial

This chapter focuses on those demonstrations to be carried out at the Telekom Slovenije field trial. The demonstrations are based predominantly on the actual and foreseen requirements harmonized inside the CHARISMA consortium.

3.1.1 Security Demonstration – Smart Grid and Meters

3.1.1.1 Scenario motivation

According to the EU Commission [7], the aim is to replace at least 80% of electricity meters with smart meters by 2020 wherever it is cost-effective to do so. It is anticipated that this rollout of smart metering and smart grids can reduce emissions and annual household energy consumption.

Although Telekom Slovenije is itself not an electricity smart grid operator, it has great interest in the above-mentioned topic, since electricity metering can be considered a small part of the much larger IoT segment. As a mobile and fixed communications network operator, Telekom Slovenije is interested in providing reliable, SLA-based connectivity to the electricity Smart Grid and potentially to other customers with similar requirements to connect devices over vast geographical areas.

Telekom Slovenije has also participated in the EU-funded project SUNSEED [8], whose objective was to produce a set of guidelines on how to overlap, combine and interconnect, i.e. converge, DSO and telecom communication networks for dense DEG (distributed energy generation) smart energy grids, as an evolutionary development stemming from the necessity of covering the end-to-end infrastructure. The initial plan was to join both field trials and show the synergies of the two projects; however, due to the different project timelines, the decision was made to simulate a similar Smart Grid environment in CHARISMA.

3.1.1.2 Description

The Smart Grid operator needs to connect metering devices to the central office and monitor the power consumption in real time. The metering devices may be spread over a geographically vast area, and consequently be located very far from the Smart Grid operator's central office.

As a connectivity provider, Telekom Slovenije owns a CHARISMA-compliant network infrastructure, which offers low-latency services by use of the CALs and specially designed low-latency hardware components, such as the TrustNode router and the SmartNIC. Efficient use of physical resources via virtual network slices requires reliable and guaranteed connectivity, as well as traffic isolation. Finally, the important security aspects are also covered by the network operator.

A network slice is created over the common network operator infrastructure, which also serves the network operator's residential telephony users. The slice is isolated from other slices, its traffic is prioritized, and reliable, guaranteed connectivity is established.

The Smart Grid central office is deployed locally at the network operator’s micro-DC infrastructure at the edge of the network, therefore providing low latency via the CHARISMA infrastructure principles. In this way, data received from metering devices is processed locally and only the necessary information is passed towards the central office.

The security infrastructure monitors traffic in the slice dedicated to the Smart Grid operator for potential anomalies and security threats. If a traffic anomaly or security threat is discovered, the metering device producing such traffic is disabled. In that case, an alarm is raised and a Smart Grid operator technician is dispatched to check the metering device.

3.1.1.3 Scenario

The scenario focuses on the security aspects of the CHARISMA architecture: it is the architecture itself that enables the network operator to deploy security services that monitor traffic originating from client devices. The security service isolates potential rogue devices, either malfunctioning or potentially hijacked, and prevents them from participating in a DoS attack. The following is the basic demonstration flow:

1. Metering devices report the power consumption usage, simulated with the IXIA simulator;
2. Devices transmit power consumption data periodically, one request per second from each device;
3. The simulated application logic is deployed at the CAL0 level of the CHARISMA architecture, with an IXIA simulator;
4. Traffic from metering devices is passed through the OpenWRT CPE device to Open vSwitches, before it reaches the CAL0 OpenStack compute node;
5. Before reaching the Smart Grid application, traffic is sent to the vIDS, deployed at the CAL0 compute node, which monitors traffic for potential anomalies;
6. The traffic reaches its destination, that is, the Smart Grid central office application simulated with the IXIA simulator;
7. At a certain point, one of the metering devices starts generating a huge amount of traffic, ten times more than usual;
8. The vIDS detects the traffic anomaly, and triggers the Monitoring and Analytics module;
9. The Monitoring and Analytics module queries the Service Policy Manager to obtain the traffic policy;
10. Due to the policy being exceeded, the Security Manager is invoked to mitigate the threat, and instructs the SmartNIC to block traffic originating from the malfunctioning metering device(s);
11. The SmartNIC disables traffic originating from the malfunctioning metering device(s).
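The anomaly check in steps 7–8 can be sketched as a simple rate-threshold comparison. The following is a minimal illustration only (the actual vIDS is a full intrusion detection system, not this function); the per-device baseline of one request per second and the ten-times factor follow the scenario above, while the function and variable names are ours:

```python
# Minimal sketch of the rate-anomaly check performed in the scenario
# (illustrative only; not the project's actual vIDS implementation).

BASELINE_RPS = 1          # each metering device normally sends 1 request/s
ANOMALY_FACTOR = 10       # traffic 10x above the baseline is flagged

def detect_anomalies(observed_rps: dict) -> list:
    """Return the IDs of metering devices whose observed request
    rate reaches or exceeds the anomaly threshold."""
    threshold = BASELINE_RPS * ANOMALY_FACTOR
    return [dev for dev, rps in observed_rps.items() if rps >= threshold]

def mitigation_targets(observed_rps: dict) -> list:
    """Devices to be blocked at the SmartNIC once the Security
    Manager confirms the policy has been exceeded."""
    return detect_anomalies(observed_rps)
```

For example, `detect_anomalies({"meter-1": 1, "meter-2": 12})` flags only `meter-2`, which is then passed to the mitigation step (steps 10–11).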

3.1.1.4 Test setup

A separate network slice is created for the smart grid operator. The slice is used to connect the metering devices to the smart grid application of the smart grid operator. The slice ensures the QoS and SLA needed to reliably deliver the traffic via the operator infrastructure. Slice traffic isolation and prioritisation are applied. Network congestion in another (e.g. residential) slice has no impact on the smart grid slice. The setup diagram is depicted in Figure 9.


[Figure diagram: IXIA-simulated Smart Grid metering devices → OpenWRT → Open vSwitch (CAL0) → OpenStack compute node at CAL0 hosting the SmartNIC and vIDS → IXIA-simulated Smart Grid central office, with the M&A, SPM and SM modules attached]

Figure 9: TS security demonstrator setup

The network slice consists of the following:

1. WiFi (OpenWRT) – slicing on the OpenWRT router is done using internal traffic isolation based on the router's internal VLAN mechanism;
2. Open vSwitch – VLAN traffic coming from the OpenWRT-enabled router passes through the Open vSwitch towards the OpenStack compute node at CAL0;
3. CAL0 OpenStack compute node;
4. vIDS VNF for traffic monitoring.

The following hardware elements are involved in the demonstration:

• IXIA traffic simulator - metering device simulation;

• OpenWRT router – acting as CPE equipment, and CAL0 network offload;

• CAL0 Open vSwitch, ensuring network programmability at CAL0;

• OpenStack compute node at CAL0, connected to the CAL0 Open vSwitch via the SmartNIC interface. The compute node provides the following functionalities:

o General network slicing capabilities, providing a container for the vIDS VNF and the capability for edge computing, deploying business logic at the network edge, close to client devices;

o vIDS functionality providing traffic inspection;

o SmartNIC providing network acceleration and a threat mitigation point;

• Monitoring and Analytics’ module to trigger an alarm;

• Service Policy Manager for policy lookup;

• Security Manager, providing instructions to the SmartNIC.

3.1.2 Low Latency Demonstration

The TS demonstration also contains the low-latency devices SmartNIC and TrustNode. This test setup extends the measurements done in the APFutura testbed, and provides a hardware visualisation of the achieved latency reduction. The test setup is shown in Figure 10.


[Figure diagram: Robot connected over WiFi (OpenWRT) and OFDM-PON through the TrustNode router; the Robot Controller can be placed at Position A or Position B]

Figure 10: TS Low latency demonstration setup

In particular, this test setup focuses on the following delay optimisation strategies:

• Hardware accelerated node propagation delay reduction;

• Hierarchical network delay reduction.

To measure and simulate latency, a robot communicating with a controller, as described in D4.2 [5] and [10], will be used. Figure 11 shows the basic assembly of the robot platform. The ping between the robot and the robot controller determines the reaction time of the robot (the robot is a representative implementation of an M2M communication or cloud-controlled servo loop; the target latency for this application is 1 ms [9]). In comparison to previous experiments, we extended the robot control protocol with a command-TTL field. This means that control commands pass a loop between the robot and the controller several times before they are interpreted by the robot, so the network latency can be multiplied by a factor, increasing the impact of the latency on the robot reaction. This allows us to emulate authentic network conditions with several hops, using a test setup with a limited number of devices. Depending on the robot-controller location (A or B in Figure 10), either the whole network latency or the hierarchically short-path latency can be shown.

Figure 11: Mobile sensor platform architecture [10]
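The command-TTL mechanism described above amounts to a simple latency amplification: a command looped N times experiences N round trips before execution. A minimal sketch (the function name and the example figures are illustrative, not measured values from the trial):

```python
def effective_latency_ms(rtt_ms: float, command_ttl: int) -> float:
    """Reaction latency experienced by the robot when a control
    command is looped `command_ttl` times between robot and
    controller before being interpreted (sketch of the command-TTL
    mechanism described in the text)."""
    return rtt_ms * command_ttl

# e.g. a 0.25 ms round trip looped 4 times emulates a 1 ms path,
# matching the 1 ms target latency of the servo-loop application
multi_hop_latency = effective_latency_ms(0.25, 4)
```

This is how a small single-hop testbed can emulate the latency impact of a multi-hop network path.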


3.2 NCSRD Demonstrator

3.2.1 Security Demonstration

3.2.1.1 Scenario motivation

The security attack selected to be demonstrated over the NCSRD infrastructure is a Distributed Denial of Service (DDoS) scenario, showing how the CHARISMA CMO platform components interact towards its identification and mitigation. The motivation behind the selection of the security scenario is two-fold:

First, the selected scenario requires an end-to-end integrated run of all security-related CHARISMA components, involving all major CMO components and demonstrating the developments of WP3. The scenario involves interactions between the following components: CHARISMA GUI, Open Access Manager (OAM), TeNOR NFVO, Service Policy Manager (SPM), Monitoring & Analytics (M&A) and Virtual Security Functions (VSFs).

The second motivation relates to the significance of DDoS attacks today. DDoS attacks are amongst the most common and most serious cyber-attacks seen in today's networks [18][19]. A DDoS attack disrupts operations and causes loss of reputation, productivity and/or revenue. The CHARISMA consortium has supported the selection of this specific attack scenario; in particular, the CHARISMA telecom operators, COSMOTE and Telekom Slovenije, have highlighted the importance of securing against DDoS attacks and provided evidence1 of such attacks towards their networks. COSMOTE has provided detailed information1 on DDoS attacks performed over their network and the networks of the Deutsche Telekom group, including a document with specific attack data and vectors: date, size and duration of each attack. Moreover, Telekom Slovenije has provided written confirmation that DDoS attacks are the most common security attack over Telekom Slovenije's networks.

3.2.1.2 Overview and advances to Y1 demonstration

The security demonstration in NCSRD provides advances over the first-year security demonstration of CHARISMA, previously described in detail in deliverable D4.1 [4]. Similarly to the first-year demonstration, the scenario presented involves a security attack that is identified and mitigated using 5G enabling technologies such as NFV and SDN. The main advances in comparison to the Y1 security demonstration are:

• A full cycle of interactions between the developed WP3 components is shown for the identification and mitigation of a security attack through automated security policy enforcement. Specifically, the OAM is used to allocate slices, TeNOR is used to deploy security services on the allocated slice, the SPM is used to define the security policies and recommended actions, and M&A assists the security attack identification decisions.

• A more sophisticated attack scenario is implemented, simulating real-world cyber-criminal activities. A botnet has been created by recruiting several bots using Command and Control (C&C) software. The botnet is used to perform several types of Distributed Denial of Service (DDoS) attacks. IP spoofing mechanisms are in place to further complicate identification of the compromised machines and detection of the attack.

3.2.1.3 Scenario description

3.2.1.3.1 Slice creation

We assume a scenario in which a Virtual Network Operator (VNO) A is interested in leasing a slice over the infrastructure belonging to the Infrastructure Provider (IP), to offer its services to its customers. As already described, through the CHARISMA GUI the Infrastructure Provider will create a slice that includes all the available resources of the NCSRD infrastructure (Figure 12) and assign this slice to VNO A. In the backend of the CHARISMA GUI a request is sent to the Open Access Manager (OAM) for the slice creation. The OAM in turn sends a request to each device of the infrastructure that is part of the slice and requests allocation of particular resources for the slice creation. Additionally, requests for resource allocation are sent to the SDN Controller instance running at NCSRD (OpenDaylight), which is responsible for interacting with all SDN-compatible devices in the NCSRD pilot. Specifically, the devices that receive the request are the OpenWRT router at CAL0, the SDN switch at CAL0, the OVS in the compute node at CAL0, both ends of the backhaul link, the SDN switch at CAL3, the OVS in the compute node at CAL3 and the OpenWRT router at CAL3.

1 Internal confidential reports

Figure 12: Slice creation over the NCSRD infrastructure

3.2.1.3.2 Service deployment

In this scenario, we assume that VNO A offers its customers a particular service (e.g. a web service, video streaming), running at the core network. IMU implementations at CAL0 and CAL3 allow the deployment of additional services on demand. In particular, the CAL0 implementation allows the VNO to deploy services closer to the edge, following the MEC paradigm. In the security scenario, the services we consider for deployment implement security functions, with the purpose of protecting the services offered by VNO A to its customers. Figure 13 provides an example of security service deployment through the CHARISMA GUI. For the needs of the security demonstration, we deploy the CHARISMA network service (NS) that is composed of two VNFs: the virtual Intrusion Detection System (vIDS) and the virtual Firewall (vFW). A request is sent from the CHARISMA GUI backend towards the TeNOR NFVO, requesting the deployment of the particular network service.


Figure 13: Security service deployment
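The deployment request sent from the GUI backend to the NFVO can be sketched as a small JSON payload. The structure below is purely illustrative: the endpoint path, field names and identifiers are our assumptions, not TeNOR's actual API:

```python
import json

def build_ns_deploy_request(ns_id: str, vnfs: list, tenant: str) -> str:
    """Build an illustrative JSON request body for deploying the
    CHARISMA network service (vIDS + vFW) via the NFVO.
    Field names are assumptions, not TeNOR's real schema."""
    request = {
        "ns_id": ns_id,       # network service descriptor ID
        "tenant": tenant,     # the slice owner, e.g. "VNO-A"
        "vnfs": vnfs,         # e.g. ["vIDS", "vFW"]
    }
    return json.dumps(request)
```

The GUI backend would then POST such a body to the NFVO, which instantiates the listed VNFs on the tenant's slice.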

After service deployment, all the virtual machines that are part of the deployed service are registered automatically for monitoring through a call from the CHARISMA GUI backend to the MA. A list of generic monitoring metrics (CPU utilization, RAM utilization, disk usage, byte-rate and packet-rate) is monitored by default. Moreover, a number of metrics that are specific to the deployed service are also monitored (e.g. IDS logs exported as metrics). Figure 14 shows an example of the MA (Prometheus) dashboard and the values being monitored for CPU utilization.

Figure 14: Time-series data in MA dashboard

When the desired metrics are registered in MA, two Grafana dashboards are created, illustrating the different graphs of the VNO’s services. Specifically, one dashboard displays the generic monitoring metrics (Figure 15) and the other dashboard displays the metrics that are specific to the deployed service (Figure 16).


Figure 15: Grafana dashboard showing the generic metrics

Figure 16: Grafana dashboard showing the service-specific metrics

3.2.1.3.3 Security policy definition

VNO A can define its security policies through three configurations: in the SPM, defining its security policies; in the MA, defining alerts; and in the virtual IDS instance, defining Snort rules that allow identification of patterns that signify security attacks. The appropriate configuration of all three elements allows a full integration cycle between the three components and their co-operation in the identification of a security attack.

The following figure (Figure 17) provides an example of the configuration applied to the virtual IDS system. The Snort rule below is used to define a threshold of packet rate per second above which traffic is considered a security attack (DoS/DDoS) towards the VNO service. This configuration is done directly through the Element Management System (EMS) offered by the IDS VSF.


Figure 17: Virtual IDS (Snort) policy configuration
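A rate-based Snort rule of the kind configured here can be generated programmatically. The sketch below uses Snort's standard `detection_filter` option to express a packets-per-second threshold; the concrete threshold, SID and message are placeholders, not the rule actually deployed in the trial:

```python
def make_rate_rule(proto: str, dst: str, pps: int, sid: int) -> str:
    """Build a Snort rule flagging traffic towards `dst` that exceeds
    `pps` packets per second (illustrative; the thresholds and rule
    options used in the field trial differ)."""
    return (
        f'alert {proto} any any -> {dst} any '
        f'(msg:"Possible DoS/DDoS attack"; '
        f'detection_filter: track by_dst, count {pps}, seconds 1; '
        f'sid:{sid};)'
    )

# e.g. alert when more than 1000 packets/s reach the VNO service
rule = make_rate_rule("tcp", "10.0.0.5", 1000, 1000001)
```

Such a rule would be pushed to the vIDS instance through its EMS, as described above.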

The configuration of the MA with monitoring alerts is made through the CHARISMA GUI. A monitoring alert consists of one or more conditions and thresholds which, when reached, trigger the alert. The following picture (Figure 18) depicts the view the infrastructure provider has of the defined alerts. The VNO can also define similar alerts.

Figure 18: View showing the defined alert rules

Additionally, through the CHARISMA GUI, the VNO (and the infrastructure provider too) has the ability to configure the SPM with security policies. The following picture (Figure 19) shows the view where the infrastructure provider defines or configures the resources to be monitored.


Figure 19: Resources to be monitored

From that moment onwards, the SPM will continuously evaluate whether the policy applies (not yet, since no alert has been received from the M&A module). This is reflected in the following figure:

Figure 20: Continuous policy evaluation
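The evaluation behaviour shown in Figure 20 can be summarized as: a policy remains inactive until an alert matching its condition arrives from M&A. A minimal structural sketch (illustrative only; the class and field names are ours, not the SPM's actual data model):

```python
# Sketch of the SPM evaluation cycle: a policy stays inactive until
# an alert matching its alarm condition is received from the M&A
# module (illustrative structure, not the SPM's real code).

class Policy:
    def __init__(self, alarm_name: str, action: str):
        self.alarm_name = alarm_name  # alarm the policy reacts to
        self.action = action          # recommended mitigation action

def evaluate(policies: list, received_alerts: list) -> list:
    """Return the actions of all policies triggered by the alerts
    received so far; empty while no matching alarm has arrived."""
    alert_names = {a["name"] for a in received_alerts}
    return [p.action for p in policies if p.alarm_name in alert_names]

policy = Policy("DDoS attack", "configure_firewall")
```

Before the attack, `evaluate([policy], [])` returns no actions, matching the "policy does not apply yet" state in Figure 20.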

3.2.1.3.4 Attack identification

For the Y2 security demonstration, we have implemented a more complex scenario for the Distributed Denial of Service (DDoS) attack. We have developed an application, named Kratos, that implements Command & Control (C&C) software. The developed application includes the malicious software that runs inside the compromised systems (bots), the C&C backend source code (botmaster) that is used to communicate with the bots, and a C&C dashboard that provides the status of the bots and allows their control for executing security attacks. Figure 21 provides an overview of the Kratos dashboard.


Figure 21: Kratos C&C dashboard

A number of flooding attacks (TCP, UDP and ICMP) can be initiated, with or without IP spoofing mechanisms, towards a specific target. Additionally, SQL injection attacks towards a victim server can also be executed through the Kratos dashboard. During an attack, Kratos provides the status of the bots and also their logs (terminals), providing details of the attack taking place. For example, Figure 22 illustrates a TCP flooding attack and the information displayed in the Kratos dashboard.

Figure 22: TCP flooding attack results in Kratos app

The virtual IDS identifies the attack and exposes the attack notification as a metric to the MA. The following figure shows the MA (Prometheus) dashboard, displaying a metric named "DDoS attack" that is exposed by the virtual IDS. The value of the metric equals 1, which means that at the current moment the IDS is detecting a DDoS attack. Details of the attack are displayed in the metric labels.


Figure 23: MA dashboard with metric originating from IDS
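A metric of this kind is scraped by Prometheus in its text exposition format: a gauge set to 1 while an attack is detected, with attack details carried in labels. The sketch below renders that format; the metric name `ddos_attack` and the label names are illustrative, not necessarily those used by the project's vIDS:

```python
def ddos_metric(active: bool, labels: dict) -> str:
    """Render a DDoS-detection gauge in the Prometheus text
    exposition format, as scraped by the M&A module (metric and
    label names are illustrative)."""
    label_str = ",".join(f'{k}="{v}"' for k, v in labels.items())
    value = 1 if active else 0
    return (
        "# TYPE ddos_attack gauge\n"
        f"ddos_attack{{{label_str}}} {value}"
    )

# while the IDS detects an attack, the gauge reads 1
sample = ddos_metric(True, {"proto": "tcp", "target": "10.0.0.5"})
```

M&A can then evaluate alert conditions against this time series, exactly as described for the alert rules above.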

When all the conditions defined in the alert rule in the MA are reached, a notification of the attack is shown in the Grafana dashboard that displays the metrics related to the security service. Additionally, as illustrated in Figure 24, historical data of the alert notifications are shown.

Figure 24: Grafana dashboard when the attack is identified

The MA notifies SPM with an alarm when the defined condition is reached. The following figure shows the logs generated by the SPM when such an alert notification is received:


Figure 25: Alert received from the M&A module

The SPM checks the policy descriptor to determine whether an action has been defined for the specific alarm. In this case, the policy applies, and the SPM evaluates the received data and identifies the attack (Figure 26).


Figure 26: Detecting applicable policy

Based on the defined policy and the action specified, the SPM contacts the Security Manager to enforce the appropriate actions. In this scenario, the action to be taken is the configuration of the virtual firewall (IPs to be blocked). The following figure shows the request sent to the Security Manager for the firewall configuration:

Figure 27: Recommending next best action to the Security Manager

Next, the Security Manager uses the interfaces exposed by the firewall VSF to add new rules blocking the traffic sent from the attackers' IP addresses. Figure 28 shows the GUI of the virtual firewall after this interaction is finalized. The firewall GUI shows the list of already applied rules, blocking the attackers and thus mitigating the DDoS attack.


Figure 28: Firewall configuration with attacker's IP addresses
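The mitigation step amounts to translating the list of attacker IPs into per-source drop rules at the vFW. A minimal sketch, rendering iptables-style rule strings (illustrative of the rules visible in Figure 28; in the trial the Security Manager drives the vFW through its exposed API rather than raw rule strings):

```python
def block_rules(attacker_ips: list) -> list:
    """Produce one drop rule per attacker IP, of the kind the
    Security Manager pushes to the virtual firewall (iptables-style
    strings; illustrative only)."""
    return [f"-A INPUT -s {ip} -j DROP" for ip in attacker_ips]

# block the bots identified by the vIDS / SPM evaluation
rules = block_rules(["10.0.0.9", "10.0.0.17"])
```

Once these rules are installed, traffic from the listed sources is dropped at the vFW, mitigating the DDoS attack.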

3.2.2 Multi-Tenancy Demonstration

3.2.2.1 Overview

The purpose of the multi-tenancy demonstration is first to showcase the flexibility introduced by the CHARISMA architecture in deploying network services based on the principles of virtualization and programmability (Objective-1). This corresponds to the creation of network slices, defined as the set of network, compute and storage resources dedicated to the support of a particular network service. Traffic and resource isolation then becomes of particular importance, since the dedication of resources to a network slice enables the co-location of multiple network slices on top of the same physical infrastructure, i.e., multi-tenancy. The multi-tenancy demonstration therefore aims to further show the ability of CHARISMA to support multi-tenancy and achieve traffic and resource isolation (Objective-2). However, CHARISMA takes a step further in additionally proposing, and subsequently demonstrating, the potential benefits of tenant co-location, in the form of controlled sharing of resources for the establishment of a co-operation with mutual benefit (D3.4 [3]). The purpose of the multi-tenancy demonstration thus further includes the ability to enable this co-operation, while controlling the traffic and resource isolation between the multiple tenants of the infrastructure (Objective-3).

The demonstration has a particular focus on the inter-domain peering of virtualized caches. The overall demonstration comprises the following phases, further detailed in the following subsections:

• Phase-1: Single-tenant service deployment (slice creation) (Section 3.2.2.2)
This phase simply showcases the baseline setup of a single tenant, including the allocation of network resources and the instantiation of a transparent virtualized caching service (D3.4 [3]) on top of the NCSRD testbed. The setup enables the demonstration of the single-tenant network service, including caching benefits.

• Phase-2: Multi-tenant service deployment (Section 3.2.2.3)
This phase includes the creation of a second tenant on top of the same physical infrastructure, i.e., the NCSRD testbed. The two tenants are completely isolated in this phase, showcasing baseline resource and traffic isolation.

• Phase-3: Cross-tenant communication (Section 3.2.2.4)
This is the main phase of the demonstration, which establishes the communication channel between the two co-located tenants. The setup enables the demonstration of both the expected performance-related benefits, stemming from the co-operation of the two tenants, and the impact of this co-operation on traffic and resource isolation.

In addition to the aforementioned three phases, the multi-tenancy demonstration environment in NCSRD includes the following elements, aiming to shed light on the particular features of the CHARISMA architecture that relate to resource isolation and the reduction of latency:

• Wireless Backhaul traffic policing (Section 3.2.2.5)

• Intelligent traffic handling for vCaching (Section 3.2.2.6)

As described in D3.4 [3], this feature allows specific traffic flows to bypass the vCache VNFs with the purpose of avoiding unnecessary delays, incurred by the virtualization environment, when a cache miss is expected. Though not directly linked to the multi-tenancy character of the demonstration, this feature is linked to the vCaching functionality of the CHARISMA prototype and has been put to the test in the NCSRD testbed so as to allow a unified testing environment.
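The bypass decision can be sketched as a membership test against an index of currently cached content: a flow whose requested object is not in the index is expected to miss and is therefore steered around the vCache VNF. This is a minimal illustration under that assumption (the prototype's actual decision logic is described in D3.4; the names here are ours):

```python
def should_bypass_vcache(requested_url: str, cached_urls: set) -> bool:
    """Bypass the vCache VNF when a cache miss is expected, avoiding
    the extra delay of the virtualization layer (illustrative sketch
    of the intelligent traffic handling feature)."""
    return requested_url not in cached_urls

# only the request for already-cached content is steered through the vCache
index = {"/videos/clip1.mp4"}
```

Flows classified as expected misses go straight towards the content server, while expected hits still benefit from the cache.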

3.2.2.2 Phase-1: Single-tenant service deployment (slice creation)

The first phase of the multi-tenancy demonstration focuses on the instantiation of a single tenant of the infrastructure. The instantiated service includes an end-to-end network slice that includes all segments of the NCSRD testbed, namely:

• WiFi access: A separate Virtual SSID is created for the tenant. All client devices (hosts) served by this tenant associate to this SSID. The Access Points tag the corresponding traffic with a tenant-specific VLAN ID.

• SDN Network Switches: The switches of the infrastructure are configured with flow rules associating the network service of the particular tenant to specific forwarding decisions. The tenant-specific VLAN ID value introduced by APs is used to identify the traffic. In this baseline setup, forwarding rules redirect all traffic towards the OpenStack compute node (i.e., transparent caching) and in particular the vCache VNF instantiated there.

• NFV Environment: In the simple baseline vCaching scenario, two VNF instances are created for a single tenant i.e., vCache and vCC.

• SDN Wireless Backhaul: Following the SDN primitives, the wireless backhaul equipment is configured in a similar manner to the SDN network switches (see above).

Details on the integration/configuration of the above segments have been provided in D4.2 [5].
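The per-tenant forwarding configuration described above can be sketched as a mapping from the tenant's VLAN ID to OpenFlow-style flow rules redirecting tagged traffic towards the compute node hosting the vCache. The dictionary field names below are illustrative, not the exact rule format pushed by the SDN controller in the testbed:

```python
def slice_flow_rules(tenant_vlan: int, compute_port: int) -> list:
    """Sketch of the forwarding rules installed on the SDN switches
    for one tenant slice: traffic tagged with the tenant's VLAN ID
    (set by the APs) is redirected towards the switch port facing the
    OpenStack compute node (field names are illustrative)."""
    return [
        {
            "match": {"dl_vlan": tenant_vlan},
            "actions": [{"output": compute_port}],
        },
    ]

# e.g. tenant tagged with VLAN 100, compute node reachable on port 3
rules = slice_flow_rules(100, 3)
```

Repeating this per tenant (with a distinct VLAN ID each time) is what makes the Phase-2 multi-tenant deployment a straightforward extension of Phase-1.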

The creation of a single tenant slice is performed through the CHARISMA GUI, which interfaces the CHARISMA CMO architecture to orchestrate the allocation and configuration of the desired network, compute and storage resources. Of particular importance is the integration of the individual network, compute and storage resource allocations into one functional slice. To this end, the CHARISMA GUI allows firstly the individual allocation of network resources, signified via, e.g., a VLAN ID definition, followed by the allocation of compute and storage resources in the form of instantiated VNFs. The integration of the allocated resources is simplified by the CHARISMA CMO by allowing the attachment of a VNF to a particular network (slice). Once all resources are integrated into the network slice, the instantiated (virtual) tenant of the infrastructure is able to support communications for the end devices at the access network. In the context of transparent caching, this is equivalent to enabling transparent caching for flows traversing the slice towards a content server. For the needs of our measurements and demonstration activities, a content (HTTP) server has been instantiated within the NCSRD testbed (Figure 4).

3.2.2.3 Phase-2: Multi-tenant service deployment

Given the above single-tenant functionality, the support for multi-tenancy becomes trivial, provided the availability of network, compute and storage resources. The exact same procedures are followed as in the case of the single-tenant creation, to instantiate a second network slice, also supporting transparent caching. At the end of the repeated network slice creation procedure, two similar network slices are available. However, no communication is allowed across the two instantiated networks, including the instantiated vCache VNFs.

3.2.2.4 Phase-3: Cross-tenant communication

In the next phase, the CHARISMA GUI is employed to enable the establishment of a well-controlled and secure communication channel between the two instantiated vCache VNFs, with the purpose of supporting cache peering, i.e. the exchange of content between the two tenants. As described in D3.4 [3], this is accomplished by first establishing a secure shared network for the intended communication. The CHARISMA GUI provides this capability within the operational environment of one of the two tenants created in the previous phases, as shown in the example figure below.

Figure 29: Create new Shared Network

Upon completion of this step, the created network exists only logically within the underlying infrastructure, since no VNF or any other host is allowed to access it. The tenant that owns the shared network can then proceed to grant access to another tenant of the infrastructure. This is illustrated in the following example figure.


Figure 30: Granting access to a shared network

Once this step is completed, both tenants have rights to access the shared network, but still no VNF is attached to it. This is accomplished in the next step, as illustrated in the following example figure. This step takes place for both tenants, effectively attaching the corresponding vCache VNFs to the shared network.

Figure 31: Attaching to a shared network

As previously described in D3.4 [3], at this point both created tenants own vCache VNFs that have access to the newly created shared network. Note that no other VNF/host is allowed to access the network, thereby contributing to traffic and resource isolation. In order for vCache peering to be realized, the next and final step is to configure the vCache VNFs at the application level, i.e. to allow the Squid cache instances to establish peering links so as to direct their cache misses to their counterparts. This is accomplished by running custom scripts within the employed VNFs, as illustrated in Figure 32.

Figure 32: Running a script on a VNF

In the particular context of the vCache peering demonstration, this feature of the CHARISMA GUI allows communication with the vCC instances, which are then responsible for configuring the corresponding vCache VNFs at each side of the peering link through the available NETCONF interface. The CHARISMA GUI allows the selection of the appropriate script, as well as the insertion of a (dynamic) set of parameters.

3.2.2.5 Wireless Backhaul network slicing and traffic policing

As described in previous deliverables, e.g. D3.4 [3], in order to provide multi-tenancy in the wireless backhaul context, we defined the virtual slice of each VNO to correspond to an EVC (Ethernet Virtual Connection). Each EVC is defined by an S-VLAN ID that provides the required slice isolation. The C-VLANs associated with an S-VLAN ID represent the different customers of each VNO. Furthermore, each EVC can correspond to a different Service Level Agreement, e.g. a different set of QoS parameters.

In our lab testbed setup the backhaul devices become visible through the OAM GUI after successfully connecting the OAM to the SDN controller through RESTCONF.


Figure 33: CHARISMA OAM GUI showing wireless backhaul as sliceable resource

In order to subsequently create the backhaul slicing service, the InfP (via the Open Access Manager) sends a request to the SDN controller’s REST API, describing the switches (Node field) and the ports (port and UNI) where the services are to be deployed, as well as the VNO id (EVC id), the corresponding S-VLAN field and the C-VLANs (CE VLAN field) per VNO. We have the capability to specify a range of C-VLANs (e.g. 10-20) or a set of C-VLANs (e.g. 10, 12, 18) that would belong to the same VNO.
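As an illustrative sketch, the slicing request described above can be assembled as follows. This is not the actual OAM code: the field names (node, port, uni, evc-id, s-vlan, ce-vlans) are assumptions that simply mirror the fields listed in the text.

```python
# Sketch of the backhaul-slicing request body sent by the OAM to the SDN
# controller's REST API. Field names are hypothetical, mirroring the
# Node/port/UNI/EVC-id/S-VLAN/CE-VLAN fields described in the text.

def build_evc_request(node, port, uni, evc_id, s_vlan, ce_vlans):
    """Build the JSON body for one VNO slice (EVC) on a backhaul switch.

    ce_vlans may be a range string (e.g. "10-20") or an explicit
    collection (e.g. [10, 12, 18]), as described above.
    """
    if isinstance(ce_vlans, str):                  # range form, e.g. "10-20"
        lo, hi = (int(x) for x in ce_vlans.split("-"))
        ce_list = list(range(lo, hi + 1))
    else:                                          # explicit set form
        ce_list = sorted(ce_vlans)
    return {
        "node": node,          # switch where the service is deployed
        "port": port,
        "uni": uni,
        "evc-id": evc_id,      # identifies the VNO slice
        "s-vlan": s_vlan,      # provides slice isolation
        "ce-vlans": ce_list,   # customers of this VNO
    }

# Example: one VNO as EVC 1, S-VLAN 100, customers on C-VLANs 10-20
payload = build_evc_request("openflow:1", 2, "uni-1", 1, 100, "10-20")
```

The resulting payload would then be posted to the controller's REST API (e.g. with an HTTP client such as `requests`).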

Figure 34: SDN backhaul configuration through the OAM


In order to fulfil the increased requirements for traffic differentiation when multiple VNOs coexist on the same infrastructure, a Hierarchical Quality of Service (H-QoS) scheme is proposed. The key requirement is the granularity of the packet classifier, which must be able to classify along three degrees of freedom, i.e. three different packet fields. In our case the S-VLAN is the QoS identifier for the VNO, the C-VLAN is the identifier for the customer, and the C-VLAN PCP is the identifier for the Class of Service of the customer, as depicted in Figure 35.

Figure 35: H-QoS in SDN backhaul
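The three-level classification described above can be sketched as follows; the slice table and tuple layout are purely illustrative, not the actual switch data path.

```python
# Hypothetical slice table: S-VLAN -> VNO, with the C-VLANs provisioned
# for that VNO. Values are examples only.
SLICES = {
    100: {"vno": "VNO A", "c_vlans": {10, 11, 12}},
    200: {"vno": "VNO B", "c_vlans": {20, 21}},
}

def hqos_classify(s_vlan, c_vlan, pcp):
    """Map the three QinQ header fields to the H-QoS hierarchy:
    S-VLAN -> VNO, C-VLAN -> customer, C-VLAN PCP -> class of service.
    Returns None if the frame does not belong to a provisioned slice."""
    slice_ = SLICES.get(s_vlan)
    if slice_ is None or c_vlan not in slice_["c_vlans"]:
        return None            # not provisioned: no H-QoS treatment
    return (slice_["vno"], c_vlan, pcp)
```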

In order to provide policing functionality for each VNO, the InfP (via the Open Access Manager) sends a request to the SDN controller's REST API defining (for a specific switch) a meter id, a meter-band drop rate, and a burst size. The controller then sends a meter modification message (METER_MOD) and installs the meter on the appropriate switch. Figure 36 shows a Wireshark capture of the meter modification message (meter id 2), which drops packets above the 200 Mbps threshold.

Figure 36: Example of OpenFlow meter format
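A hedged sketch of the meter definition captured in Figure 36 (meter id 2, dropping traffic above 200 Mbps). The JSON layout loosely follows the shape of OpenDaylight's flow-node-inventory meter model, but field names differ between controller versions, so this should be treated as illustrative only.

```python
# Build a drop-type OpenFlow meter payload (illustrative field names).
def build_drop_meter(meter_id, rate_mbps, burst_kb):
    rate_kbps = rate_mbps * 1000          # OpenFlow meter bands use kbit/s
    return {
        "meter": [{
            "meter-id": meter_id,
            "flags": "meter-kbps",
            "meter-band-headers": {
                "meter-band-header": [{
                    "band-id": 0,
                    "band-rate": rate_kbps,    # drop above this rate
                    "band-burst-size": burst_kb,
                    "meter-band-types": {"flags": "ofpmbt-drop"},
                }]
            },
        }]
    }

# Meter id 2 with a 200 Mbps policing threshold, as in Figure 36
meter2 = build_drop_meter(2, 200, 512)
```

The controller translates such a definition into the METER_MOD message shown in the Wireshark capture.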

When an EVPL service is now deployed, the flow modification messages carry an additional field in the instructions part that directs packets to a specific meter via the goto-meter instruction, as shown in Table 4. Multiple customer flows can send packets to the same meter, or can direct packets to different meters; this depends on the requirements of the EVPL service communicated by the InfP, as well as on scalability constraints. In our case, for simplicity, and in order to provide traffic metering tailored to the needs of each VNO, the group of C-VLANs associated with each VNO is forwarded to a specific meter.


Table 4: OpenFlow rules regarding metering functionality

MATCH: IN PORT | C-VLAN

INSTRUCTIONS: GO-TO METER ID | PUSH S-VLAN | OUTPUT
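The rule layout of Table 4 can be sketched as a single hypothetical flow entry combining the goto-meter, push S-VLAN, and output instructions; the field names are illustrative rather than a literal controller schema.

```python
# Illustrative flow entry mirroring Table 4: match on ingress port and
# customer C-VLAN, police via the VNO's meter, tag with the slice S-VLAN
# and forward. Field names are assumptions, not an actual OpenFlow schema.

def build_evpl_flow(in_port, c_vlan, meter_id, s_vlan, out_port):
    return {
        "match": {"in-port": in_port, "vlan-id": c_vlan},
        "instructions": [
            {"goto-meter": meter_id},   # per-VNO policing (cf. Figure 36)
            {"push-vlan": s_vlan},      # S-VLAN identifies the slice
            {"output": out_port},
        ],
    }

# Example: customer C-VLAN 10 on port 1, metered by meter 2, slice S-VLAN 100
flow = build_evpl_flow(1, 10, 2, 100, 3)
```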

In Figure 37 below, we showcase the policing functionality that was demonstrated at the NCSRD lab. For simplicity, we take into consideration only one customer (C-VLAN) for VNO A and one customer for VNO B. OpenFlow rules (Table 4) were installed on each backhaul switch, forwarding traffic to the appropriate OpenFlow meter (Figure 36).

Figure 37: Traffic policing using OpenFlow meters at NCSRD demonstration


Figure 38: Part of NCSRD testbed

Figure 39: Backhaul devices at NCSRD testbed connected through RF cable

For the ingress backhaul traffic from the Wi-Fi, traffic policing was enforced, restricting the traffic of VNO 1 (VNO A) to a maximum of 20 Mbps and that of VNO 2 (VNO B) to 80 Mbps. Traffic was generated by a smartphone for VNO A at 50 Mbps and by a laptop for VNO B at 100 Mbps, both running iPerf clients (Figure 40 and Figure 41). iPerf servers were installed on dedicated VNFs for each VNO, and the traffic policing results were monitored. As can be seen in Figure 42, the traffic rate of the VNO 1 customer was reduced from 50 to 20 Mbps, and that of the VNO 2 customer from 100 to 80 Mbps.


Figure 40: Backhaul slicing using OpenFlow meters

Figure 41: iPerf Clients

Figure 42: iPerf Servers

3.2.2.6 Intelligent traffic handling for vCaching

The intelligent traffic-handling feature of CHARISMA aims to provide VNOs with the capability of intelligently selecting the flows that should bypass the established vCaches, with the purpose of avoiding unnecessary virtualization overheads (as shown also in Section 5.3.2.4). In essence, this feature gives VNOs the flexibility for fine-grained management of their flows. In this context, the NCSRD demonstration environment aims: (i) to show the orchestration capabilities for intelligent traffic handling, as an offering to individual VNOs, rather than to put emphasis on identifying the exact rules for some particular workload; and (ii) to provide the operational environment for quantifying the associated performance gains.

Support for the intelligent traffic-handling concept necessitates the establishment of an interface between a network tenant/VNO and the CMO for the delivery of the traffic/flow rules that enable certain flows to bypass the available vCache VNF (see Section 3.3.5.1 in D3.4 [3]). Within the realization of the CHARISMA architecture at NCSRD, this translates to the communication of these rules by the vCC (or vCache) VNF to the Open Access Manager (OAM) component, which subsequently configures the programmable network infrastructure, i.e. the SDN switch interconnecting the NFV PoP (where the vCache VNFs reside), through the underlying OpenDaylight (SDN) Controller instance.

Towards this end, a polling interface has been established on the OAM side, allowing the OAM to periodically query the vCC or the vCache for any available intelligent traffic-handling rules. The following figure illustrates this communication. Initially, an instantiated vCC or vCache notifies the OAM about its ability to support the intelligent traffic-handling feature, further specifying the desired rule-polling interval. This triggers the periodic polling of the IP destinations that must be added to or removed from the set of forwarding rules in the underlying switch. The rules are delivered to the ODL controller, which configures them on the infrastructure.

Figure 43: Retrieval of Intelligent Traffic Handling rules
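The polling behaviour described above can be sketched as follows. The REST call to the vCC/vCache is abstracted away (it is a hypothetical stand-in); only the derivation of the add/remove rule sets between two consecutive polls is shown.

```python
# OAM-side sketch: at each polling interval, compare the bypass
# destinations reported by the vCC/vCache against the previously
# installed set, and derive which forwarding rules to add or remove
# (these would then be pushed to the switch via the ODL controller).

def diff_rules(previous, current):
    """Return (to_add, to_remove) sets of IP destinations between polls."""
    prev, cur = set(previous), set(current)
    return cur - prev, prev - cur

# One polling step: two new bypass destinations appear, none withdrawn
to_add, to_remove = diff_rules(
    {"10.0.0.5"},
    {"10.0.0.5", "10.0.0.7", "10.0.0.9"},
)
```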

3.3 APFutura Field Trial

This chapter covers the three demonstrations carried out in the APFutura testbed. We use these demos to showcase two of the three main features of the CHARISMA project (low latency, open access, and security). In particular, the APFutura field trial demonstrates low latency, open access, and one of the important functions in 5G: service availability/reliability.

3.3.1 Low Latency Demonstration

The APFutura field trial includes two CHARISMA devices for latency reduction: the TrustNode and the SmartNIC. The TrustNode router uses hierarchical routing and provides 6tree technology to reduce the end-to-end latency. The SmartNIC, installed in one of the servers, can reduce power consumption and latency by off-loading the server CPU and processing all of the traffic on the SmartNIC itself.

We have used an already-deployed NGPON2 fibre network, with its OLT and ONTs, and have introduced the CHARISMA devices in order to obtain realistic latency measurements. For these measurements, iPerf and jPerf have been used; the only difference between them is that jPerf is Java-based and produces graphs, which makes the results very easy to compare.

This demo focuses on demonstrating the latency reduction by comparing results taken in two scenarios: (i) using the CHARISMA devices and (ii) using regular devices. In the second scenario, the TrustNode router has been replaced by a regular router, and the server's integrated NIC has been used instead of the SmartNIC.

To make the test as realistic as possible, we use the network to demonstrate traffic isolation between different tenant slices while the server’s CPU usage increases due to sharing its load among slices.

Figure 44: APFutura Low Latency Demo Setup

In Figure 44 we can see the different paths for the regular latency and for the low latency, and how the CHARISMA devices achieve lower latency by reducing the CPU traffic-handling load. This demo is supported by the internal software of the TrustNode, which accelerates the routing of all packets and provides very fast processing. The SmartNIC avoids the extra involvement of the Linux kernel in the handling of received packets, reducing the latency further.

3.3.2 Multi-Tenancy Demonstration

Similar to the Telekom Slovenije field trial, the APFutura field trial has an OLT in which the traffic can be sliced. In order to slice the traffic, the OLT uses a different VLAN (Virtual Local Area Network) tag for each ONT's traffic. It is possible to assign different VLANs to different ports on the same ONT, thus allowing the same ONT to be shared among different VNOs (Virtual Network Operators).

VLANs travel from the ONT to the interconnection router, which can add and remove VLAN tags to send the traffic to the Internet.


Figure 45: APFutura Field Trial Slicing

3.3.3 BUS Use Case Demonstration

The Bus use case assumes that a group of passengers is commuting by metro or bus. The MoBcache (MB), which can be installed at bus stations, on the bus, or even carried by the users, automatically creates an ad-hoc network ("mobile CDN") that ensures continuous use of content even when disconnected. A vCache deployed in CAL3/CAL2 cooperates with the MoBcache at the edge to overcome the bottleneck towards the content server by caching/prefetching the requested content. Content is then intelligently cached or prefetched according to user interests (derived from actual content usage), ensuring continuity of service for the user.

The bus use case focuses on service availability/reliability, demonstrating how the low-latency, caching, and multi-tenancy improvements have a positive impact on the overall performance. In this use case, we use a van that simulates a bus, and several ONTs deployed at different locations to simulate the different bus stations.

The scenario of this demonstration involves two bus stations, with a MoBcache connected to the ONT at each bus station. Another MoBcache is installed inside the van; the MoBcache device installed in the bus is responsible for managing the traffic that the users request and for connecting to the MoBcaches installed at each bus station. WiFi is used for the connections between MoBcaches.

When the bus is outside of the WiFi coverage, the MoBcache inside the bus uses a 4G connection to continue offering the content.

In this case the user is jumping from one network to another, so to control the video prefetching and requests, the MoBcache devices use two different VNFs allocated in different CALs: the vCC (virtual Caching Controller) allocated in CAL3 and the vCache (virtual caching) allocated in CAL2.

The following Figure 46 describes the field trial setup scenario.


Figure 46: APFutura Field Trial Bus Use Case Scenario

3.4 Fronthaul testing

3.4.1 Introduction

In order to test the synchronisation features of a fronthaul system using a new functional split, such a system has been set up at the Fraunhofer HHI labs in Berlin. It consists of a data source (protocol analyser or video server), a centralised unit2, a SyncE-aware Ethernet switch, two distributed units (DUs) each including a mmW transmitter, a mobile end-user device including a mmW receiver, and a data sink (protocol analyser or video client). The mmW transmitter and receiver circuits have been developed by UEssex. The overall setup is depicted in Figure 12. A detailed description of the CU and DU can be found in Figure 47.

2 Parts of the “FPGA code” for CU and DU have been developed in the iCIRRUS project by Fraunhofer HHI. Nevertheless, this configuration and especially the SyncE features have been developed in CHARISMA.


Figure 47: Fronthaul test: experimental setup

3.4.2 Frequency synchronization

In a first test, the synchronisation functions of the CU, the SyncE-aware switch, and the DU have been investigated. For this purpose, the linear configuration depicted in Figure 48 has been used. Here, the protocol analyser (IXIA) operates as the "master clock", from which the clocks of the following devices/nodes are derived.

Figure 48: Clock distribution in fronthaul system

The clock frequency has been measured by means of an electrical spectrum analyser (ESA). First, the reference clock output of the IXIA Ethernet analyser has been investigated. Initially, the clock frequency had an offset of ~160 Hz from the nominal frequency of 156.25 MHz. This offset can be reduced by slightly changing the IXIA clock via the GUI. An offset setting of 1 ppm gave the best match between the nominal frequency and the actual clock; in this case, the offset was reduced to ~40 Hz.
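As a quick arithmetic check of the offsets quoted above (the offset values are from the text; the computation itself is generic), 160 Hz at a nominal 156.25 MHz corresponds to almost exactly 1 ppm, which explains why the integer 1 ppm correction in the IXIA GUI gives the best match:

```python
# Relative clock offset in ppm for a nominal 156.25 MHz reference clock.
NOMINAL_HZ = 156.25e6

def offset_ppm(offset_hz):
    """Convert an absolute frequency offset (Hz) to parts per million."""
    return offset_hz / NOMINAL_HZ * 1e6

initial = offset_ppm(160)   # offset before correction: 1.024 ppm
residual = offset_ppm(40)   # offset after the 1 ppm correction: 0.256 ppm
```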


Figure 49: Frequency offset at IXIA

Second, the clock frequency at the DU reference output has been observed. Initially, the DU was not connected to the switch; the DU clock was therefore generated locally, and an offset of about 800 Hz was observed (blue curve, Figure 50). Next, the DU was connected to the switch (as described in Figure 48), and its clock was derived from the free-running clock of the switch, whose SyncE function was turned off. In this case the clock offset was reduced to about 200 Hz (red curve, Figure 50). Afterwards, the SyncE function of the switch was turned on. The clock of the switch was then derived from the incoming data generated by the IXIA and passed on by the CU. The reference clock offset was further reduced to about 100 Hz (yellow curve, Figure 50).

Figure 50: Frequency offset at DU


This offset could be minimized by tuning the master clock of the IXIA by hand. When an offset of 1 ppm was set via the GUI (only integer ppm values are allowed), the derived clock offset at the DU was about 8 Hz (8×10⁻⁹).

Figure 51: Frequency offset of 8Hz for reference clock at DU, when master clock from IXIA is passed via CU and Switch to DU

With this setup the clock propagation from the IXIA (representing a master clock), via the CU and Ethernet switch to the DU could be demonstrated.

3.4.3 Latency measurements

The latency of the entire link, as depicted in Figure 47, has been measured for two workloads and for Ethernet frames of different lengths. In the first case, at a load of 1 Gb/s, the fronthaul link is not saturated. The maximum (gross) data rate at the physical layer is 2.5 Gb/s for the mmW signal. A more detailed evaluation of the fronthaul system can be found in e.g. [17].

Figure 52: Latency vs. Frame length (one way – entire path)

These results show again that for small frame sizes the latency is higher compared to larger frames. This can be explained by the low-latency mode of the proprietary fronthaul system: fronthaul frames are sent to the DU immediately after the Ethernet frames arrive in the input queue of the CU. As a consequence, the fronthaul link is saturated quite easily when only 2000-byte frames are used.

The second observation is that for a higher load, or more specifically in an overload situation, the latency is also larger. This can be explained by the fact that the queues at the DU and CU are filled to the maximum possible level, which in turn is caused by the combination of proprietary and standard Ethernet flow control used to avoid overload.
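As a generic back-of-envelope illustration (simple arithmetic, not a measured value from the trial), the serialization delay per frame at the 2.5 Gb/s gross rate can be computed as:

```python
# Time to put one Ethernet frame on the wire at a given line rate.
def serialization_us(frame_bytes, rate_gbps):
    """Serialization delay of one frame, in microseconds."""
    return frame_bytes * 8 / (rate_gbps * 1e9) * 1e6

t_small = serialization_us(64, 2.5)      # minimal frame: 0.2048 us
t_large = serialization_us(2000, 2.5)    # 2000-byte frame: 6.4 us
```

This illustrates why a stream of 2000-byte frames occupies the 2.5 Gb/s fronthaul link for much longer per frame than small frames do, so the link saturates quickly under such a workload.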

3.4.4 Application testing

In another test, the fronthaul system was used to transmit a video stream in the downstream direction from a video server to a client application connected to the end-user device. While the video was transmitted, the user device was connected to two different DUs. The fronthaul data was duplicated at the SyncE switch between the CU and the two DUs. The block diagram of this setup is also shown in Figure 48.

The following Figure 53, Figure 54, and Figure 55 depict the CU, the DU, and the UE device, together with the mmW boards and antenna, during the fronthaul lab setup.

Figure 53: CU FPGA board with a 10G fibre link to IXIA and a 10G copper link to the switch

Figure 54: Transmitter board (UEssex) with horn antenna for 60GHz fronthaul measurements and 1 DU board in background

Figure 55: Mobile rack with devices and components for the end user device

Figure 56: Screenshot of information displayed on the laptop connected to the end user device (see Figure 55)


The following series of pictures shows the sequence in which the UE is connected to the first DU, is in a transitional state, and is then connected to the second DU.

1. UE connected to the first DU: on the laptop, the video stream (right) and the received samples after the ADC (before DSP) can be seen.

2. UE in a transitional state: no radio signal (blue dot), the video stream has stopped.

3. UE connected to the second DU: the radio signal is visible and the video stream has restarted.

This sequence, in which the UE device receives data from two different DUs, was repeated with an omnidirectional antenna connected to the user device (Figure 57).

[Screenshot labels for the three states above: debug information, IQ radio signal after ADC, video stream, UE radio, DU radio #1 (states 1 and 2) / DU radio #2 (state 3)]


Figure 57: Omnidirectional antenna (connected to user device receiver)

1. UE connected to the 2nd DU: the laptop plays the video stream and shows the received samples after the ADC. The RF emission from the 1st DU is blocked.

2. UE in a transitional state: both radio streams are being received. The UE DSP cannot yet cope with multiple sources, so the video stream has stopped.

[Figure 57 labels: 60GHz omnidirectional antenna, 60GHz downconverter board, control board]


3. UE connected to the second DU: the radio signal is visible and the video stream has restarted.

These video transmissions worked without any visible packet loss only when the clock information was passed from the "clock master" (the IXIA) to both DUs. Conversely, when the DU clocks were no longer synchronised, by disabling the SyncE function in the Ethernet switch, the video transmission was erroneous and unstable.

3.4.5 Outlook

In this demonstration we have shown the importance of clock synchronisation in a fronthaul radio network using the Synchronous Ethernet paradigm. For a fully operational system, further synchronisation functions are required. The start of the radio frames needs to be synchronised in order to avoid interference when the UE device receives the same signal from two different DUs. This requires timing synchronisation, which can be performed by means of IEEE 1588v2. Currently, there are no IEEE 1588v2 IP cores for 10G Ethernet on the market; therefore, the implementation of PTP on the DU and CU was not considered. Nevertheless, these functions have been investigated in detail in the iCirrus project (e.g. [17]). In addition, synchronisation of the LO frequency is required to allow coherent reception of the radio signal. This requires very high accuracy (in the order of 10⁻¹¹), which cannot be provided by the current SyncE implementation and requires further research. HHI plans to investigate this issue with an industrial partner.

[Screenshot labels: antenna of 2nd DU blocked, UE receiver with omni antenna gets signal from 1st DU; video continues, data transmission]


4 Integration Testing

The following subchapters describe the integration tests produced in order to validate the step-by-step integration between the different resources/modules.

4.1 Telekom Slovenije Field Trial

The following subchapters describe the integration test produced to validate the step-by-step integration between the different software and hardware in the Telekom Slovenije field trial.

4.1.1 T1: Integration for security service deployment

This section describes the testing procedures related to the security service deployment.

4.1.1.1 T1.1 - OAM to deploy vIDS in CAL0

For testing the OAM's ability to deploy a security service in the field trial hosted at Telekom Slovenije, a set of tests has been produced. These tests verify the functionality whereby the OAM instructs OpenStack (through TeNOR) to deploy in CAL0 a network service consisting of an IDS VNF.

Table 5: TS - T1.1 - OAM Security service deployment in TS field trial

Test Description

Identifier: TS_T1_1_OAM__deploy_vIDS_in_CAL0_01

Test Purpose: Deploy a Service, in this particular case the Security Service at TS, containing a single vIDS VNF

Configuration:

Test sequence:

1 <stimulus> VNO deploys security service in CAL0
2 <stimulus> TeNOR notifies the OAM that the VNF has been deployed
3 <check> Check that the OVS from CAL0 has the right rules - Passed
4 <check> Check that the SDN switch connected to CAL0 has the appropriate rules - Passed
5 <check> Check that a VM has been instantiated with that particular id in OpenStack - Passed
6 <check> Check that the VM is connected to three networks: management network, slice network and VLAN pair network - Passed

Result: PASSED

4.1.1.2 T1.2 - SmartNIC configuration

The SmartNIC needs to be initialized at slice creation time. At that point, the OAM notifies the SmartNIC about the VLAN tag associated with the slice. Note that the security service is to be deployed over a slice where the SmartNIC has been previously configured.

Once the service has been deployed and an attack is identified, the M&A raises an alarm that is monitored by the SPM. The SPM contacts the Security Manager, which requests authorisation from the OAM to configure the SmartNIC so that it can drop packets based on an IP address that it receives as a parameter.
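The mitigation request described above might look as follows. This is a hypothetical sketch, since the actual OAM/SmartNIC API is not specified here; the payload fields are assumptions mirroring the text (the slice VLAN tag configured at slice creation time, and the source IP to be dropped).

```python
# Illustrative payload for the Security Manager -> OAM request that
# instructs the SmartNIC to drop packets from a given source IP on a
# given slice. Field names are assumptions, not the real API.

def build_drop_request(slice_vlan, src_ip):
    return {
        "action": "drop",
        "slice-vlan": slice_vlan,      # VLAN tag set at slice creation time
        "match": {"src-ip": src_ip},   # attacker IP received as a parameter
    }

# Example using an RFC 5737 documentation address
req = build_drop_request(301, "192.0.2.10")
```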


Table 6: T1.2 - OAM SmartNIC configuration for "drop packets" functionality test

Test Description

Identifier: TS_T1_2_SmartNIC package drop test

Test Purpose: SmartNIC client testing, verifying the packet-drop functionality of the SmartNIC

Configuration:

Test sequence:

1 <check> Check that traffic is flowing through the SmartNIC - Passed
2 <stimulus> Send a REST call to the OAM to instruct the SmartNIC to drop packets according to a source IP
3 <check> Check that the traffic from that particular IP is no longer flowing through the SmartNIC - Passed

Result: PASSED

4.1.1.3 T1.3 – OpenWRT configuration

Table 7: T1.3 - OAM to OpenWRT

Test Description

Identifier: OAM to OpenWRT router CAL0

Test Purpose: Verify that the OAM performs a correct configuration of the OpenWRT router.

Configuration:

Test sequence:

1 <stimulus> Configure the virtual OpenWRT
2 <check> Check that the virtual OpenWRT has been configured; to do so, call the OpenWRT API and check that the expected parameters (APN - access point name) are configured properly - 204, slice gets configured

Result: PASSED

4.1.2 T2: Integration for security policy definition

This section describes the integration testing related to the security policy definition.

4.1.2.1 T2.1: Security policy definition (CHARISMA GUI to SPM)

Table 8 shows the specification of the integration test for testing the security policy definition. The test verifies that the OAM is able to create a security policy in the SPM.

Table 8: T2.1 security policy definition

Test Description

Identifier: Request to create a security policy in SPM

Test Purpose: This test verifies that the OAM is able to create a security policy in the SPM for the purpose of the TS-related security test.

Configuration: Required entities: CHARISMA GUI and SPM server

Preconditions: The CHARISMA GUI and the SPM must share a valid network connection over the Internet (e.g. both able to return "ping" requests)

Test sequence:

1 <stimulus> CHARISMA GUI sends a CreateSecurityPolicy request to the SPM to create a new security policy, according to the interface description at https://confluence.i2cat.net/download/attachments/27135041/Policy%20Provisioning%20and%20Mgt%20-%20GUI%20CHARISMA%20V7%20S.pptx?version=1&modificationDate=1491295927000&api=v2
2 <check> CHARISMA GUI receives status code 201 and a SUPAPolicyObjectID from the SPM - Passed

Result: PASSED

4.1.2.2 T2.2: M&A alert definition (CHARISMA GUI to M&A)

Table 9 shows the specification of the integration test for testing the security monitoring & analytics module. The test verifies that the M&A is able to communicate with the SPM.

Table 9: T2.2: M&A alert definition (GUI to M&A)

Test Description

Identifier: Reception of properly built Alert Notification message

Test Purpose: This test verifies that the SPM is able to properly understand and process a well-formed Alert Notification message coming from the M&A module, whether a beginning or an end Alert Notification

Configuration: Required entities: M&A server and SPM server

Preconditions: The M&A and the SPM must share a valid network connection over the Internet (e.g. both able to return "ping" requests)

Test sequence:

1 <stimulus> M&A sends an Alert Notification message to the SPM using the previously established network connection.
2 <stimulus> M&A receives status code 201 and "alert_rule_id" from the SPM
3 <check> Verify that the SPM has properly stored the Alert in its internal structure - Passed

Result: PASSED

4.1.3 T3: Integration for attack identification and mitigation (T3.1 and T3.2)

The following test verifies the security attack identification and mitigation procedures. Traffic passing through the CAL0 OpenStack compute node is monitored, and the vIDS inspects it with the purpose of identifying potential deviations. Deviations are forwarded to the M&A, where an alarm is triggered and the policy is checked with the SPM. In the case where the policy is triggered, the Security Manager component is invoked to mitigate the threat via the SmartNIC.

4.1.3.1 T3.1 - Attack identification

This test verifies that vIDS is able to detect a potential traffic anomaly.

Table 10: T3.1 – Attack identification

Test Description

Identifier Attack identification

Test Purpose The purpose of the test is to check whether vIDS can detect the traffic anomaly and trigger the M&A

Configuration

Test sequence

Step Type Description Result

1 <stimulus> Normal traffic

2 <stimulus> Traffic deviation

3 <check> vIDS detects the anomaly and sends a notification to the M&A Passed


Result PASSED

Table 11: T3.2 – IDS Alert to M&A

Test Description

Identifier IDS Alert to M&A

Test Purpose The purpose of the test is to check whether the M&A is notified of a traffic anomaly and an alarm is raised

Configuration

Test sequence

Step Type Description Result

1 <stimulus> SPM receives traffic anomaly notification

2 <check> Based on the alert rule definitions, an alarm is raised and the SPM is queried

Passed

Result PASSED

Table 12: T3.3 – M&A alert to SPM

Test Description

Identifier M&A alert to SPM

Test Purpose The purpose of the test is to check whether the SPM is notified of a traffic anomaly and an alarm is raised

Configuration

Test sequence

Step Type Description Result

1 <stimulus> An alert rule is triggered in the M&A due to a traffic anomaly detected by the vIDS. The notification is forwarded to the SPM

2 <check> SPM receives the alarm from M&A Passed

Result PASSED

4.1.3.2 T3.2: Attack mitigation

The following tests in Table 13 relate to attack mitigation. In the case where the policy defined in the SPM is invoked, the SPM triggers the Security Manager, which further mitigates the threat by ordering the SmartNIC to block traffic originating from a specified IP address.

Table 13: T3.2 – Attack mitigation – SPM policy lookup

Test Description

Identifier Attack mitigation – SPM policy lookup

Test Purpose The purpose of the test is to check whether the traffic from the IP address that exceeded the policy is blocked

Configuration

Test sequence

Step Type Description Result

1 <stimulus> Policy in SPM is exceeded

2 <check> Check whether SPM triggered the Security Manager Passed

3 <check> Check whether Security Manager triggered SmartNIC Passed

4 <check> Check if traffic from the IP address that exceeded the policy is blocked

Passed

Result PASSED


4.1.4 T4 Integration for low latency demonstration

The following subsection describes the tests performed for the Telekom Slovenije low latency demonstration. The low latency setup includes CHARISMA-developed hardware components, such as the OFDM-PON and the TrustNode low latency IPv6 router.

4.1.4.1 T4.1 TrustNode low latency integration

The following tests verify the TrustNode low latency integration.

Table 14: T4.1: TrustNode 6Tree speed test

Test Description

Identifier 6Tree speed test

Test Purpose This test verifies the device's propagation delay at the hardware level. The test should show the propagation delay for 6Tree frames through the device.

Configuration The test setup is similar to the illustration, except that the device needs to be opened. An oscilloscope with two input channels is needed.

Test sequence

Step Type Description Result

1 <stimulus> Start 6Tree traffic with a low bitrate of 1 packet/s

2 <stimulus> Attach probe 1 to pin 15 of the PHY chip which belongs to the input port

3 <stimulus> Attach probe 2 to pin 48 of the PHY chip which belongs to the output port

4 <check> Measure the time difference of the rising edges of both signals Latency: 2.15 µs


Result PASSED

Table 15: TrustNode configuration

Test Description

Identifier INR_TN_test_1

Test Purpose TrustNode management interface testing.

Configuration TrustNode, DHCP-server

References Rest interface description

Applicability HTTP-POST request

Pre-test conditions

• Power on

• DHCP configured

Test sequence

Step Type Description Result

1 <stimulus> Attach the management network interface to the TrustNode; wait until the device requests an IP address from the DHCP server

2 <stimulus> Set the 6Tree prefix via REST interface

3 <check> Check if the new prefix is applied at the TrustNode Passed

Result PASSED

Table 16: TrustNode connectivity

Test Description

Identifier INR_TN_test_2

Test Purpose TrustNode traffic connectivity

Configuration TrustNode, Traffic generator

References

Applicability Generating IPv6 packets

Pre-test conditions INR_TN_test_1

Test sequence

Step Type Description Result

1 <stimulus> Configure a prefix, e.g. according to the example configuration:


2 <stimulus> Send 6Tree packet into port(n)

3 <check> Check if the packets arrive on the corresponding output port Passed

4 <check> Repeat #2 and #3 for all port combinations Test passed for all iterations

Result PASSED

Table 17: TrustNode throughput

Test Description

Identifier INR_TN_test_3

Test Purpose TrustNode traffic throughput

Configuration TrustNode, Traffic generator

References

Applicability Generating IPv6 packets

Pre-test conditions INR_TN_test_2

Test sequence

Step Type Description Result

1 <stimulus> Configure according to INR_TN_test_2

2 <stimulus> Send IPv6 packets to a port, matching the port's prefix

3 <check> Check packet forwarding; the packet should be forwarded according to the 6Tree rules

Passed

4 <stimulus> Data rate (bidirectional): 1804.9 Mbps; Latency (avg, IXIA): 3.941 µs; Packet loss: 0%

5 <check> Check packet forwarding, packet should be dropped Passed

Result PASSED

Table 18: TrustNode security

Test Description

Identifier INR_TN_test_4

Test Purpose 6Tree security test

Configuration TrustNode, Traffic generator

References

Applicability Generating IPv6 packets

Pre-test conditions INR_TN_test_2

Test sequence

Step Type Description Result


1 <stimulus> Configure according to INR_TN_test_2

2 <stimulus> Send IPv6 packets to a port, matching the port's prefix

3 <check> Check packet forwarding; the packet should be forwarded according to the 6Tree rules

Passed

4 <stimulus> Send IPv6 packets to a port, not matching the port's prefix

5 <check> Check packet forwarding; the packet should be dropped Passed

Result PASSED

4.1.4.2 T4.2: OFDM PON

The following tests verify the OFDM-PON low latency integration.

Table 19: T4.2 Robot Demo functionality test

Test Description

Identifier Robot Demo functionality test

Test Purpose Robot, TrustNode, OFDM-PON, OpenWRT-Router, and Robot Controller integration test and latency range estimation

Configuration

References

Applicability

Pre-Test conditions

Test sequence

Step Type Description Result

1 <stimulus> Steering the robot against a barrier

2 <check> Robot stops accordingly Passed

3 <stimulus> Ping robot from robot controller

4 <check> Round trip delay between robot and robot controller 6 ms < x < 30 ms

Result PASSED

Due to the influence of the Wi-Fi connection, the round-trip delay is higher and more variable than allowed by the relevant KPI. Nevertheless, the robot demo will work as defined, even if one of the multiple stop signals has a delay value outside the lower measured range; the lower-range latencies are small enough for demonstration purposes. The Wi-Fi link is expected to contribute the most significant part of the delay values, as well as of the uncertainty. TrustNode and OFDM-PON delays will be measured separately to obtain reliable values for these new CHARISMA components.
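A measured round-trip series can be checked against the 6 ms < x < 30 ms range reported above as follows. The sample values are illustrative, not the actual trial measurements.

```python
# Sketch: validate robot <-> controller ping RTTs against the measured range.
# The sample RTTs below are illustrative placeholders.
rtt_samples_ms = [6.8, 9.1, 14.7, 22.3, 28.9, 11.5]

def within_kpi(samples, lo=6.0, hi=30.0):
    """True if every round-trip delay lies strictly inside (lo, hi) ms."""
    return all(lo < s < hi for s in samples)

avg = sum(rtt_samples_ms) / len(rtt_samples_ms)
```

Separating the per-hop contributions (TrustNode, OFDM-PON) from such an end-to-end series is what motivates the component-level measurements below.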

Table 20: OFDM-PON latency test

Test Description

Identifier OFDM-PON latency test

Test Purpose Single component OFDM-PON downstream latency classification

Configuration OFDM-PON components, connected back-to-back and with fibre, connected to Viavi-Protocol tester

References

Applicability

Pre-Test conditions

Test sequence

Step Type Description Result

1 <stimulus> Connect the Viavi Protocol tester to the OFDM-PON OLT on the sender side, and to the OFDM-PON ONU on the receiver side


2 <stimulus> Select certain test conditions in protocol tester, regarding throughput and frame size

3 <stimulus> Run latency test for 3 minutes

4 <check> Check results Latency: minimum < 50 µs, maximum 461 µs; Throughput: 484.6 Mbit/s on the current setup

Result PASSED

4.2 NCSRD Demonstrator

The following subchapters describe the integration tests produced to validate the step-by-step integration of the different resources/modules in the NCSRD demonstrator. Component-level integration tests have been extensively described in D4.2 [5].

4.2.1 T1.1 – Slice creation

4.2.1.1 OAM to OpenWRT router CAL0

When creating a slice in the NCSRD testbed, the first resource the OAM configures is the OpenWRT router located in CAL0. To verify the correctness of this operation, the following test is performed. Note that the parameters are specific to the location of this particular device, in this case CAL0.
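The configuration step above can be sketched as building and pushing a CAL-specific payload to the router API. The endpoint, field names and values here are illustrative assumptions, not the actual CHARISMA interface.

```python
# Hypothetical sketch of the OAM pushing CAL-specific parameters to the
# OpenWRT router API; endpoint and fields are assumptions.
import json

def build_openwrt_config(cal="CAL0", slice_id=1, vlan=100):
    """Assemble the per-CAL configuration the OAM would send."""
    return {"device": f"openwrt-{cal.lower()}", "slice_id": slice_id, "vlan": vlan}

payload = json.dumps(build_openwrt_config())
# requests.post("http://oam.example/openwrt/config", data=payload)
# A follow-up GET on the same API would verify the parameters (test step 4).
```

For the other CALs (see Section 4.2.1.5), only the CAL-specific parameters change.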

Table 21: T1.1 - OAM to OpenWRT router CAL0

Test Description

Identifier OAM to OpenWRT router CAL0

Test Purpose Test to verify the OAM performs a correct configuration of the OpenWRT router.

Configuration

Test sequence

Step Type Description Result

1 <stimulus> Create slice

2 <stimulus> Create virtual OpenWRT

3 <stimulus> Configure virtual OpenWRT

4 <check> Check that the virtual OpenWRT has been configured, by calling the OpenWRT API and checking that the expected parameters are configured properly

Passed

Result PASSED

4.2.1.2 OAM to SDN switch CAL1

Once the OpenWRT router in CAL0 has been configured, the next step, when creating a slice, is to configure the SDN switch located in CAL1. To validate that the OAM has the ability to configure the SDN switch correctly, the following test has been implemented.

Table 22: T1.1 - OAM to SDN switch CAL1


Test Description

Identifier OAM_Configure_SDN_Switch

Test Purpose Test to verify that OAM configures the SDN switch correctly.

Configuration

Test sequence

Step Type Description Result

1 <stimulus> Create slice

2 <stimulus> Process slice

3 <check> Check correctness of SDN switch rules via ODL Passed

Result PASSED

4.2.1.3 OAM to OVS Integration Bridge in compute node CAL1

The next element to be configured in the testbed, as part of the slice, is an OVS integration bridge. The steps described in the following table validate the correct configuration of the OVS integration bridge.

Table 23: T1.1 – OAM to OVS Integration Bridge in compute node CAL1

Test Description

Identifier OAM_Configure_OVS

Test Purpose Test to verify the correct configuration of the OVS integration bridge using OAM

Configuration

Test sequence

Step Type Description Result

1 <stimulus> Deploy Security Service NS as a VNO in the processed slice

2 <stimulus> TeNOR notifies OAM when the NS has been deployed

3 <stimulus> OAM configures rules in the integration bridge by using the API provided for that purpose

4 <check> OVS integration bridge has the rules configured with the correct parameters

Passed

Result PASSED

4.2.1.4 OAM to BH

The final step in the process of slice creation is to ensure the correct configuration of the Backhaul device. To guarantee the correct functionality of the device, the following integration test has been produced.

Table 24: T1.1 – OAM to BH

Test Description

Identifier OAM_Configure_BH

Test Purpose Test to verify that the OAM correctly configures the BH device

Configuration Backhaul devices at NCSRD testbed connected through RF cable

Test sequence

Step Type Description Result

1 <stimulus> Create Slice

2 <stimulus> Create virtual Backhaul network in the slice

3 <stimulus> Create EVC in the virtual Backhaul network

4 <stimulus> Process slice

5 <check> Via ODL check that Backhaul network contains EVC in locked state

Passed

Result PASSED

4.2.1.5 Other devices

The following devices are essentially the same as those previously described, but located in a different CAL. To test the correct integration from the OAM point of view, the same tests are performed, replacing the CAL-specific parameters and adding the ones that apply to the particular CAL to be tested.

• SDN switch CAL3

• OVS switch in compute node CAL3

• OpenWRT router CAL3

4.2.2 T1.2 - Security service deployment

To deploy a network service, in this case the security service, the OAM (via TeNOR) instructs OpenStack to deploy its VNFs in the correct compute node. In this case the network service consists of two VNFs: an IDS and a FW. Note that in this NS, the two VNFs are deployed in the same compute node.

The following tests verify the correct deployment of the two VNFs.

4.2.2.1 IDS VNF deployment

Test to deploy an IDS VNF using the OAM. In this case, the IDS VNF is deployed as part of a NS.

Table 25: T1.2 - IDS VNF deployment

Test Description

Identifier OAM_DeploySecServ_IDS

Test Purpose Test to verify that the OAM correctly deploys the IDS VNF as part of a NS


Configuration

Test sequence

Step Type Description Result

1 <stimulus> VNO deploys security service in CAL1

2 <stimulus> TeNOR notifies OAM that VNF has been deployed

3 <check> Check that the OVS from CAL1 has the right rules Passed

4 <check> Check that the SDN switch connected to CAL1 has the appropriate rules

Passed

5 <check> Check that a VM has been instantiated with that particular id in OpenStack

Passed

6 <check> Check that the VM is connected to three networks: management network, slice network and VLAN pair network.

Passed

Result PASSED

4.2.2.2 FW VNF deployment

The following test verifies that the OAM has correctly deployed a FW VNF when deploying the network service.

Table 26: T1.2 - FW VNF deployment

Test Description

Identifier OAM_DeploySecServ_FW

Test Purpose Test to verify that the FW VNF gets correctly deployed when the OAM deploys a NS

Configuration

Test sequence

Step Type Description Result

1 <stimulus> VNO deploys security service in CAL1

2 <stimulus> TeNOR notifies OAM that VNF has been deployed

3 <check> Check that the OVS from CAL1 has the right rules Passed

4 <check> Check that the SDN switch connected to CAL1 has the appropriate rules

Passed

5 <check> Check that a VM has been instantiated with that particular id in OpenStack

Passed

6 <check> Check that the VM is connected to three networks: management network, slice network and VLAN pair network.

Passed

Result PASSED

4.2.3 T2: Integration for security policy definition (T2.1, T2.2 and T2.3)

4.2.3.1 T2.1: Security policy definition (CHARISMA GUI to SPM)

Because of the iterative nature of the CHARISMA GUI development, the testing of the GUI has been done while iterating, integrating and fine-tuning the interface and the views, to allow the user to access the functionality specified in the different use cases.

4.2.3.2 T2.2: M&A alert definition (CHARISMA GUI to M&A)

For the M&A alert definition, the CHARISMA GUI, used by the CMO, embeds an iframe that directly shows the Grafana UI in a window as part of the CHARISMA GUI.


Once a service is deployed, the CHARISMA GUI consumes the Grafana API to notify it that a new service has been deployed and needs to be monitored. The visualization provided by Grafana is what is displayed for that particular service in the iframe embedded in the CHARISMA GUI.

To improve the user experience, the Grafana login happens transparently to the user, thanks to the implementation done at the M&A module.
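The GUI-side registration of a new service can be sketched as a dashboard-creation request. The payload shape follows the public Grafana HTTP API (`/api/dashboards/db`) as a plausible assumption; the exact CHARISMA payload and dashboard contents are not specified in the source.

```python
# Hedged sketch: register a newly deployed service with Grafana so it is
# monitored and can be shown in the embedded iframe. Payload shape assumed
# from the public Grafana HTTP API, not the exact CHARISMA implementation.
def grafana_dashboard_request(service_name):
    """Build a minimal create-dashboard request body for a new service."""
    return {
        "dashboard": {"id": None, "title": f"Service {service_name}", "panels": []},
        "overwrite": False,
    }

req = grafana_dashboard_request("security-service-1")
# requests.post("http://grafana:3000/api/dashboards/db", json=req,
#               headers={"Authorization": "Bearer <token>"})
```

The transparent login mentioned above would correspond to the GUI supplying the authorization token on the user's behalf.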

4.2.3.3 T2.3: IDS alert definition (IDS EMS to IDS)

Table 27: T2.3: IDS alert definition (IDS EMS to IDS)

Test Description

Identifier Test Number

Test Purpose Verification of successful IDS alert rule definition.

Configuration IDS VNF is deployed.

Test Sequence

Step Type Description Result

1 <stimulus> Send request to create a new IDS alert rule.

2 <check> Request alert rule details using the received ID. Passed

Result PASSED

4.2.3.4 SPM – Orchestrator (Security Manager) interface

This test verifies that the SPM is able to properly request that the Orchestrator configure a specific VNF. The Orchestrator and the SPM must share a valid network connection over the Internet (e.g. both able to return “ping” requests).

Table 28: SPM – Orchestrator (Security Manager) interface

Test Description

Identifier Request to configure a new instance of a specific VNF

Test Purpose This test verifies that the SPM is able to properly request that the Orchestrator configures a specific VNF (a virtual Firewall, in this case)

Configuration Required entities: Orchestrator (Security Manager) and SPM server

Preconditions The Orchestrator and the SPM must be sharing a valid network connection over the Internet (e.g. both able to return “ping” requests)

Test sequence

Step Type Description Result

1 <stimulus> SPM sends a ConfigureVNF message to the Orchestrator, according to the interface description https://confluence.i2cat.net/display/CHARISMA/Security+Demo+Integration

2 <check> SPM receives status code 201 from the Orchestrator Passed

Result

4.2.3.5 SPM – OAM (CHARISMA GUI)

This test verifies that the OAM is able to create a security policy in the SPM. The OAM and the SPM must share a valid network connection over the Internet (e.g. both able to return “ping” requests).

Table 29: SPM – OAM (CHARISMA GUI): Request to create a security policy

Test Description

Identifier Request to create a security policy

Test Purpose This test verifies that OAM is able to create a security policy in the SPM


Configuration Required entities: CHARISMA GUI (OAM) and SPM server

Preconditions The OAM and the SPM must be sharing a valid network connection over the Internet (e.g. both able to return “ping” requests)

Test sequence

Step Type Description Result

1 <stimulus> OAM sends a CreateSecurityPolicy request to the SPM to create a new security policy, according to the interface description https://confluence.i2cat.net/download/attachments/27135041/Policy%20Provisioning%20and%20Mgt%20-%20GUI%20CHARISMA%20V7%20S.pptx?version=1&modificationDate=1491295927000&api=v2

2 <check> OAM receives status code 201 from the SPM, together with a SUPAPolicyObjectID

Passed

Result PASSED

Table 30: SPM – OAM (CHARISMA GUI): Request to read security policies

Test Description

Identifier Request to read security policies

Test Purpose This test verifies that OAM is able to read one or more security policies in the SPM

Configuration Required entities: CHARISMA GUI (OAM) and SPM server

Preconditions The OAM and the SPM must be sharing a valid network connection over the Internet (e.g. both able to return “ping” requests)

Test sequence

Step Type Description Result

1 <stimulus> OAM sends a ReadSecurityPolicy request to the SPM to read all/some of the security policies, according to the interface description https://confluence.i2cat.net/download/attachments/27135041/Policy%20Provisioning%20and%20Mgt%20-%20GUI%20CHARISMA%20V7%20S.pptx?version=1&modificationDate=1491295927000&api=v2. If a specific “PolicyId” is indicated in the request, then only information about that security policy will be included. If a specific “Source” or “Target” is indicated in the request, then only the information corresponding to those Sources (VNOs) or Targets (target resource to which the policy applies) will be returned

2 <check> OAM receives status code 201 from the SPM; and one or more security policies

Passed

Result PASSED

Table 31: SPM – OAM (CHARISMA GUI): Request to update security policies

Test Description

Identifier Request to update security policies

Test Purpose This test verifies that OAM is able to update one or more security policies in the SPM

Configuration Required entities: CHARISMA GUI (OAM) and SPM server

Preconditions The OAM and the SPM must be sharing a valid network connection over the Internet (e.g. both able to return “ping” requests)


Test sequence

Step Type Description Result

1 <stimulus> OAM sends an UpdateSecurityPolicy request to the SPM to update all/some of the security policies, according to the interface description https://confluence.i2cat.net/download/attachments/27135041/Policy%20Provisioning%20and%20Mgt%20-%20GUI%20CHARISMA%20V7%20S.pptx?version=1&modificationDate=1491295927000&api=v2. If a specific “PolicyId” is indicated in the request, then the update only applies to that policy

2 <check> OAM receives status code 201 from the SPM Passed

Result PASSED

Table 32: SPM – OAM (CHARISMA GUI): Request to delete security policies

Test Description

Identifier Request to delete security policies

Test Purpose This test verifies that OAM is able to delete one or more security policies in the SPM

Configuration Required entities: CHARISMA GUI (OAM) and SPM server

Preconditions The OAM and the SPM must be sharing a valid network connection over the Internet (e.g. both able to return “ping” requests)

Test sequence

Step Type Description Result

1 <stimulus> OAM sends a DeleteSecurityPolicy request to the SPM to delete all/some of the security policies, according to the interface description https://confluence.i2cat.net/download/attachments/27135041/Policy%20Provisioning%20and%20Mgt%20-%20GUI%20CHARISMA%20V7%20S.pptx?version=1&modificationDate=1491295927000&api=v2. If a specific “PolicyId” is indicated in the request, then only that security policy will be deleted.

2 <check> OAM receives status code 201 from the SPM Passed

Result PASSED

4.2.4 T3: Integration for attack identification and mitigation (T3.1 and T3.2)

4.2.4.1 T3.1: Attack identification

4.2.4.1.1 IDS alert to M&A

The IDS, being a monitoring system itself, needs to send the data of detected traffic anomalies to the Monitoring and Analytics platform, where a higher level of aggregation can take place. The IDS VNF provides an API exposing the current status of the events that are occurring. The M&A periodically parses all available data provided by the API and stores them as metrics.
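The aggregation step of that polling loop can be sketched as follows. The IDS event format (a list of records carrying a `signature` field) is a hypothetical simplification of whatever the IDS API actually exposes.

```python
# Sketch of the periodic M&A aggregation of IDS events into metrics.
# The event format ("signature" field) is a hypothetical simplification.
def parse_ids_events(events):
    """Aggregate raw IDS events into per-signature metric counters."""
    metrics = {}
    for ev in events:
        metrics[ev["signature"]] = metrics.get(ev["signature"], 0) + 1
    return metrics

sample = [{"signature": "icmp_flood"}, {"signature": "icmp_flood"},
          {"signature": "port_scan"}]
metrics = parse_ids_events(sample)
```

In the running system, the M&A would fetch `events` from the IDS API on each polling cycle and store the resulting counters as time-series metrics.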


Table 33: IDS alert to M&A

Test Description

Identifier Test Number

Test Purpose Verification of the successful detection of network anomalies.

Configuration Monitoring and Analytics platform and IDS VNF are deployed. The IDS can communicate with the M&A. The IDS VNF information is registered with the M&A, and network traffic is being directed or mirrored to the IDS monitoring network interface. Finally, traffic simulating the expected anomaly can be sent to the monitored network.

Test Sequence

Step Type Description Result

1 <configure> Create new IDS alert rule.

2 <stimulus> Send traffic simulating the expected anomaly in the network.

3 <check> Verify that the IDS VNF has detected the anomaly and is exposing the logged event.

Passed

4 <check> Verify that the M&A has received the alert notification information and is now exposing it as a monitoring metric.

Passed

Result PASSED

4.2.4.1.2 M&A alert to SPM

M&A evaluates the conditions of the aggregated monitoring metrics. The results of the evaluation are alert notifications that are sent to the SPM module, where the appropriate action can be determined in order to remedy the situation and prevent or mitigate the detected network vulnerability.
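A begin/end Alert Notification, as exchanged between the M&A and the SPM, might minimally look as follows. The field names are assumptions; the actual message syntax is defined on the project Confluence page cited in the tables below.

```python
# Minimal sketch of a begin/end Alert Notification (M&A -> SPM).
# Field names are assumptions, not the documented CHARISMA syntax.
def build_alert_notification(alert_rule_id, phase="begin"):
    """Build either a beginning or an end Alert Notification body."""
    if phase not in ("begin", "end"):
        raise ValueError("phase must be 'begin' or 'end'")
    return {"alert_rule_id": alert_rule_id, "phase": phase}

begin = build_alert_notification(42, "begin")
end = build_alert_notification(42, "end")
# A well-formed POST of such a body is answered with status code 201 by the
# SPM; a badly built one yields an error code (400/401/500, see Table 35).
```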

Table 34: SPM – M&A interface: Reception of properly built Alert Notification message

Test Description

Identifier Reception of properly built Alert Notification message

Test Purpose This test verifies that the SPM is able to properly understand and process a well-formed Alert Notification message coming from the M&A module, being that either a beginning Alert Notification or an end Alert Notification

Configuration Required entities: M&A server and SPM server

Preconditions The M&A and the SPM must be sharing a valid network connection over the Internet (e.g. both able to return “ping” requests)

Test sequence

Step Type Description Result

1 <stimulus> M&A sends Alert Notification message to the SPM using the previously established network connection.

2 <stimulus> M&A receives status code 201 and “alert_rule_id” from the SPM

3 <check> Verify that the SPM has properly stored the Alert in its internal structure

Passed

Result PASSED

Table 35: SPM – M&A interface: Reception of badly built Alert Notification message

Test Description

Identifier Reception of badly built Alert Notification message

Test Purpose This test verifies that the SPM is able to return an error message in case it receives a badly-formed Alert Notification message coming from the M&A module; that being either a beginning Alert Notification or an end Alert Notification

Configuration Required entities: M&A server and SPM server


Preconditions The M&A and the SPM must be sharing a valid network connection over the Internet (e.g. both able to return “ping” requests)

Test sequence

Step Type Description Result

1 <stimulus> M&A sends an Alert Notification message to the SPM using the previously established network connection. The message is poorly built (not following the syntax described in https://confluence.i2cat.net/display/CHARISMA/4.+Alert+Notification+Interface)

2 <check> M&A receives an error status code (could be 400, 401, 500 depending on the specific error) from the SPM

Passed

Result PASSED

4.2.4.2 T3.2: Attack mitigation

4.2.4.2.1 SPM policy lookup

After an attack has been detected, the SPM determines what appropriate action is required in order to mitigate it.

4.2.4.2.2 SPM – Security Manager send mitigation action

The mitigation action is sent from the SPM to the Security Manager component, which gathers all the information needed to apply the action to the correct VNF.

Table 36: SPM – Security Manager send mitigation action

Test Description

Identifier Test Number

Test Purpose Verification of the communication between the SPM and the Security Manager, in order for the SPM to convey the mitigation action information.

Configuration SPM and Security manager are deployed and can communicate with each other.

Test Sequence

Step Type Description Result

1 <configure> Create a new SPM security policy.

2 <stimulus> Send alert notification that matches the previously created security policy conditions.

3 <check> Verify that the Security Manager has received the policy action from the SPM.

Passed

Result PASSED

4.2.4.2.3 Security Manager - FW VNF configuration (mitigation action defined by policy)

In order to identify the recipient of the mitigation action, the Security Manager needs to communicate with TeNOR. TeNOR provides the recipient's address, and the action is then enforced. In the case of a DDoS attack, the appropriate recipient is the VNO's virtual FW, and the action consists of traffic filtering rules.
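The lookup-then-enforce step can be sketched as resolving the VNO's firewall VNF via the orchestrator and deriving a filtering rule from the mitigation action. The inventory dictionary stands in for TeNOR, and all names, addresses and the rule syntax are hypothetical.

```python
# Sketch: resolve the mitigation recipient via the orchestrator, then
# derive a firewall filtering rule. The inventory stands in for TeNOR;
# all names and the rule syntax are hypothetical.
VNF_INVENTORY = {("vno-1", "firewall"): "10.30.0.5"}

def resolve_vnf(vno, vnf_type):
    """Ask the (stand-in) orchestrator for the VNF's address."""
    return VNF_INVENTORY[(vno, vnf_type)]

def firewall_rule(action):
    """Turn a DDoS mitigation action into a drop rule for the FW VNF."""
    return f"drop src {action['src_ip']}"

fw_addr = resolve_vnf("vno-1", "firewall")
rule = firewall_rule({"type": "ddos_mitigation", "src_ip": "203.0.113.7"})
# The Security Manager would then push `rule` to the FW VNF at `fw_addr`.
```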


Table 37: Security Manager – FW VNF configuration

Test Description

Identifier Test Number

Test Purpose Verification of the Security Manager functionality to identify and configure the recipient of the policy action to mitigate the attack.

Configuration Security Manager and TeNOR orchestrator are deployed and can communicate with each other. A VNO is registered and assigned a network slice. Network traffic is being directed through the Firewall. Also, traffic simulating the expected anomaly can be sent to the monitored network.

Test Sequence

Step Type Description Result

1 <configure> Setup a security network service containing a firewall VNF using TeNOR orchestrator on a specific VNO’s slice.

2 <stimulus> Send mitigation action to the Security Manager that requires configuration of the previously created firewall VNF.

3 <check> Verify that the Security Manager has received the mitigation action.

Passed

4 <check> Verify that the Security Manager requested and received the Firewall VNF information from TeNOR orchestrator.

Passed

5 <check> Verify that the Firewall VNF is configured according to the policy action.

Passed

6 <check> Verify that subsequent malicious attempts are blocked by the Firewall VNF and do not reach their destination.

Passed

Result PASSED

4.3 APFutura Field Trial

4.3.1 Integration for low latency demonstration

The following subchapters describe the integration tests produced to validate the step-by-step integration between the different software and hardware in the APFutura field trial.

4.3.1.1 T1.1 OAM to SmartNIC

The tests presented here aim to demonstrate per-VNO slicing support, and the low latency advantages gained by use of the SmartNIC. Upon a request for a new VM (VNF), the CMO module maps this VM to the SmartNIC by using the vFunction API. Usually, the vF is characterized by a VLAN dedicated to this virtual interface. In this way, the NIC, using the SR-IOV function, transmits a network packet directly to that VM. Another option is to use the MAC or IP address of the VM running a certain VNF. Once such information is available, the CMO module can add an additional policy per application (see Figure 58, below) and apply it via the SmartNIC API. This policy can be static or dynamic, according to the traffic analysis.


CHARISMA – D4.3 Page 76 of 124

Figure 58: SmartNIC Configuration

With respect to Figure 58, the following sequence occurs: (1) A new VNF appears (CMO new event); (2) the CMO collects the info for that VNF; (3) The SmartNIC API is called to set the parameters per each specific VNF (VNO tag, policy); (4) A new virtual port is added to the SmartNIC.
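The four-step sequence above can be sketched as a small helper that assembles the per-VNF vFunction request the CMO would send to the SmartNIC API. The field names below are illustrative assumptions, not Ethernity's documented schema [2]:

```python
import json

def build_vfunction_request(vm_id, vno_vlan, policy, mac=None):
    """Build the payload the CMO would POST to the SmartNIC vFunction
    REST API when a new VNF appears (steps 2-3 of the sequence).
    Field names here are illustrative, not the actual API schema."""
    vport = {
        "vm_id": vm_id,        # the new VNF's VM
        "vno_vlan": vno_vlan,  # VNO slice tag; SR-IOV steers traffic by it
        "policy": policy,      # per-application policy, static or dynamic
    }
    if mac is not None:        # alternative match key: the VM's MAC (or IP)
        vport["mac"] = mac
    return json.dumps(vport)

payload = build_vfunction_request("vnf-fw-01", 120, "static")
```

Step 4 of the sequence then corresponds to the SmartNIC adding a new virtual port for the described VM.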

Table 38: SmartNIC installation

Test Description

Identifier

Test Purpose Installation process of SmartNIC in CHARISMA CMO environment

Configuration Use the latest installation procedure described in the Ethernity Installation Guide [1]

Test sequence

Step Type Description Result

1 <stimulus> Install the SmartNIC in the CHARISMA server (Ubuntu Server environment). Step one: insert the NIC and check that the LEDs are blinking

2 <stimulus> Run the script mentioned in the User Guide

3 <check> MEA CLI validation, using the commands in the Installation Guide Passed

Result PASSED

Table 39: APF2, SmartNIC processor reduction load test

Test Description

Identifier

Test Purpose VNO function offload verification: verify that the VNO overlay network is started/terminated on the NIC, reducing data processing on the x86. Measure the load

Configuration Use the REST API definition document [2] for vFunction

Test sequence

Step Type Description Result

1 <stimulus> Call Service Create for each new VNF (see the call example below)

2 <stimulus> Check that the service exists

3 <stimulus> Connect either test equipment or the full setup with the CHARISMA running scenario

4 <check> Run traffic to/from the VNF and validate that the VNO VLAN tag is removed/added in the SmartNIC

Passed

See Appendix 7.2.1

Result PASSED


Table 40: APF3, acceleration of L2/L3 forwarding test

Test Description

Identifier

Test Purpose L2/L3 forwarding acceleration verification: verify that the forwarding process is offloaded to the NIC, reducing data processing on the x86. Measure latency and throughput

Configuration Use the REST API definition document [2] for Admin Control

Test sequence

Step Type Description Result

1 <stimulus> Call the Admin API Create per L2/L3 flow VNF (see the call example below)

2 <stimulus> Check that the rule has been added

3 <check> Connect either test equipment or the full setup with the running scenario

Passed

4 <check> Run traffic once through the CPU and once with offload, and validate that forwarding in the SmartNIC is much more efficient: lower latency and fewer x86 cores involved

10 µs latency in comparison to 1 ms in x86, and fewer cores are used

Result PASSED

The measurement procedure is shown in Figure 59.

Figure 59: SmartNIC measurement procedure

(Measured: 800 µs – 1.2 ms latency via the standard software path vs. 10 µs via the SmartNIC.)

The low-latency validation in the APFutura field trial is measured end-to-end in two configurations. In the legacy configuration, all elements are conventional: a server with a standard NIC without function offload, and a standard router with the standard store-and-forward packet approach. This is then compared against the new CHARISMA technology: the SmartNIC, where the functions are offloaded to programmable logic with deterministic latency and predictable forwarding time, and the 6Tree TrustNode solution, where data is forwarded immediately upon arrival, skipping the lookup procedure.

For the end-to-end measurements, the results are presented in section 5.3.3.1.


Each PoP (point of presence) SmartNIC and 6-Tree Router was tested separately to show the individual results.

Ethernity Networks declared a standard latency of ~10 µs, independent of load rate, in comparison to the theoretical 0.5-1 ms for the same function in the standard software NFV approach. We measured L3 and L2 forwarding with and without offload. The results summary is shown below, and includes additional test results obtained at the Ethernity premises with DPDK mode for software data forwarding, currently the state of the art for software-based forwarding.
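As a rough illustration of this kind of round-trip latency measurement, the self-contained sketch below times UDP echoes over loopback; in the field trial the echo endpoint would be the device under test (SmartNIC or TrustNode path) rather than a local thread:

```python
import socket
import statistics
import threading
import time

# A UDP echo server on loopback stands in for the device under test.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

def echo(n):
    for _ in range(n):
        data, addr = server.recvfrom(2048)
        server.sendto(data, addr)  # reflect each probe back to the sender

threading.Thread(target=echo, args=(100,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
rtts_us = []
for _ in range(100):
    t0 = time.perf_counter()
    client.sendto(b"\x00" * 64, ("127.0.0.1", port))  # 64-byte probe packet
    client.recvfrom(2048)
    rtts_us.append((time.perf_counter() - t0) * 1e6)  # round trip in µs

print(f"median RTT: {statistics.median(rtts_us):.1f} us")
```

The median is reported rather than the mean, since occasional scheduler hiccups on a general-purpose OS produce outliers that would otherwise dominate the result.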

4.3.1.2 T1.2 OAM to TrustNode

This integration test is the same as that performed in the Telekom Slovenije field trial; the test and results are shown in the tables in section 4.1.4.1.

4.3.1.3 OAM to OLT PON

Table 41: APF 1

Test Description

Identifier OAM_OLT integration test 1

Test Purpose Test to validate the correct integration between OAM and OLT-ONT by fetching the OLT configuration

Configuration

Test sequence

Step Type Description Result

1 <stimulus> Infrastructure provider adds an OLT resource to its network

2 <stimulus> Infrastructure provider fetches the configuration from the previously added resource

3 <check> Check from the attached virtual resource that the configuration matches the one from the added physical resource

Passed

Result PASSED

Table 42: APF 2

Test Description

Identifier OAM_OLT integration test 2

Test Purpose Test to validate that, when creating a slice, the OLT resource is configured

Configuration We assume that resource is already created in OAM

Test sequence

Step Type Description Result

1 <stimulus> A slice is created

2 <stimulus> A virtual OLT is created in the slice

3 <stimulus> Map the virtual resource with the physical resource

4 <stimulus> Slice is provisioned with a specific VLAN

5 <check> Fetch the OLT configuration and check that the slice info is present

Passed

Result PASSED

4.3.2 Integration for Bus Use Case demonstration

This chapter describes the integration tests that validate the integration for the Bus Use Case. To validate this integration, it is necessary to rely on several tests that were already performed for other integrations.

4.3.2.1 T2.1 OAM to Trustnode

This test was already done in chapter 4.3.1.2.


4.3.2.2 T2.2 OAM to OLT PON

This Test was already done in chapter 4.3.1.3.

4.3.2.3 T2.3 OAM to mobCaches

The objectives of the bus use case demonstration include:

• Reducing the service latency by intelligently caching and prefetching the user-requested content on the CHARISMA cache nodes (MoBcache and vCache);

• Providing service continuity by traffic offloading between WiFi and LTE, and prefetching the user-requested content on the MoBcache.

4.3.2.3.1 Bus use case components

MoBcache

The MoBcache is a mobile router-server prototype with an autonomous battery and auto-configuration within a tree radio network. It has the following interfaces:

• 1 Gigabit Ethernet port

• 1 WiFi 5 GHz interface

• 1 WiFi 2.4 GHz interface

• 1 LTE interface

In a system configuration, the MoBcaches self-configure into a tree network, usually using the 5 GHz WiFi as a backbone between the MoBcache devices, and the 2.4 GHz WiFi for user-device or M2M connectivity. Connection to the Internet can be ensured through the Ethernet or LTE link.

The MoBcache can behave as a fixed rootMB (while Ethernet is connected) or as a mobile childMB (no cable connection; it connects to a rootMB by WiFi).

The caching and prefetching functionalities are implemented as shown in Figure 60. There are two kinds of CHARISMA cache nodes, MoBcache and vCache, which run the same caching/prefetching components.

Figure 60: Caching Nodes


rootMB

A MoBcache is considered a rootMB while its Ethernet is connected. The rootMB is a fixed router, able to: 1) get an Internet connection via the Ethernet cable; 2) connect to a childMB by the WiFi 5 GHz interface; 3) provide WiFi service to UEs via 2.4 GHz WiFi. In this experiment there are two rootMBs, which act as WiFi access points (see Figure 45).

childMB

The MoBcache is considered a mobile childMB while there is no cable connection, accessing the Internet by WiFi or LTE. The MoBcache usually connects automatically to an available rootMB by WiFi according to the signal strength. If no WiFi is available, the MoBcache can also connect to a known LTE network via its LTE dongle to access the Internet. In this experiment there is one childMB running in the bus (see Figure 45).

vCache

The caching/prefetching components running on the vCache are shown in Figure 60 and are the same as on the MoBcache.

vCC

The vCC, running as a VNF, is managed by the CMO orchestrator and provides the management of caching services for a VNO. It allows the VNO to autonomously manage and configure the vCaches or MoBcaches allocated to it. In our system, each VNO is assigned one vCC and several vCaches or MoBcaches with the required resources, such as network bandwidth, CPU, memory and hard-disk storage.

Content Server

The Content Server provides HLS video streaming services.

4.3.2.3.2 Bus use case deployment

The caching deployment for the bus use case is as follows:

1. vCC (virtual Cache Controller) running as a VNF deployed in the cloud server in CAL3

2. vCaches running as VNFs deployed in CAL3/CAL2

3. Two root MoBcaches running as rootMBs deployed in the bus stations, CAL1 (see Figure 60)

4. One MoBcache deployed in the bus, CAL0

5. The MoBcache connects to a rootMB by WiFi 802.11ac, or connects to an eNB by an LTE dongle from a third-party LTE operator, assigned a public IP address

6. vCC monitors the location and network status of the MoBcache

7. Two UEs connected to the MoBcache by WiFi 802.11b watch an HLS video

8. The content server is deployed with a public IP address

The prefetching procedure of the bus use case is defined as follows:

The UE watches an HLS video from the MoBcache

1. While the bus is leaving the rootMB1 station, the 802.11ac WiFi signal strength becomes lower, and the throughput of the MoBcache decreases

2. The MoBcache decides to switch from WiFi to LTE and continues receiving the requested content over LTE

3. The vCC predicts a future connection to rootMB2, analyzes the DB of the user-requested list, and sends the prefetch order with the content to prefetch to rootMB2

4. rootMB2 prefetches the user-requested video content.


5. While the bus is approaching the rootMB2 station, the MoBcache switches from LTE to the rootMB2 WiFi.

6. When the UE requests the following video content, it can be served directly, since the content has already been cached in rootMB2.
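The vCC's prefetch decision (steps 3-4) can be sketched as follows; the sliding chunk-window heuristic and all names here are assumptions for illustration, not the implemented policy:

```python
def plan_prefetch(requested_chunks, cached_at_next, window=5):
    """Return the chunk numbers the next rootMB should prefetch:
    the `window` chunks after the newest one the UE has requested,
    skipping anything the rootMB already holds in its cache."""
    last = max(requested_chunks)
    wanted = range(last + 1, last + 1 + window)
    return [c for c in wanted if c not in cached_at_next]

# UE has played chunks 1-7 of the HLS video; rootMB2 already holds chunk 9,
# so the vCC's prefetch order to rootMB2 covers the remaining gap.
order = plan_prefetch({1, 2, 3, 4, 5, 6, 7}, cached_at_next={9})
```

The vCC would send `order` to rootMB2 as the prefetch command, so that the chunks are already local when the MoBcache re-attaches over WiFi.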

4.3.2.3.3 Bus use case workflow

In the bus station, a rootMB (root MoBcache) with cache functionality is deployed to provide Internet connections for the MBs in passing buses. The MoBcaches located in the buses either connect to the rootMBs through WiFi when the bus is approaching the station, or switch to the eNodeB through LTE when WiFi services are not available on the road between two stations. The traffic is intelligently offloaded (controlled by the vCC) between the WiFi and LTE networks, to provide a seamless handover and service continuity. The workflow for the prefetching procedure in Figure 61 shows the steps and communications between the different modules.

Figure 61: Bus use case Workflow

In the bus use case scenario, the bus starts from the rootMB1 station, and the MoBcache in the bus connects to the first MoBcache access point, rootMB1 (see Figure 60). The MoBcache then switches to an LTE connection on the road between the rootMB1 station and the rootMB2 station. As soon as the rootMB2 signal is available, the MoBcache in the bus switches from LTE to rootMB2. The MoBcache deployed in the bus periodically reports its position and network status (throughput, WiFi signal strength, etc.) to the vCC (step 1 in Figure 61). This information is used by the vCC to trigger a caching/prefetching alarm to the cache nodes as soon as the pre-defined caching/prefetching policies are matched.

When a passenger (UE) requests a video, the content is cached in the MoBcache, as shown in Figure 61.


The demo starts from the rootMB1 station.

• Step 1: The MoBcache periodically reports its position and network status (throughput, WiFi signal strength, etc.) to the vCC through port 9999.

o When the bus is moving from the rootMB1 station to the rootMB2 station, the MoBcache switches from rootMB1 to LTE according to the signal strength and throughput change. On the road between the two bus stations, the UE can continue watching the video through the LTE connection.

• Step 10: The MoBcache decides to switch from WiFi to LTE if the WiFi signal strength falls below a threshold.

o When the bus is approaching the rootMB2 station, the vCC detects a future connection and sends a prefetch command to rootMB2 to prefetch the following video chunks. The MoBcache switches from LTE to rootMB2 when the WiFi signal becomes strong again.

• Step 13: Owing to the nature of the bus use case, the vCC knows the next bus station and the bus schedule. Based on this information, the vCC can predict a connection to rootMB2 and then initiate a prefetch.

• Step 19: The MoBcache decides to switch from LTE to WiFi when a WiFi signal is available.

As soon as the MoBcache connects to the rootMB2, the user can retrieve the requested content directly since this content has been cached in the rootMB2.
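The WiFi/LTE switching decisions in steps 10 and 19 amount to a threshold rule with hysteresis; a minimal sketch follows, where the dBm thresholds are illustrative assumptions rather than the values used in the trial:

```python
def choose_link(current, wifi_rssi_dbm, lte_available=True,
                attach_dbm=-65, drop_dbm=-80):
    """Switch WiFi -> LTE when the signal drops below drop_dbm (step 10),
    and LTE -> WiFi when it rises above attach_dbm (step 19). The gap
    between the two thresholds prevents rapid link flapping."""
    if current == "wifi" and wifi_rssi_dbm < drop_dbm and lte_available:
        return "lte"
    if current == "lte" and wifi_rssi_dbm > attach_dbm:
        return "wifi"
    return current

# Bus leaving rootMB1: the signal fades, the link flips to LTE once,
# stays there inside the hysteresis band, then re-attaches near rootMB2.
trace = [choose_link("wifi", -85),   # weak signal: hand over to LTE
         choose_link("lte", -75),    # inside the band: stay on LTE
         choose_link("lte", -60)]    # strong rootMB2 signal: back to WiFi
```

The same rule, evaluated on the status reports of step 1, is what keeps the video session continuous across the two handovers.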

Table 43: MoBcache test APfutura field trial

Test Description

Identifier Bus Use Case

Test Purpose Reducing the service latency by the caching and prefetching system; providing service continuity by the handover system (LTE and WiFi)

Configuration

Test sequence

Step Type Description Result

1 <stimulus> Replay the HLS (HTTP Live Streaming) content that is cached on the MoBcache in order to get TCP_HIT

2 <stimulus> Prefetch new content and play it in order to get TCP_HIT for all prefetched chunks, but not for the manifest file

3 <stimulus> Handover between LTE and WiFi

4 <check> Content is on the caches (tail -f /var/log/access.log) Passed

5 <check> Content is prefetched (tail -f /nf_cache/prefetchlist.list)

Passed

Result PASSED


5 Validation, Evaluation of results and lessons learnt

In this chapter, the validation and evaluation of the measurements/results obtained from the two field trials and the lab demonstrator are presented and compared to the CHARISMA KPIs described in D1.2 [1]. It should be taken into consideration that CHARISMA, being a 5G-PPP Phase 1 project, has obtained specific measurements concerning particular performance metrics; a holistic demonstration of the high-level KPIs, matching the performance of 5G as perceived by vertical applications, is expected from the 5G-PPP Phase 2 projects. CHARISMA has elaborated on certain KPIs, in particular on low latency, open access and enhanced security.

At this point we strongly emphasize that CHARISMA is one of the very few 5G-PPP Phase 1 projects to have actually performed trials and pilots in a real-life environment. Indeed, this is a stated goal of the 5G-PPP Phase 2 projects, i.e. to consolidate measurements and benchmarking of initial deployments at scale by performing trials and pilots outside laboratory environments.

5.1 CHARISMA mandatory requirements and KPIs

A first attempt at defining the CHARISMA requirements and KPIs was reported in the DoW and D1.1 [2]. However, these KPIs were set according to the generic 5G-PPP KPIs [6] that were initially proposed, so they were either too tightly defined, or not specific enough to allow verification of the project results. In D1.2 [1], we defined the updated CHARISMA requirements and KPIs resulting from the two updated use cases, namely the transportation vertical sector and the support of VNOs in a multi-tenancy environment, as well as from the work performed in WP3, WP4 and WP5, and from the results of our three initial demos. CHARISMA's mandatory KPIs were set to be the parameters that are supported by the CHARISMA architecture, and that are used to verify the project results through the project demos, the final demonstration and the field trials. These KPIs are repeated in the following table for the readers' convenience.

Table 44: Updated mandatory CHARISMA requirements and KPIs

No. Requirement CHARISMA Support

1 CHARISMA shall offer low latency services

CHARISMA’s architecture shall support low latency services via:

• Routing of data at the lowest common aggregation point;

• Devolved offload strategies for device-to-device, device-to-remote-radio, device-to- baseband, device-to-central office/metro, cloud-to-cloud/cellular, etc.;

• Mobile distributed caching;

• TrustNode-enabled secure hierarchical and ID routing.

CHARISMA shall support a latency of 10 ms or less.

2 The system shall support advanced end-to-end security

CHARISMA's architecture shall support distributed (decentralized) security, as opposed to the centralized security of 4G, as well as physical layer security. The CHARISMA virtualized architecture-level design provides a C&M plane offering improved security. A holistic security approach is proposed, where the underlying infrastructure is virtualized and shared among several SPs, who simultaneously operate on the same physical resources.

3

CHARISMA shall support open access in a way that multiple virtual network operators can be easily instantiated and become fully functional.

This means that virtualised resources should be easily (e.g., through automated MANO procedures) bundled together into slices of the physical infrastructure. VNO slice instantiation delay (i.e. the time required to instantiate VNFs as well as apply all network, compute and storage resource configurations) shall not exceed 90min.

4 The created network slices must support complete traffic isolation between VNOs, i.e. no traffic from one VNO should reach the other without explicit consent.

No specific performance metric is defined for this requirement. During the field trials, tests will be conducted to verify that no traffic crosses administrative (VNO) borders, apart from the case of vCache peering. These tests will include traffic originating from clients residing at a different VNO.

5

Each tenant should be able to define its own security policies, deciding the deployment of desired security services (e.g. virtual IDS, firewall) and their configuration without affecting the other tenant’s services.

No specific KPI to validate this requirement. However, final demonstrations performed at the end of the project will showcase the ability of each tenant to define its security policies and that different policies of tenants for security services that run on the same infrastructure do not cause any conflicts due to the provided tenant isolation.

6

A high throughput will be supported, which is a key parameter for a large part of existing and envisaged 5G applications, including high bandwidth video streaming.

CHARISMA's architecture shall support data-rates up to 10 Gbps for SMEs and residential users, and up to 1 Gbps for mobile end-users, through the use of a hierarchical intelligent data processing approach at the C-RAN and RRH, where statistical multiplexing, aggregation, and caching allow access data volumes to be significantly increased. In addition, CHARISMA's architecture shall incorporate mm-wave (60 GHz and E-band) technologies, as well as optical LoS and non-LoS (NLoS) final-drop technologies, including converged wireline (FTTH) connections from the RRH and/or the C-RAN to end-user premises. The typical value of data-rate for video streaming shall be at least 50 Mbps per user.

7 Low packet loss rate. The 5G system proposed by CHARISMA shall provide a packet loss rate of 10⁻⁵ or less.

5.2 Validation

This chapter describes the general validation of the Telekom Slovenije and APFutura field trials and the NCSRD demonstrator.

5.2.1 Telekom Slovenije Field Trial

Table 45 describes the validation of the demonstration scenarios performed in the Telekom Slovenije field trial. It provides the list of KPIs and requirements, as defined in D1.2 [1], that are validated through the demonstrations implemented in the field trial. Requirements are mapped to the integration tests specified in section 4.1; KPIs and requirements are validated via the integration testing.

Table 45: Validation – Telekom Slovenije field trial

No. Requirement Relevant to the pilot demonstrations

Means of validation

1 CHARISMA shall offer low latency services (low service time)

Yes T4 Integration for low latency demonstration

2 The system shall support advanced end-to-end security

Yes T3: Integration for attack identification and mitigation (T3.1 and T3.2)

3

CHARISMA shall support open access in a way that multiple virtual network operators can be easily instantiated and become fully functional.

Yes

T1: Integration for security service deployment

4

The created network slices must support complete traffic isolation between VNOs, i.e. no traffic from one VNO should reach the other without explicit consent.

Yes

T1: Integration for security service deployment


5

Each tenant should be able to define its own security policies, deciding the deployment of desired security services (e.g. virtual IDS, firewall) and their configuration without affecting the other tenant’s services.

Yes

T2: Integration for security policy definition

6

A high throughput will be supported, which is a key parameter for a large part of existing and envisaged 5G applications, including high bandwidth video streaming.

Yes

T4 Integration for low latency demonstration

7 Low packet loss rate. Yes Not Applicable

5.2.2 NCSRD Demonstrator

The following table provides the list of requirements, as defined in D1.2 [1], that are validated through the demonstrations implemented in the NCSRD demonstrator. Additionally, we provide the exact list of integration tests that prove the requested functionalities. Requirements are mapped to the integration tests specified in section 4.2.

Table 46: Validation - NCSRD demonstrator

No. Requirement Relevant to the pilot demonstrations

Means of validation

1 CHARISMA shall offer low latency services (low service time)

Yes T1.2 validating caching service deployment

2 The system shall support advanced end-to-end security

Yes T1.2 validating security service deployment T2 (T2.1, T2.2 and T2.3) validating security policy definition

3

CHARISMA shall support open access in a way that multiple virtual network operators can be easily instantiated and become fully functional.

Yes

T1.1 validating creation of slices

4

The created network slices must support complete traffic isolation between VNOs, i.e. no traffic from one VNO should reach the other without explicit consent.

Yes

T1.1 validating creation of slices

5

Each tenant should be able to define its own security policies, deciding the deployment of desired security services (e.g. virtual IDS, firewall) and their configuration without affecting the other tenant's services.

Yes

T2 (T2.1, T2.2 and T2.3) validating security policy definition

6

A high throughput will be supported, which is a key parameter for a large part of existing and envisaged 5G applications, including high bandwidth video streaming.

No

Not Applicable

7 Low packet loss rate. No Not Applicable

5.2.3 APFutura Field Trial

The table below describes the validation of the demonstration scenarios performed in the APFutura field trial. It provides the list of KPIs and requirements, as defined in D1.2 [1], that are validated through the demonstrations implemented in the field trial. Requirements are mapped to the integration tests specified in section 4.3; KPIs and requirements are validated via integration testing.

Table 47: Validation – APFutura field trial

No. Requirement Relevant to the pilot demonstrations

Means of validation

1 CHARISMA shall offer low latency services (low service time)

Yes T1 integration for low latency demonstration

2 The system shall support advanced end-to-end security

No Not Applicable

3

CHARISMA shall support open access in a way that multiple virtual network operators can be easily instantiated and become fully functional.

Yes

T3 integration of multi-tenancy demo

4

The created network slices must support complete traffic isolation between VNOs, i.e. no traffic from one VNO should reach the other without explicit consent.

Yes

T3 integration of multi-tenancy demo

5

Each tenant should be able to define its own security policies, deciding the deployment of desired security services (e.g. virtual IDS, firewall) and their configuration without affecting the other tenant’s services.

Yes

T3 integration of multi-tenancy demo (without security)

6

A high throughput will be supported, which is a key parameter for a large part of existing and envisaged 5G applications, including high bandwidth video streaming.

No

Not Applicable

7 Low packet loss rate. No Not Applicable

5.3 Evaluation of results

5.3.1 Telekom Slovenije Field Trial

This chapter describes the evaluation of the results of the demonstration scenarios performed in the Telekom Slovenije field trial.

5.3.1.1 Security demonstration

In the context of the security demonstration at the Telekom Slovenije field trial, two slices are required in order to accommodate the different services that the VNOs will be running. In this particular case, one slice hosts the residential telephony users, while the other slice is dedicated to the Smart Grid operator. Both slices share the same physical resources.

Figure 62 shows the infrastructure provider view from the CHARISMA GUI. It shows the different infrastructures created and the resources configured for the selected infrastructure.

Figure 62: TS – network topology dashboard, view of the available physical infrastructure

The previous figure shows the devices present in the different infrastructures; the GUI also allows the infrastructure provider to add more devices into the topology of a particular infrastructure.

In order to be able to perform the selected demonstrations, the following devices need to be incorporated into the slices:


• OFDM-PON,

• SmartNIC, and

• TrustNode

Figure 63, Figure 64 and Figure 65 depict the process of adding one or more of the CHARISMA-developed devices into a particular infrastructure and configuring them through the GUI.

Figure 63: TS – Physical infrastructure management, adding OFDM-PON device to network topology

Figure 64: TS – Physical infrastructure management, adding SmartNIC device to network topology


Figure 65: TS – Physical infrastructure management, adding TrustNode device to network topology

Once all the CHARISMA devices are added into the physical infrastructure inventory and the initial configuration has been applied, the infrastructure provider can proceed to create new slices. One more step is required before creating a new slice: the infrastructure provider needs to create virtual resources that will then be mapped onto physical resources. The process is depicted in Figure 66.

When all the virtual resources have been created, the process of creating a new slice aggregates/groups all the virtual resources under the slice.

Figure 66: TS – OpenAccess dashboard, creating a new network slice on physical infrastructure


Figure 67 shows the moment when the infrastructure provider creates the slice, giving it a human-readable name and a VLAN ID. At that very moment, the infrastructure provider also assigns the slice to a VNO.

Figure 67: TS – Open Access dashboard, view of the new slice

Once the process is finished, the newly created slice appears in the list of slices that the infrastructure provider has created. The infrastructure provider's view displaying the created slices is shown in Figure 68.

Figure 68: TS – Open Access dashboard, list of virtual slices on same physical infrastructure


5.3.1.2 Security – attack identification and mitigation

In order to identify possible attacks, the operator needs to monitor the services. Using the Monitoring and Analytics capabilities of CHARISMA, the infrastructure provider can create default monitoring resources for the services that will later be deployed by the different VNOs. A VNO can also create monitoring resources, but the default ones are created by the infrastructure provider. The infrastructure provider view is depicted in Figure 69.

The other security vector in attack identification is the ability to set alarms that are triggered once defined conditions or thresholds are reached. The infrastructure provider can see a list of all the defined alarm conditions, as shown in Figure 70.
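Conceptually, each alert rule pairs a monitored metric with a threshold condition; the minimal evaluation sketch below illustrates the idea, with a hypothetical rule format rather than the M&A module's actual schema:

```python
def fired_alerts(metrics, rules):
    """Return the names of the rules whose threshold condition holds
    for the latest sample of monitored metrics."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    return [name for name, (metric, op, limit) in rules.items()
            if metric in metrics and ops[op](metrics[metric], limit)]

# Two illustrative alarm conditions, of the kind listed in Figure 70.
rules = {
    "syn_flood": ("tcp_syn_per_s", ">", 50000),  # anomaly threshold
    "link_down": ("rx_bytes_per_s", "<", 1),
}
alarms = fired_alerts({"tcp_syn_per_s": 80000, "rx_bytes_per_s": 1e6}, rules)
```

In the CHARISMA workflow, a fired alarm of this kind is what triggers the mitigation actions sent to the Security Manager.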

Figure 69: TS – M&A: Monitoring resources

Figure 70: TS – M&A List of alert rules


5.3.1.3 Low latency validation – OFDM PON & TN

The developed OFDM-PON addresses the low-latency, high-throughput, and packet-loss-rate KPIs. Slicing (addressing traffic isolation) and security (addressing system protection against possible threats) are handled by other system components. The corresponding KPI values are summarized below, alongside a qualitative evaluation. Note that the OFDM-PON is currently a demonstration solution, but it can show the way for later practical implementations.

5.3.1.3.1 Short Component Description and Implementation State

The OFDM-PON downstream path consists of the Optical Line Termination (OLT) device, to be placed in a central location (e.g. a Central Office, CO). This component generates the downstream for all connected clients with a single DSP chain, modulating the user streams onto their assigned OFDM subcarriers by an IFFT. The system uses 1024 subcarriers in total for all users. Each subcarrier can be driven with an individual modulation format: BPSK, QPSK, or m-QAM, with m a power of 2 between 8 and 64. According to the chosen system architecture, this baseband OFDM signal is digitally up-converted to an intermediate frequency (IF) of 8 GHz. The IF data is then converted to the analogue domain by a 32 GSa/s digital-to-analogue converter (DAC). The DSP chain is depicted in deliverable D4.2 [5], Figure 50. The electrical OLT signal occupies the frequency range from DC up to 16 GHz.
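The downstream generation described above can be sketched in a few lines: QPSK symbols are placed on the subcarriers assigned to one ONU (here the 752–783 band later used in the integration test) and an inverse DFT produces the baseband time samples. This is a minimal illustrative model only; the real OLT uses a pipelined FPGA IFFT followed by digital IF up-conversion, both omitted here.

```python
import cmath
import math
import random

N_SUBCARRIERS = 1024          # total subcarriers shared by all users

def qpsk_map(bits):
    """Map bit pairs to unit-energy QPSK constellation points (Gray-coded)."""
    points = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return [points[(bits[i], bits[i + 1])] / math.sqrt(2)
            for i in range(0, len(bits), 2)]

def idft(freq_bins):
    """Naive inverse DFT; an FPGA DSP chain would use an IFFT butterfly instead."""
    n = len(freq_bins)
    return [sum(freq_bins[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

# Load QPSK data onto one user's 32-subcarrier band; all other carriers stay dark.
random.seed(0)
bits = [random.randint(0, 1) for _ in range(64)]      # 32 carriers x 2 bits
bins = [0j] * N_SUBCARRIERS
for i, sym in enumerate(qpsk_map(bits)):
    bins[752 + i] = sym                               # OLT subcarriers 752..783

time_samples = idft(bins)                             # one OFDM symbol in time
print(len(time_samples))                              # 1024 baseband samples
```

The Parseval energy of the time-domain symbol equals the subcarrier energy divided by the IDFT length, which is a quick sanity check on any such DSP sketch.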

This signal is modulated optically onto a carrier wavelength of 1550 nm by intensity modulation (IM). In a real-world system, the optical path is split into the various user paths by a symmetric optical splitter. The demonstration system does not use a splitter, since only one user device (Optical Network Unit, ONU) was built for demonstration purposes.

The ONU detects the whole signal bandwidth with a photodiode followed by a trans-impedance amplifier (TIA); both components are integrated in a single device. Analogue electrical processing then selects the ONU frequency band from the IF signal with an IQ mixer. The resulting real and imaginary signal parts are filtered to the ONU bandwidth of 2x250 MHz, and each part is digitized with a 2.5 GSa/s analogue-to-digital converter (ADC). After symbol synchronization, digital downsampling to 500 MSa/s is performed on both paths, according to the ONU bandwidth. This internal bandwidth of 500 MHz represents 32 out of the 1024 OLT-generated carriers. By choosing other mixing frequencies, the ONU band position is variable; for demonstration purposes, the frequency is fixed to 4 GHz. The whole, but simplified, DSP chain for the physical OFDM-PON layer is depicted in deliverable D4.2 [5], Figure 53. The ONU is currently limited to demodulating QPSK carriers, but flexibility is going to be increased by supporting all OLT modulation formats on subcarriers with good channel quality.

Figure 71: OFDM-PON

To switch client data to the corresponding ONU, the OLT implements an OFDM MAC layer, inspecting the Ethernet destination MAC address of incoming frames. The subcarriers are grouped into sets of 4 subcarriers, each set representing one MAC-layer channel (MAC carrier). These MAC carriers are generally independent links, each encapsulating one Ethernet frame. When up to 8 neighbouring MAC carriers have the same destination ONU, the links are aggregated, and the ONU outputs a single Ethernet stream. Grouping the physical subcarriers into a fine grid of MAC carriers allows easy block building in the OLT and ONU, and thus saves DSP resources. This allows the OLT DSP to be implemented in a single upper-mid-size FPGA (with respect to today's FPGA market), while the assignment granularity stays flexible with 256 channels.
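The subcarrier-to-MAC-carrier grouping can be illustrated as follows; the helper names are ours, not taken from the implementation, but the constants (1024 subcarriers, sets of 4, aggregation of up to 8 neighbouring MAC carriers) match the description above.

```python
SUBCARRIERS = 1024
CARRIERS_PER_MAC = 4
MAC_CARRIERS = SUBCARRIERS // CARRIERS_PER_MAC   # 256 switchable MAC channels
MAX_AGGREGATED = 8                               # per-ONU link aggregation limit

def subcarriers_of(mac_no):
    """Physical subcarrier indices belonging to one MAC-layer channel."""
    start = mac_no * CARRIERS_PER_MAC
    return list(range(start, start + CARRIERS_PER_MAC))

def aggregate(first_mac, count):
    """Aggregate up to 8 neighbouring MAC carriers into one ONU link."""
    if not 1 <= count <= MAX_AGGREGATED:
        raise ValueError("an ONU may aggregate 1..8 neighbouring MAC carriers")
    return [subcarriers_of(m) for m in range(first_mac, first_mac + count)]

# The integration test assigned MAC carriers 188..195 (OLT subcarriers 752..783).
link = aggregate(188, 8)
print(link[0], link[-1])   # [752, 753, 754, 755] [780, 781, 782, 783]
```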

The upstream has not been implemented in the demonstration system. Various concepts could be evaluated for it, including the OFDM technique.

5.3.1.3.2 TrustNode and OFDM-PON: Integration Test

For the integration test, the TrustNode router and the OFDM-PON were assembled in the setup depicted in Figure 72.

Figure 72: Integration setup for TrustNode router and OFDM-PON

In the downstream direction, the following additional components are involved:

• A notebook running a robot controller, providing steering commands for robot movements.

• The TrustNode, connected to the OFDM-PON's OLT.

• In between those two devices: a commercial 10G/1G SFP+/SFP switch ("Dell PowerConnect 8024F").

• A wireless router running customized OpenWRT software, also acting as Wi-Fi CPE, connected to the OFDM-PON's ONU.

• In between those two devices: a second 10G/1G switch ("Allied Telesis XS-916MX").

• The robot itself, connected via the 802.11 Wi-Fi link.

The commercial switches have been used as media and format converters between 1 Gbit/s copper Ethernet and 10 Gbit/s optical Ethernet connections.

The upstream uses the same data path in the opposite direction, but bypasses the OFDM-PON. The upstream data contains collision-alarm data from the robot, instructing the robot controller to stop the robot immediately.

The test showed that the system was fully functional, integrating the two new components developed in the CHARISMA project. The robot could be stopped long before crashing into a barrier.

Since a common IEEE 802.11 Wi-Fi link adds a relatively high amount of latency compared to that expected for the new CHARISMA components, latency measurements have been performed for the individual components.

5.3.1.3.3 OFDM-PON: Latency measurements for integration tests

First, the system design delay of the OFDM-PON was quantified, given by the processing, queuing, and transmission delays. The propagation delay depends only on the fibre length and was excluded from this first, component-characterizing measurement; almost complete exclusion was achieved by connecting the OLT and ONU back-to-back with an optical patch cable.

The transmission delay introduced by the high-data-rate 10 Gbit/s Ethernet transceivers can be neglected (Ethernet SFP+ modules are used for connecting the OFDM-PON to the Ethernet system). In contrast, the transmission delay becomes a foreground effect on the PON path, and queuing delay can multiply it. The processing delay is assumed to be a less significant factor. These three contributions to the characteristic system delay are discussed after presenting the results.


For testing, 300 Mbit/s of synthetic test data with equally sized inter-frame gaps was generated by a Viavi 5800-100G protocol tester. It was fed into the OLT's downstream via single-mode SFP+ modules and a short patch cable (2 m). The signal from the ONU was captured by the receiver side of the same Viavi interface. In this way, the one-way delay over the PON downstream path was measured. The upstream is standard 10 Gbit/s Ethernet and thus out of scope.

The following observations were made:

Figure 73: Latency measurements for OFDM-PON (back-to-back)

Each displayed measurement was taken over a period of 3 minutes for each of the observed Ethernet packet sizes, denoted on the x-axis in bytes. Additionally, a measurement was taken for random packet sizes; the result is shown on the right-hand side of Figure 73. The y-axis shows the observed delays in µs. The red line represents the median of the observed delays, and the green curve shows the observed maximum delays. Further, the purple curves show the 1σ deviation region around the median (µ ± σ). This does not depict the exact deviation region, but gives an idea of the observed delay distributions; a skewness towards higher latencies is assumed, and because of that the deviation-region curves stay unlabelled.

5.3.1.3.4 Transmission and queuing delay

The maximum observed latency is approximately twice the mean value for all fixed-packet-size measurements. Analytically, the main part of the latency results from the parallel transmission of Ethernet frames over the MAC carriers, as described in the preceding section. The bits per subcarrier and per MAC carrier were assigned as follows:

MAC no. OLT:        188     189     190     191     192     193     194     195
MAC no. ONU:          4       5       6       7       0       1       2       3
Bits / MAC:           4       8       8       0       0       8       8       4
Bits / subcarrier:  0 0 2 2 | 2 2 2 2 | 2 2 2 2 | 0 0 0 0 | 0 0 0 0 | 2 2 2 2 | 2 2 2 2 | 2 2 0 0
Subcarrier no. ONU: 16–19   20–23   24–27   28–31   0–3     4–7     8–11    12–15
Subcarrier no. OLT: 752–755 756–759 760–763 764–767 768–771 772–775 776–779 780–783

Table 48: Assigned bits per sub- and MAC-carrier and their addressing


4 bits per OFDM symbol are assigned to 2 of the 8 received MAC carriers, and 8 bits per OFDM symbol to 4 MAC carriers; the other two MAC carriers are turned off. Thus, when a frame is transmitted with 4 instead of 8 bits per OFDM symbol, the transmission time is doubled, i.e. the data rate is halved. The theoretical lower bound of this latency contribution is:

t_min = (OFDM symbol duration / bits per symbol) · (net rate / gross rate) · bits per frame

Equation 1: Theoretical lower bound of the per-frame transmission latency

Taking into account a FEC overhead of 19.3% (Reed-Solomon 105/88) and a meta-symbol overhead of 2.4% (20 out of 860 OFDM frame symbols are training symbols for channel estimation), this results in 169.4 µs for 4 bits/symbol, assuming frame sizes of 1522 bytes; for 8 bits per symbol, 84.7 µs can be reached. Frames of 1522 bytes are the maximum non-jumbo frames including the preamble but excluding the CRC; jumbo frames were not evaluated in this system setup.

When frames received on another MAC carrier are already scheduled for offloading from the ONU, the value can be higher. Queue filling on the ONU side for one single MAC carrier can be mitigated, but is also possible in the current implementation, depending on scheduling; this is discussed below. Both effects correspond to the queuing delay at the ONU. On the OLT side, frame bursts temporarily exceeding the allocated destination-ONU data rate can introduce queuing delay; for longer bursts, a well-chosen input queue design is needed so as not to lose packets and not to enlarge the delay too much. Nevertheless, queuing delay is present in the system even for equally distributed, equal-frame-length traffic: as shown in Table 48, the MAC carrier with the lowest OLT address receives 4 bits per symbol. This carrier is always filled first by the OLT when not currently occupied (it is thus used all the time, which so far corresponds to transmission delay), and this enlarges the average delay. Scheduler output occupation (thus queuing delay) arises from reordering two frames, one received from a high-speed carrier while another is being received from a low-speed carrier; the possibility of the ONU's scheduler output being occupied remains. Additionally, the ONU's output scheduler (i.e. the link aggregator) works round-robin over all filled carriers, introducing further queuing delay through additional frame reordering. By optimizing and equalizing the processing methods, the delay could be lowered significantly.

Finally, the delay cannot exceed the value given by Equation 1 by more than a temporary amount. For example, a 1522-byte Ethernet frame consists of 14.5 FEC blocks and 1.8 OFDM frames, assuming 8 bits per OFDM symbol; this leads to including 1 or 2 OFDM frame header blocks, and the frame-internal overhead is 1.3% or 2.6%, respectively. The average delay stays at the value given by Equation 1, but time jitter is introduced.
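The two latency figures quoted above can be reproduced numerically from Equation 1 together with the stated FEC and training-symbol overheads:

```python
SYMBOL_DURATION = 68e-9                 # OFDM symbol duration in seconds
NET_GROSS = (88 / 105) * (840 / 860)    # FEC (RS 105/88) and training overheads
FRAME_BITS = 1522 * 8                   # max non-jumbo frame incl. preamble, excl. CRC

def min_latency(bits_per_symbol):
    """Lower bound of the per-frame latency contribution (Equation 1)."""
    return SYMBOL_DURATION / bits_per_symbol * NET_GROSS * FRAME_BITS

print(round(min_latency(4) * 1e6, 1))   # 169.4 µs for 4 bits/symbol
print(round(min_latency(8) * 1e6, 1))   # 84.7 µs for 8 bits/symbol
```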

5.3.1.3.5 Processing and propagation delay

Processing delay is relatively small. For example, the ONU needs about 30 clock cycles for symbol synchronization, 35 clock cycles for filtering, about 15 clock cycles for equalizing, 8 for the FFT, some additional cycles for decision and minor duties, and, as the biggest contribution, 203 clock cycles for forward error correction. This results in slightly more than 300 clock cycles for the DSP chain, corresponding to about 2 µs at the chosen processing clock speed of 156.25 MHz. Some processes can potentially be optimized, but for higher-quality signals also allowing e.g. 16-QAM carrier modulation formats, additional filter taps are needed and the equalizer chain has to be extended. OLT processing is of the same order of magnitude, but is not quantified exactly here.
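The cycle budget above can be summed directly; the 10 cycles for "decision and minor duties" is our own placeholder for the unquantified "some additional" cycles, not a figure from the measurements.

```python
CLOCK_HZ = 156.25e6   # ONU processing clock

# Approximate clock-cycle budget of the ONU DSP chain, as reported above
cycles = {
    "symbol synchronization": 30,
    "filtering": 35,
    "equalizing": 15,
    "FFT": 8,
    "decision and minor duties": 10,   # assumed value for "some additional"
    "forward error correction": 203,
}

total_cycles = sum(cycles.values())
delay_us = total_cycles / CLOCK_HZ * 1e6
print(total_cycles, round(delay_us, 2))   # ~300 cycles, ≈ 2 µs
```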

Approximately one additional microsecond of processing delay is added for maximum-size Ethernet frames by each device, OLT and ONU, since both use store-and-forward instead of cut-through technology. Given that the overall delay cannot exceed the value given by Equation 1, this remains an acceptable factor for the chosen MAC carrier size and bit loadings per subcarrier, and it allows an easier and more reliable implementation (e.g., bad frames can be blocked). The store-and-forward delay scales with the frame size by the same factor as the transmission delay.

Figure 73 also shows the latency for random-sized frames. One can see that the average latency is almost the same as that for the average frame size, but the deviation increases. That is qualitatively comprehensible from the definition of the deviation (the σ region includes more than 50% of a set). In addition, some frames might be overtaken by shorter ones on the 6 parallel MAC carriers in use. As the last factor to be addressed, the OFDM MAC layer at the OLT side might distribute frames across the carriers less evenly than for the 300 Mbit/s of equal-length frames used, and, in a few cases, potentially even lose frames due to short input queues, even within the bounds of the allowed throughput. This occurs when many short frames follow many long frames. Most likely because of that, about 0.09% of frames were lost in the random-size measurement (but none in the other measurements). Thus, the link aggregation method has to be revisited with regard to this point, too.

Finally, the propagation delay has to be addressed. This is highly predictable from the speed of light inside the fibre (approx. 200,000 km/s) and the fibre length. To evaluate this, the measurements for 64-, 256-, 512-, 1024-, and 1518-byte frames were repeated over 10 km of single-mode fibre. The measured values were almost exactly 50 µs higher, which corresponds to the estimate given by

10 km / (200,000 km/s) = 50 µs

Equation 2: Propagation delay for 10 km of optical fibre

In addition, the delay of the switches was quantified by repeating some of the measurements with both switches included. The delay was about 5 µs higher with both switches, under the condition that the switches carried no other traffic.

5.3.1.3.6 Throughput estimation and measurement for integration test

For the OFDM-PON, the throughput per ONU can be estimated from a given carrier assignment and modulation format. Assuming QPSK on all assigned subcarriers, the ONU gross data rate can be calculated as follows:

R_ONU(gross) = Σ (modulated bits over assigned subcarriers) / OFDM symbol duration = (2 · assigned subcarriers) / 68 ns

Equation 3: ONU gross data rate for QPSK modulated carriers

In the test setup, 20 subcarriers were assigned, which results in a rate of 588.24 Mbit/s. As described above, the PON system overhead results from the FEC and from sending training symbols. Thus, a simplified estimate of the layer-1 Ethernet throughput is:

R_ONU(L1 Eth.) = R_ONU(gross) · (payload data / gross data) ≈ 588.24 Mbit/s · (88/105) · (840/860) = 481.52 Mbit/s

Equation 4: ONU layer 1 Ethernet data rate

This value is not fully exact, since the training symbols are not FEC-encoded and the CRC bits are not transmitted but regenerated at the receiver, according to the chosen system preferences. Taking this into account, the exact Ethernet layer-1 ONU data rate is calculated as 484.62 Mbit/s for frame sizes of 1518 bytes. A Viavi protocol tester measurement verified exactly this value (484.6 Mbit/s) when sending data at 500 Mbit/s, while at lower rates all data was received.
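Equations 3 and 4 can be checked numerically; the simplified estimate below does not include the training-symbol/CRC refinement that yields the exact 484.62 Mbit/s.

```python
SYMBOL_DURATION = 68e-9
FEC_RATIO = 88 / 105           # Reed-Solomon 105/88 code rate
TRAINING_RATIO = 840 / 860     # 20 of 860 OFDM frame symbols are training

def onu_gross_rate(assigned_subcarriers, bits_per_subcarrier=2):
    """Equation 3: gross ONU rate, QPSK = 2 bits per assigned subcarrier."""
    return assigned_subcarriers * bits_per_subcarrier / SYMBOL_DURATION

def onu_l1_rate(assigned_subcarriers):
    """Equation 4: simplified layer-1 Ethernet rate after PON overheads."""
    return onu_gross_rate(assigned_subcarriers) * FEC_RATIO * TRAINING_RATIO

print(round(onu_gross_rate(20) / 1e6, 1))  # 588.2 Mbit/s gross for 20 subcarriers
print(round(onu_l1_rate(20) / 1e6, 1))     # ≈ 481.5 Mbit/s simplified L1 estimate
```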

5.3.1.3.7 Hardware acceleration and hierarchical delay evaluation

This test visualises the latency results in the TS field trial. As described in section 3.1.2, a mobile sensor platform is used for the visualisation. Figure 74 gives a short overview of the behaviour of the measurement equipment. The controller and the mobile platform exchange messages containing the current distance between the sensor platform and a possible obstacle in front of the platform. The controller can send an emergency-stop message if an obstacle comes too close.


Figure 74: Message flow between controller and mobile sensor platform [10]

Table 49 shows the results of the delay measurements between the controller and the robot base station. The resulting distance between the robot and the obstacle in case of an emergency stop is also reported.

Table 49: Latency measurement results, measured with IXIA and the latency robot, between the robot controller and robot base station locations.

Description | Configuration | Latency | Robot distance
Non-hierarchical mode | TrustNode: fast mode; robot controller position: a | 67 µs (see ) | 1.25 cm
Hierarchical mode | TrustNode: fast mode; robot controller position: b | 4 µs (see 4.1.4.1) | 1.90 cm

As Table 49 shows, the difference between the hierarchical and the common network structure is obvious. The first optimisation of hierarchical routing is the minimisation of long distances and compute-node instances, represented in this test by the OFDM-PON delay. On the other hand, cheap and fast network equipment that benefits from the hierarchical address structure can be used. Due to the limited number of devices in the test setup, the delay improvement was only shown with one TrustNode and one OFDM-PON, whereas a real-world setup would have about 20 hops on the non-hierarchical path. This circumstance is compensated by the newly applied TTL function in the robot traffic; thanks to this improvement, even the small delay difference of 63 µs can be visualised. A comparison with the results of the APFutura field trial in section 5.2.3, which show a delay difference of about 100 µs per hop between CHARISMA and common devices, can easily be made: with an additional delay of 100 µs, the current measurement setup would not be possible and the robot would crash into the wall.

5.3.1.3.8 Low latency

With the measured average one-way latencies below 200 µs for all cases, the 5G mobile network latency goal could be reachable, using just one PON path in the network hierarchy and generally short traffic paths. Since the radio network part is assumed to be more challenging with respect to this KPI, further lowering the PON latency would be beneficial. This could be achieved at the cost of carrier-assignment granularity, or by avoiding the use of lower-rate MAC carriers. Using cut-through instead of store-and-forward technology would also become beneficial once the latency has been reduced significantly by the other factors. Peak latencies of about 460 µs should be avoided; analysis and avoidance strategies (e.g. more intelligent schedulers), as well as the above-mentioned latency-lowering strategies, are discussed in chapter 5.3.1.3.

Additionally, network-structure-dependent low-latency strategies, such as the consistent use of 6Tree routers, SmartNICs, and caching instances close to the UE, will help in reaching this goal. Especially caching and pre-PON 6Tree routing would mitigate the irreducible fibre one-way propagation delay of 5 µs/km.

5.3.1.3.9 Throughput

A throughput of about 485 Mbit/s per endpoint was reached by the demonstration implementation of the OFDM-PON. This is already beyond the per-endpoint throughput of 4G networks, assuming a 150 Mbit/s capacity cell as such an endpoint. Implementing modulation formats of higher order than QPSK would yield a significantly higher value under most conditions. Assigning a broader bandwidth to an ONU when needed for the endpoint also significantly increases the bandwidth capabilities.

5.3.1.3.10 Packet loss rate

For all fixed-packet-size measurements, there were no packet losses at all, because a relatively strong FEC was used; the FEC strength was chosen for applications with higher-order modulation formats. There is a trade-off between the modulation format used (thus a combination of throughput and channel quality) and the packet loss rate. An algorithm providing this trade-off would find the appropriate modulation format per subcarrier; as an additional parameter, a maximum packet loss rate could be offered.

For varying packet sizes, there are currently issues in the system, bounding the value to almost 10^-3. This value is too high and the issue has to be resolved; the failure is attributed to inadequate queuing and scheduling in chapter 5.3.1.3.

5.3.1.3.11 TrustNode router

INR_TN_test_4 (see Table 18) shows the security features of the 6Tree routing protocol: every node in the network checks packets against its own prefix, and every packet injected into the network at the wrong location is dropped by the next TrustNode. This implies that every recipient of packets can trust that the source address of a packet is valid for a sender located at the announced network location. This concept of source address verification will be marketed by InnoRoute as the S.A.VE concept for TrustNode customers.
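The per-node check can be sketched conceptually as a prefix match on the source address; real 6Tree routing operates on hierarchical IPv6 address bits in hardware, so this string-based sketch is illustrative only, and the prefixes shown are hypothetical.

```python
def source_address_valid(packet_src: str, node_prefix: str) -> bool:
    """A node accepts only packets whose source matches its own prefix."""
    return packet_src.startswith(node_prefix)

def forward(packet_src: str, node_prefix: str) -> str:
    # Drop spoofed packets injected at the wrong network location.
    if not source_address_valid(packet_src, node_prefix):
        return "DROP"
    return "FORWARD"

# Hypothetical hierarchical prefixes for illustration
print(forward("2001:db8:a:1::5", "2001:db8:a:"))   # FORWARD
print(forward("2001:db8:b:9::5", "2001:db8:a:"))   # DROP (spoofed source)
```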

5.3.2 NCSRD Demonstrator

This chapter describes the evaluation results of the demonstration scenarios performed on the NCSRD demonstrator.

5.3.2.1 Performance characterisation of the firewall VSF

Measuring the maximum bandwidth of the firewall VNF used in CHARISMA would require high-bandwidth network cards, but only 1 Gbit network cards were available. For this reason, we worked in a virtualized environment within a single server. The set-up used is depicted in the following figure.


Figure 75: Set-up for testing firewall VNF bandwidth within the same server

The server used to run the tests had an Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz with 128 GB of RAM. The “Sender” virtual machine was configured to send packets to the “Receiver” virtual machine, passing through the firewall VNF. The tests at different traffic rates were performed using the tool “tcpreplay”, continuously sending a sample of real network traffic captured from the NCSRD laboratory network and including multiple flows. The command used was “tcpreplay –i {{ network interface }} –M {{ bandwidth }} -l 10 {{ capture file with traffic }}”. The tested rates and results are displayed below.

Table 50: CPU usage over different values of traffic rate

Traffic Rate (Gbps) | CPU usage (%)
0.1 | 1
0.9 | 4
1.7 | 6
2.5 | 15

Figure 76: Variation of CPU usage as traffic rate changes

2.5 Gbps was the maximum traffic rate the “Sender” VM could send under the set-up in test, using the tool “tcpreplay”. This traffic was handled with ease by the firewall VNF, which had 1 vCPU and 1 GB of RAM. As shown, CPU consumption increases with the traffic rate, but only up to 15% for 2.5 Gbps. Also, there was no correlation between RAM usage and traffic rate.
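As a back-of-envelope reading of Table 50 (our own analysis, not part of the deliverable), the marginal CPU cost per additional Gbps can be estimated from successive measurement pairs:

```python
# (traffic rate in Gbps, CPU usage in %) pairs from Table 50
samples = [(0.1, 1), (0.9, 4), (1.7, 6), (2.5, 15)]

def cpu_per_gbps(samples):
    """Average marginal CPU cost (% per Gbps) between successive samples."""
    deltas = [(c2 - c1) / (r2 - r1)
              for (r1, c1), (r2, c2) in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas)

print(round(cpu_per_gbps(samples), 2))   # ≈ 5.8 % CPU per extra Gbps
```

The last interval (1.7 to 2.5 Gbps) dominates the average, hinting at super-linear CPU growth as the sender VM approaches its limit.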

In order to determine the maximum bandwidth of a single flow passing through the firewall VM, we used another tool, “iperf3”. The iperf mode used was “TCP”, so the “Sender” VM was sending TCP packets of a defined packet size at the maximum of its ability. Different packet sizes were tested using the command “iperf3 -c {{ iperf server ip }} -p {{ iperf server port }} –B {{ iperf client ip }} –M {{ packet size }} -P 1 -i 1 -f m -t 100 -T 1”.


Table 51: Traffic rate measurements over different packet sizes

Packet Size (KB) | 1 vCPU (Gbps) | 2 vCPU (Gbps)
128 | 26.35 | 27.71
256 | 25.33 | 26.98
512 | 27.33 | 27.24
1024 | 23.1 | 28.43
1500 | 29.15 | 28.58

Figure 77: Change of traffic rate with variable packet size

5.3.2.2 Latency introduced due to virtualisation

To calculate the latency introduced by using a virtualized firewall we compared the two following set-ups:

Set-up A – No virtualized firewall: We send packets from a "Sender" towards a "Receiver" server. The SDN switch forwards the packets from one port to the other in both directions.

Figure 78: Set-up - No virtualized firewall

Set-up B – Virtualized firewall: We send packets from a "Sender" towards a "Receiver" server, but this time the SDN switch redirects the packets towards the OpenStack compute node. In OpenStack, the traffic is steered to pass through the firewall VNF and leaves from the same OpenStack interface to the SDN switch, where it is directed to its final destination: the "Receiver" server. Responses to the "Sender" server follow the exact opposite direction.


Figure 79: Set-up B - Virtualized firewall

Comparing the latency between these two set-ups, we can compute the added latency introduced by sending the traffic through the virtualized firewall. The latency was measured using the "ping" network tool. We measured the latency with different configurations of the firewall VSF, varying the RAM and vCPU allocation.

Below are the results:

Table 52: Measurements of latency with different setups of RAM for the firewall VSF

Configuration | MIN (ms) | AVG (ms) | MAX (ms)
Without virtualization | 0.475 | 0.499 | 0.58
512 MB RAM - 1 vCPU | 0.706 | 0.961 | 1.224
1 GB RAM - 1 vCPU | 0.657 | 0.967 | 1.118
2 GB RAM - 1 vCPU | 0.751 | 1.03 | 1.194
4 GB RAM - 1 vCPU | 0.678 | 1.024 | 1.182

Figure 80: Latency variation for the different setups of RAM of the firewall VSF

As the table above suggests, the added RTT latency is on average about 0.5 ms, regardless of the RAM capacity of the firewall VM used. Moreover, the RTT latency showed no dependence on the number of vCPUs of the deployed firewall VNF: the added latency in all tested conditions was approximately 0.5 ms.
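The 0.5 ms figure follows directly from the averages in Table 52: subtracting the non-virtualized baseline from each firewall configuration gives per-configuration added RTTs clustered around half a millisecond.

```python
# Average RTT values (ms) from Table 52
baseline_avg = 0.499                          # without the virtualized firewall
firewall_avgs = [0.961, 0.967, 1.03, 1.024]   # 512 MB .. 4 GB RAM, 1 vCPU

added = [round(avg - baseline_avg, 3) for avg in firewall_avgs]
mean_added = sum(added) / len(added)
print(added)                 # added RTT per RAM configuration
print(round(mean_added, 2))  # ≈ 0.5 ms, independent of RAM size
```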


Table 53: Measurements of latency with different vCPU allocations for the firewall VSF

Configuration | MIN (ms) | AVG (ms) | MAX (ms)
No firewall | 0.475 | 0.499 | 0.58
1 GB RAM - 1 vCPU | 0.657 | 0.967 | 1.118
1 GB RAM - 2 vCPU | 0.693 | 0.997 | 1.189
1 GB RAM - 4 vCPU | 0.702 | 0.941 | 1.161

Figure 81: Latency variation for the different vCPU allocations of the firewall VSF

5.3.2.3 Security attack detection and mitigation time

One important factor of the CHARISMA security functionality is the time interval required to detect and mitigate a DDoS attack. The test case measured refers to a simulated attack by 100 different malicious wireless devices connected to the access point, with a total of 1,000 requests per second.

Figure 82: DDoS attack detection and mitigation interactions between CHARISMA components

The DDoS attack scenario begins with launching the attack simulation. The traffic mirrored to the IDS VNF causes the IDS to generate an alert, which it then exposes via its API. The M&A requests monitoring data from all the monitored resources at a predefined period; when it requests the IDS monitoring data, it is informed about the alert. Then, again at a predefined period, it evaluates all the defined conditions that need to be true in order to send an alert to the SPM. On receiving an alert notification, the SPM assesses the data received from the M&A and determines the suitable policy action for the attack, which is sent to the Security Manager. In order to determine the recipient of the policy action, the Security Manager requests this information from the TeNOR orchestrator, which in turn communicates with OpenStack to retrieve the required information. Once the recipient of the policy action is identified (in the DDoS case this is the vFirewall of the VNO), the Security Manager applies the policy action to the recipient.

Table 54: Measurement of the time required to complete the several procedures that are part of attack mitigation

Procedure | Seconds to complete
IDS detection of malicious IPs | 20
Monitoring and Analytics IDS data acquisition | 30
Monitoring and Analytics alert rule condition evaluation | 60
Monitoring and Analytics alert notification towards Service Policy Manager | 120
Service Policy Manager alert notification assessment and sending of a Policy Action to the Security Manager | 30
Security Manager acquisition of Policy Action destination details from TeNOR orchestrator | 20
Security Manager application of Policy Action to the vFirewall | 60

The total time needed to detect and mitigate the simulated DDoS attack was measured to be approximately 5.5 minutes.
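Summing the per-stage intervals of Table 54 gives a worst-case bound of 340 s (about 5.7 min); the measured end-to-end time of roughly 5.5 minutes is consistent with this bound, since (as we read it) the periodic stages do not always run to their full interval.

```python
# Worst-case duration of each stage of the detection/mitigation chain (seconds)
stages = {
    "IDS detection of malicious IPs": 20,
    "M&A IDS data acquisition": 30,
    "M&A alert rule condition evaluation": 60,
    "M&A alert notification to the SPM": 120,
    "SPM assessment and Policy Action to Security Manager": 30,
    "Security Manager destination lookup via TeNOR": 20,
    "Security Manager Policy Action applied to vFirewall": 60,
}

total = sum(stages.values())
print(total, "s =", round(total / 60, 1), "min")   # 340 s = 5.7 min worst case
```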

5.3.2.4 Multi-tenancy – vCache peering

Our experiments intend to validate the expected benefits of vCache peering, focusing on the benefits perceived by users and (V)NOs, but also investigating resource utilization issues. To this end, our investigation focuses on:

• Average download times (ADT), i.e. the time required to download a Web object. This metric is intended to quantify user-perceived benefits. We also consider a fine-grained breakdown of the perceived download times, distinguishing:

o ADTLH: ADT for requests that resulted in the content being retrieved from the local vCache (Local Hit)

o ADTM: ADT for requests that resulted in the content being retrieved from the origin server (Cache Miss)

o ADTPH: ADT for requests that resulted in the content being retrieved from the vCache of the peer VNO (Peering Hit)

• Cache hit ratios (CHR), i.e. the percentage of HTTP requests served by a vCache, instead of the content origin (Web Server). This metric is intended to quantify both user perceived benefits and high-level impact on traffic savings. We refine the assessment of the CHR, considering:

o the local CHR (CHRL) measured as the ratio of HTTP requests sent within a VNO that were served with content cached locally, and

o the CHR over the peering link (CHRP) measured as the ratio of the client requests that were served by the peering cache.

• CPU utilization, as reported by the Prometheus monitoring tool employed by CHARISMA. It corresponds to the percentage of CPU time devoted to the operation of the cache and is affected by several operations such as cache index lookups, content retrieval and transmission. This metric is intended to quantify resource utilization.
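As an illustration, the CHR and ADT metrics above can be computed from a simple per-request log; the record format and function below are a hypothetical minimal sketch, not the actual CHARISMA tooling:

```python
from collections import Counter

def cache_metrics(log):
    """Compute CHR and ADT from a list of (outcome, download_ms) records,
    where outcome is 'local_hit', 'peer_hit' or 'miss'."""
    counts = Counter(outcome for outcome, _ in log)
    n = len(log)
    chr_l = counts["local_hit"] / n     # CHRL: served from the local vCache
    chr_p = counts["peer_hit"] / n      # CHRP: served over the peering link
    adt = sum(ms for _, ms in log) / n  # overall Average Download Time (ms)
    return chr_l, chr_p, adt

# Illustrative log with download times of the same order as in Table 55.
log = [("local_hit", 130), ("miss", 1160), ("peer_hit", 96), ("miss", 1214)]
chr_l, chr_p, adt = cache_metrics(log)
# chr_l = 0.25, chr_p = 0.25, adt = 650.0
```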

Our baseline setup corresponds to the simplified vCache peering deployment, as described in [4] and shown in the following figure for completeness.

Figure 83: Simplified vCache peering setup. The vCC is omitted for simplicity.

We later revisit the resource isolation aspects and the enhanced setup of the vCache peering architecture, also described in D3.4 [5] and shown in the following figure for completeness.

Figure 84: vCache peering setup. The vCC is omitted for simplicity.

For our setup, we configured the vCaches with the default Least Recently Used (LRU) cache replacement policy. A web server is instantiated within a separate VM, outside the VLAN segments of both VNOs/tenants. As web servers cannot typically be expected to reside in close network proximity to the virtualized infrastructure, we artificially introduce additional latency between both VNOs and the web server VM, following a normal distribution with a mean value of 150 ms and a standard deviation of 50 ms [13].
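The artificial latency can be reproduced by sampling the same distribution; a minimal Python sketch follows (the clamping at zero is our assumption, since negative delays are not physical):

```python
import random

random.seed(42)  # reproducible sampling for this sketch

def extra_latency_ms(mean=150.0, sd=50.0):
    """Draw one artificial latency value (ms) toward the web server VM,
    normally distributed and clamped at zero."""
    return max(0.0, random.gauss(mean, sd))

samples = [extra_latency_ms() for _ in range(10_000)]
mean = sum(samples) / len(samples)  # should be close to 150 ms
```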

For our experiments we employ synthetic web workloads, generated by the Globetraff tool [14]. The workload in our experiments consists of a typical content catalogue that follows a Zipf-like popularity distribution (with slope a=0.8) [12], with exponentially distributed request inter-arrival times (with rate r expressed in HTTP requests per second; default value r=1 req/sec). The distribution of the content item sizes is modelled as the concatenation of the Lognormal (body) and Pareto (tail) distributions [14]. Focusing in this work on resource allocation and utilization/isolation aspects, we set the workload at both VNOs to come from the same content catalogue, randomizing, however, the arrival of content item requests at each VNO.
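A request trace with these statistical properties can be sketched as follows; this is a simplified stand-in for Globetraff, not its actual implementation:

```python
import random

random.seed(1)
N = 1000   # content catalogue size (illustrative)
a = 0.8    # Zipf slope, as in the experiments
r = 1.0    # request rate (HTTP req/sec), default value

# Zipf-like popularity: probability of rank k is proportional to 1 / k^a.
# Item id i corresponds to popularity rank i + 1.
weights = [1.0 / (k ** a) for k in range(1, N + 1)]

def next_request():
    """One synthetic request: (inter-arrival gap in seconds, content item id)."""
    gap = random.expovariate(r)                        # exponential inter-arrivals
    item = random.choices(range(N), weights=weights)[0]
    return gap, item

trace = [next_request() for _ in range(100)]
```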

Figure 85 shows the CHR (CHRL and CHRP) observed in the two VNOs’ vCaches A and B for different cache sizes, with and without cache peering. The leftmost pair of bars shows an approximate CHR of 22.58% for both VNOs in the case of no cache peering. The rightmost pair of bars shows the achieved CHR when the cache peering link is established. We notice an increase of the overall CHR to approximately 26.1%, i.e. a 15.6% relative increase of the total CHR. The increase is due to the peering link, as shown by the added CHRP component. Though the 15.6% increase of CHR (and the corresponding traffic savings) may not appear substantial at first sight, it is important to assess it in the light of the potential alternatives for VNOs. Typically, a targeted increase of the CHR would involve an investment in additional storage space. Figure 85 examines this case, comparing the cache peering benefits with those of an increase of storage space (middle pair of bars). We see that the total CHR benefit of vCache peering compares to that of an approximate 33% increase of the leased storage space, though without necessitating the allocation of additional resources (and the associated OpEx).

Figure 85: Effect of cache size on cache hit ratio (CHR)
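The relative gain quoted above follows directly from the two measured CHR values:

```python
# CHR values read from Figure 85 (percentages).
chr_no_peering = 22.58  # leftmost pair of bars, no cache peering
chr_peering = 26.1      # rightmost pair of bars, with the peering link

# Relative increase of the total CHR brought by peering.
relative_gain = (chr_peering - chr_no_peering) / chr_no_peering * 100
# relative_gain is approximately 15.6 (%), as stated in the text
```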

Figure 86 further shows the impact of cache peering on ADT, as perceived by the clients. We see that the introduction of the peering link between the two tenants reduces the ADT for both sides, i.e. clients of both VNOs, with a reduction rate of up to 22%. We also observe that the introduction of the peering link results in a slightly better ADT than in the case of an increased caching space (without a peering link).

Figure 86: Effect of cache size on average download time (ADT)


Table 55 provides a finer-grained view of the measured ADT in the various scenarios. As expected, we first observe the substantial reduction of ADT in the case of either a local or a peering cache hit, by almost an order of magnitude. Interestingly, we notice that content retrieval from a peering vCache results in lower download times, on average, than local cache hits, i.e. in terms of ADT it is more efficient to hit a peering vCache than a local one. A closer look at this result reveals the importance of TCP session management by Squid. Namely, when possible, Squid persists TCP connections to recurring IPs. In our experiments we deactivated this feature for connections established between Squid instances and clients or the content (web) server, so as not to bias our results with the specifics of the testing environment; e.g. there is a single web server (and corresponding destination IP), which in the presence of TCP connection persistence would allow all TCP handshake overheads to be omitted. In contrast, we kept the TCP persistence feature activated for peering connections, since this also corresponds to a real, operational environment. Our measurements show that this feature has a significant impact on ADT, resulting in download times lower by up to 40.18%.

Table 55: Average Download Time analysis in vCache peering scenarios

          No Peering 100%          No Peering 133%          Peering 100%
          VNO A       VNO B        VNO A       VNO B        VNO A       VNO B
ADT       929.50 ms   982.68 ms    908.15 ms   813.23 ms    891.62 ms   800.98 ms
ADTLH     130.79 ms   125.01 ms    133.71 ms   113.44 ms    135.17 ms   111.55 ms
ADTM      1162.50 ms  1215.27 ms   1186.12 ms  1060.17 ms   1164.86 ms  1055.71 ms
ADTPH     -           -            -           -            96.42 ms    85.58 ms
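The figures quoted in the text can be cross-checked against Table 55; note that mapping the 40.18% figure to the local-hit vs. peering-hit ADT comparison is our interpretation:

```python
# Values taken from Table 55 (Peering 100% column, VNO A), in ms.
adt_miss = 1164.86       # ADTM: retrieval from the origin server
adt_local_hit = 135.17   # ADTLH: local vCache hit
adt_peer_hit = 96.42     # ADTPH: peering vCache hit

# Local hits are almost an order of magnitude faster than misses.
miss_to_hit_ratio = adt_miss / adt_local_hit  # roughly 8.6x

# Assumed reading of the 40.18% figure: local-hit ADT exceeds
# peering-hit ADT by ~40.18% (i.e. relative to the peering-hit time).
persistence_gain = (adt_local_hit - adt_peer_hit) / adt_peer_hit
```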

While the previous results show the benefits of cache peering in terms of CHR and ADT, it is not clear what the corresponding costs are in terms of resource utilization. This is a particularly important aspect, since an increase in resource utilization would be expected to deteriorate local vCache performance, effectively dis-incentivising the adoption of cache peering. As explained in [5], the introduction of the peering vCaches in the setup illustrated in Figure 84 aims to hide the resource utilization costs from the local vCache instances, so as to isolate the vCache peering slices, while allowing the transfer of content between the peering VNOs. We have experimented with both setups so as to validate the motivation behind the design. Our experiments focus on the CPU utilization observed at the local vCaches serving the customers/end clients of each VNO. Initially we instantiate the simplified setup and initiate the sharing of content through the peering link. We observe an increase in average CPU utilization of 8.57%. We then deploy the peering vCaches as shown in Figure 84, pre-fetch the content there from the local vCache instances, and start the sharing of content over the peering link. As expected, the CPU utilization at the local vCaches decreases by an average of 19.93%. This is attributed to the fact that the local vCache instances are no longer burdened with providing the content over the peering link to the counterpart VNO.

5.3.2.5 Intelligent Traffic Handling

The motivation behind the Intelligent Traffic Handling mechanism has been to avoid the delays imposed by the unnecessary traversal of a vCache VNF, i.e., in the cases where we have cache misses. Our validation efforts have focused on demonstrating the effect of such traversals and further quantifying the delay savings yielded by our Intelligent Traffic Handling mechanism. To this end, we focused on the following two simplified scenarios, for the baseline workload presented in the previous section:

1. “Only Proxy”: a vCache VNF is configured to not cache any content, effectively acting as a mere proxy. In this scenario all HTTP requests lead to cache misses, effectively allowing the measuring of the delay penalty of unnecessarily traversing the vCache VNF.

2. “No Caching”: the network is configured to bypass the vCache VNF for all traffic/HTTP requests. The purpose of this scenario is to allow the measuring of the delay savings by not traversing a vCache VNF.
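One possible way to identify cache-miss-prone destinations from cache access logs, as mentioned later in this section, is sketched below; the function, record format and threshold are illustrative assumptions, not the CHARISMA implementation:

```python
def bypass_destinations(access_log, threshold=0.05):
    """Hypothetical sketch: mark destinations whose observed cache hit ratio
    falls below `threshold` as candidates for bypassing the vCache VNF,
    e.g. to be installed as forwarding rules by an SDN controller."""
    stats = {}  # destination -> (hits, total requests)
    for dest, hit in access_log:
        h, t = stats.get(dest, (0, 0))
        stats[dest] = (h + (1 if hit else 0), t + 1)
    return {d for d, (h, t) in stats.items() if h / t < threshold}

# Illustrative log of (destination, was_cache_hit) pairs.
log = [("cdn.example", True), ("cdn.example", True),
       ("api.example", False), ("api.example", False)]
# api.example never hits the cache, so it is a bypass candidate.
print(bypass_destinations(log))
```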


Table 56 shows the Average Download Time experienced in the two scenarios. The impact of the unnecessary vCache traversal is obvious for the entire duration of the experiment. Bypassing the vCache VNF results in a reduction of the average download time by approximately 58%, effectively demonstrating the motivation for the Intelligent Traffic Handling mechanism in CHARISMA. The achieved reduction obviously corresponds only to HTTP requests that would lead to a cache miss if they traversed the vCache VNF. This means that the measured reduction applies to the part of the workload (i.e., requests) that can be identified as prone to cache misses. As discussed in [5], this identification can be performed either by the vCache/vCC instances themselves, i.e. the VNOs (by processing the access logs of the caches), or by directly configuring bypassing rules for certain destinations. It is important to note that our work in CHARISMA aims to highlight the importance of the Intelligent Traffic Handling feature as a management and control mechanism that can dynamically improve performance for tenants of the architecture.

Table 56: Effect of Intelligent Traffic Handling on Average Download time

5.3.3 APFutura Field Trial

This chapter describes the evaluation results of the demonstration scenarios performed in the APFutura field trial.

5.3.3.1 Low Latency Results

Following the demo description in chapter 3.3.1, the latency measurement was made end-to-end using two different computers, one connected to the ONT and the other connected directly to the SmartNIC. The traffic traverses all devices in the network.

For the measurement, the jPerf tool is used.

The first test checks the bandwidth, verifying that the slice created for this demo can reach the maximum configured bandwidth of 600 Mbit/s.


Figure 87: Bandwidth test, APFutura Low Latency Demo

After checking that the bandwidth is correct, the next test is the latency test.

To create a measurement as realistic as possible, the server CPU is overloaded using the Linux “stress” command, bringing it to 99.7% utilization. This emulates the CPU load of a server hosting several virtual machines managing IT functions such as OSS/BSS, user control, etc.

Background traffic over the network is also used to simulate several users on different slices, so traffic not belonging to this test also goes through the TrustNode.

For the latency measurement, the parameters used were a 1,000 kbit/s rate and a 122 kByte packet size.

The measured jitter and latency are shown in Figure 88.

The results are: a jitter of 0.20 ms and a different latency for each packet, reported every second for 10 seconds.

This measurement is one-way only. With this type of measurement, the latency the computers would add in returning the packets is avoided.


Figure 88: 1st Latency Test APFutura

For the next low latency test, the SmartNIC was replaced by the server’s integrated NIC and the TrustNode device by a regular router, in this case a Cisco 7200-series router.

Both devices were configured manually, as their integration is out of the scope of CHARISMA; this test was done to compare the CHARISMA devices, which reduce latency and provide security, against devices that can currently be found in any datacentre. For this test the parameters used are the same. First, a bandwidth measurement for the slice.

Figure 89: Bandwidth test without CHARISMA Devices.


As we can see in Figure 89, the bandwidth is very similar to that in Figure 87. In both cases the devices show the same behaviour, with no variations. This was the expected result.

For the latency test, the first measurement was taken with the CPU at 1%, meaning that nothing was running on the server except the operating system (Ubuntu Server). The results show that even in these ideal conditions the latency is greater using regular devices than using CHARISMA devices.

Figure 90: Latency test without CHARISMA devices and low CPU use

The latency measurement shows that the jitter grew from 0.20 ms to 0.30 ms, and the latency grew as well. Comparing results: with the CHARISMA devices the latency is between 0.204 ms (best result) and 0.272 ms (worst result), as shown in Figure 88; without CHARISMA devices it is between 0.283 ms and 0.382 ms. Even the worst result with the CHARISMA devices is better than the best result with the regular devices.

However, this is not a realistic test, because a server is never at 1% CPU use. Therefore, the tests were repeated with the server CPU overloaded at 99.7%.


Figure 91: Latency test with CPU overloaded at 99.7%.

In this test, the jitter increased to 0.39 ms, compared to the 0.30 ms obtained previously with the same devices but lower CPU use. The latency increased as well, with the new results between 0.309 ms and 0.557 ms.

We have to take into account that these are lab tests, in the sense that the lab offers ideal conditions (e.g. far fewer devices than would be found in a deployed network). What these results show is that, using CHARISMA devices, the latency can be reduced drastically.

The end-to-end latency KPI for the CHARISMA architecture is less than 10 ms; using CHARISMA devices, the architecture achieved this KPI, obtaining a latency of 0.272 ms in the worst case.

5.3.3.2 Multi-tenancy Results

The FTTH solution designed by Altice Labs, based on the GPON protocol, provides the setup to prove the provisioning of slices dedicated to VNOs.

Each time an ONT is configured, it must be assigned to a VLAN (slice) defined on that OLT. This VLAN belongs to one VNO, so every computer connected to this ONT will use the corresponding slice.

As we can see in the low latency demo, each slice is separated and its traffic is isolated at layer 2 using VLANs, so traffic from one slice cannot affect a different slice.

This configuration is integrated with the CHARISMA GUI, so when creating a VNO using the CHARISMA GUI a VLAN must be assigned as well.


Figure 92: VNO dashboard expanded

Figure 93: VNO view

Using the CHARISMA GUI, service deployment is reduced to following a few screens, with the service deployed as a VNF. Adding new devices to the topology is also fast, requiring only the completion of a form.


Figure 94: MobCache deployment

According to these results, the APFutura testbed fulfilled the requirements for network slicing, isolating the traffic and deploying Virtual Network Functions easily in less than 90 minutes.

5.3.3.3 Bus use case

The evaluation conducted for the bus use case has so far been mainly qualitative, as the main parameter of improvement for caching in this mobile context is the user QoE (the transport operator considers that avoiding session cuts is a major churn inhibitor for this service). Additionally, traffic load reduction through caching is an important improvement, as the cost of connectivity is, and will remain, another major issue for communication in transport. Both of these tests require panels of users, which will be available in the next large-scale pilot in preparation with the current MoBcaches (see 5.4.3). In the meantime, additional testing is in progress in the coming weeks, mainly to find methods of assessing traffic reduction.

Some preliminary results were confirmed:

• The overall system was functional:

o vCC receiving and controlling cache content information in the MoBcaches

o pre-fetching triggered by MoBcache disconnection and handover

• WIFI handover is an issue and sometimes takes a few seconds, due to the absence of handover standards. This was partly addressed by implementing faster discovery algorithms, though this required proprietary implementations.

• MP-TCP is a layer-4 transport standard that can mitigate the handover issue by setting up several parallel connections on different media (here WIFI and 4G), thereby ensuring full session continuity while at least one medium is connected. An MP-TCP proxy is being implemented in the MoBcache (the MP-TCP proxy was tested separately). However, testing was successfully performed without MP-TCP.

• Handover with WIFI only was possible with a simple decision-engine algorithm and prefetching, when keeping the user on the same session. Nevertheless, the distance between the WIFI access points and the slow handover process did not allow the user to set up a new session quickly.

• Testing with WIFI and LTE connections showed full session continuity (handover time was less than 1 second, therefore ensuring good QoE when switching sessions).


Additional testing will be done until June 2018 as a larger scale pilot is planned in Rennes (France) on several bus lines.

5.4 Best Practices and Lessons Learned

This chapter gives an overview of the best practices and lessons learned during the demonstrations of CHARISMA features in the two field trials at Telekom Slovenije and APFutura, and the demonstrator at NCSRD.

5.4.1 Telekom Slovenije Field Trial

Following the CHARISMA project, the defined architecture would enable Telekom Slovenije to further enhance its network infrastructure towards edge computing. One of the objectives was to demonstrate the possibility of pushing the IMU towards the edge, as close to the user as possible. Services such as video streaming and smart grid connectivity are ideal candidates for edge computing. Pushing intelligence and compute functionality towards the edge enables traffic offloading and processing at the lowest aggregation level. The huge amount of data reported by IoT devices is transformed into information and processed locally. The central office application at the top aggregation level is triggered only for orchestration of information across different geographical edges. In the case of streaming, dispatching is performed at the edge, thereby reducing the throughput from the lowest CAL towards the IPTV head-ends, for example.

From the security perspective, deploying security services at the edge enables telecom operators and virtual infrastructure users to mitigate potential security issues and threats at their source.

From the hardware point of view, acceleration as shown in the SmartNIC and TrustNode can flexibly extend and speed up existing network functions. A hierarchical network structure, which can be easily configured using the CMO management system, supports low latency and the security functions, as shown with the 6tree routing concept. The integration of existing devices, especially when supporting new features like IPv6, caused some difficulties for the setup; these can be avoided by integrating more recent technologies into the setup.

Virtualisation enables the operator to reuse existing hardware resources efficiently, scaling the underlying hardware according to actual processing and traffic requirements and limiting the resource overhead usual for standalone hardware deployments.

The CHARISMA management solution (CMO) offers an efficient way of centralising the management of network, infrastructure and software components. Although the infrastructure according to the CHARISMA architecture may be spread across geographically distant aggregation levels (CALs), it is managed and monitored centrally, simplifying operations from the operator’s perspective.

5.4.2 NCSRD Demonstrator

Undoubtedly, technologies such as NFV and SDN represent 5G enablers and major catalysts towards industry transformation and innovation. The CHARISMA security demonstration implemented over the NCSRD testbed was mainly focused on taking advantage of these software-defined paradigms to deliver policy-based automation for security purposes. In particular, although through a narrow selection of security attacks, we have examined how the integration and co-operation of service orchestration, policy management, and monitoring and analytics systems can assist the identification of security incidents and accelerate the response to security attacks. To this end, we have implemented the CHARISMA CMO, which follows and extends the ETSI MANO reference architecture with several security-related components. Some of the defined CMO elements are already part of today’s network management systems (e.g. OSS/BSS, policy manager, monitoring & analytics). However, their co-operation and integration with the ETSI MANO components, and the frontier between those, is neither clear nor trivial. Thus, CHARISMA takes a step towards a clearer definition of the different functions implemented by these components, and also provides a first approach to how these components should interact.

Another significant result is the flexibility and agility that the SDN controller can provide for network automation purposes. By using the same protocol (OpenFlow), it was possible to effectively program, in a vendor-agnostic way, the SDN-enabled devices at the NCSRD testbed (the SDN-enabled backhaul and the OVSs of the cloud infrastructure) in order to accommodate the needs of network slicing/virtualization.

A valuable result of our work on the NCSRD security demonstration is the implementation of a testbed that follows the Multi-access Edge Computing (MEC) paradigm. MEC proposes running virtual functions on cloud/fog deployments closer to the network edge, to offer an environment with high performance and lower service latency, significantly improving the experience of mobile users. The security and multi-tenancy demonstrations presented on the NCSRD testbed showcase the deployment of security (virtual firewall and IDS) and caching services closer to the end user. Commercial off-the-shelf (COTS) x86 servers have been used in the implementation of the CHARISMA IMUs on which these services run. An obvious improvement of our setup would be to upgrade those implementations with more lightweight cloud deployments (e.g. ARM-based SoCs or similar) rather than full OpenStack installations.

One of the main lessons learnt during our experimentation was that virtual machines are not the appropriate workload execution environment to drive automated security service orchestration. Although on-demand provisioning of security services implemented as virtual machines is possible, most security attacks demand quick response times, which are hard to achieve when security services are deployed as virtual machines. For these reasons, other forms of VNF packaging, such as containers or unikernels, have to be explored and adopted; these are more lightweight and offer lower instantiation times. Such virtualisation enablers, with their much smaller footprint, are better suited to take advantage of the optimal and effective resource management that is strongly associated with NFV.

From a deployment and implementation perspective, one conclusion drawn was that a provider-networks OpenStack deployment seems to be the most appropriate setup to achieve traffic steering through virtual services deployed in compute nodes. This is especially the case when traffic is not directed through the OpenStack network node (running the Neutron service), as usually happens in service function chaining implementations in OpenStack-based NFVI-PoP deployments. For the CHARISMA IMU implementation we needed traffic to enter directly into the OpenStack compute nodes. Other deployment types, such as the most common “self-service” OpenStack deployment, are not ruled out for this specific traffic steering implementation, but would require further modifications and changes.

A second implementation conclusion was reached through our experimentation with the OpenDaylight SDN controller and its use for the configuration of the SDN-compatible elements of our deployment. SDN switches within OpenStack were not configured properly through the deployed external instance of the SDN controller (OpenDaylight Beryllium), due to rule conflicts with the internal SDN controller already deployed in OpenStack Newton (Ryu). Obviously, deactivating the internal SDN controller would provide a solution to this issue. Exploring this fix was not possible in the timeframe of the CHARISMA project; however, further experimentation in this direction has been added as a future research topic.

Finally, for the Monitoring & Analytics implementation, Prometheus, one of the most popular open-source monitoring solutions, was adopted to enable monitoring of the physical and virtual resources of the CHARISMA infrastructure. Although Prometheus offers a complete set of features, several extensions had to be implemented for the CHARISMA project. The most important among them are those that extend Prometheus with regard to resource and alert rule registration. The interfaces implemented for this purpose enable a more complete integration of the monitoring system with the NFV Orchestrator and the Policy Manager, providing the ability to automatically register newly created resources and to report alert notifications to external systems, such as the Policy Manager.


5.4.3 APFutura Field Trial

As demonstrated in the field trial, the CHARISMA project has helped towards reducing latency, this being the main objective of the innovative hardware devices developed in the project. However, the latency reduction and the security functions with the 6Tree routing concept work better when multiple 6Tree routers can be linked across different geographical areas with different loads. There are improvements already in the lab, but the real-life improvement is likely to be significantly better.

The same is true of the SmartNIC. The possibility of supporting security options, firewalling and traffic routing without using the server CPU is highly advantageous, because as an ISP one wants the server CPU to be used for the VMs, or for whatever processes are running (OSS/BSS, user control, etc.), not for traffic handling or firewalling.

Regarding the MoBcaches, these devices help with the mobility of functions and improve the user experience by moving the cache system closer to the end-user. In the bus use case, the video watched by the travellers had no cuts and experienced no problems whatsoever. This is very difficult to achieve when travelling in a bus, because the coverage is not always optimal, the station may be overloaded, etc. The tests went smoothly as far as handover was concerned; session continuity is important, and features like faster handover will be important in actual deployments of heterogeneous 5G networks. MP-TCP is an existing protocol that allows partial resolution of this issue.

Additionally, deployed 5G scenarios will require a communications system that supports multiple access techniques, and currently cost is the main issue for transport operators when offering internet services to their users. In this context, local caching and prefetching bring essential savings to the system by decreasing the traffic load for a given access technique, and by ensuring service continuity in shadow areas through prefetching content during a session or during the handover phase.

Some limitations, like the need for agreements with OTT content providers in order to cache HTTPS sessions (although transparent caching is still technically feasible), will need to be solved. This will also introduce additional load to the system.

Some features, like full scalability, require a larger pilot. The deployment and tests of MoBcache in APFutura were quite effective, as they allowed for the preparation of a second, broader pilot in Rennes, France (INOUT), where we plan to deploy the system over several bus lines. This deployment will be done in April/May 2018 with a satellite operator (Eutelsat) and a WIFI operator (FON). In this pilot, three access techniques (satellite, 4G and WIFI) will be possible.

The CHARISMA hierarchical architecture concept “moves” the datacentre close to the end user, and this helps improve the user experience; but managing security and the associated policies could be a problem. CHARISMA has advanced towards solving this problem by creating a management system that is common to every VNO, so each VNO can manage its policies very easily and, of course, totally independently of other VNOs. This is a very strong point, because as an ISP, APFutura knows the cost of network deployments and their management. With the CHARISMA concept, there is only one network deployment, and many VNOs can use the network as easily as if they owned it.

Virtualization is key, because each VNO can instantiate its own functions with its own requirements. Here, the SmartNIC concept is also useful, because it allows the CPU to focus its processing power on just the VNFs, rather than sharing it with traffic handling.


6 Conclusions

In this deliverable D4.3 “Validation field and test results and analysis evaluations” of the CHARISMA project, we have provided a comprehensive overview, results and analysis of the two field trials at the Telekom Slovenije and APFutura premises, and the lab demonstrator at NCSRD. The field trials and demonstrators were designed to exhibit the three key features of the 5G CHARISMA architecture: low latency, open access, and security. The deliverable D4.3 represents a continuation and completion of the work previously reported in the earlier CHARISMA deliverables D4.1 “Demonstrators design and prototyping”, and D4.2 “Demonstrators infrastructure setup and validation”.

In this report, we have provided the test results, evaluations and validation analyses of the two field trials and lab demonstrator (each representing realistic networking testbeds) to confirm that they indeed satisfied the requirements of the CHARISMA use cases that were specified in WP1. Indeed, each demonstrator scenario was validated and a final results analysis carried out to also compare the results to the 5G-PPP quantifiable KPIs, as well as the refined indicators and use-case specifications from the deliverable D1.2 “Refined architecture definitions and specifications”.

The low latency objective was addressed with the architectural aspect of the “nearest shortest path”, which enables the 5G architecture to deploy services over the shortest possible physical path, and decentralizes the distribution of Intelligent Management Units (IMUs). From the hardware perspective, the fast IPv6 router TrustNode and the SmartNIC technology devices were specially designed to lower the latency and deployed in the Telekom Slovenije and APFutura field trials.

The open access and multi-tenancy objective was efficiently addressed by deploying a unified software package for Control, Management and Orchestration (CMO) that also features hierarchical addressing of the network resources across the different Converged Aggregation Levels (CALs). Since CALs reside at geographically distant locations, this emphasised the importance of a unified and central management system.

The CHARISMA security objective was particularly implemented over the NCSRD laboratory testbed, which took advantage of the virtualisation (NFV) and software-defined (SDN) paradigms to deliver policy-based automation for security purposes, as well as end-to-end virtualised security functions. Through a selection of security attacks (principally DDoS) we have examined how the integration and co-operation of service orchestration, policy management, and monitoring and analytics systems can assist in the identification of security incidents and accelerate the mitigation response process.

In this D4.3 report we have also provided the details of the physical and logical architectures of each of the field trials and lab demonstrator, with detailed descriptions of how to install the various items of software required, how to configure the various items of hardware equipment, and how to confirm successful operation and functioning of the resulting infrastructures. In combination with the previous deliverables D4.1 and D4.2, the information contained within this D4.3 report should therefore be sufficient to enable any other 5G technologies developer/manufacturer or 5G services provider to understand and reproduce any aspect of the 5G demonstrations that have been showcased by the CHARISMA project.

This deliverable D4.3 therefore wraps up the experimental work undertaken by all the partners in the CHARISMA project over the past 30 months. We have completed the final validations and evaluations of the two different field trials and lab demonstrator at three different locations, and found that they do indeed conform to the three principal objectives of the CHARISMA architectural concept, as well as the requirements and specifications of the defined use case scenarios. In addition, the CHARISMA performance capabilities were verified to conform to the relevant 5G-PPP KPIs.


References

[1] Deliverable D1.2 - Refined architecture definitions and specifications - CHARISMA consortium

[2] Deliverable D1.1 - CHARISMA intelligent, distributed low-latency security C-RAN/RRH architecture - CHARISMA consortium

[3] Deliverable D3.4 - Intelligence-driven v-security including content caching and traffic handling - CHARISMA consortium

[4] Deliverable D4.1 - Demonstrators design and prototyping - CHARISMA consortium

[5] Deliverable D4.2 - Demonstrators infrastructure setup and validation - CHARISMA consortium

[6] 5G-PPP KPIs - https://5g-ppp.eu/kpis/

[7] European Commission, Smart grids and meters - https://ec.europa.eu/energy/en/topics/markets-and-consumers/smart-grids-and-meters

[8] EU SUNSEED - Sustainable and robust networking for smart electricity distribution - https://sunseed-fp7.eu/

[9] 3GPP TS 22.261, General 5G Requirements, 2017

[10] M. Ulbricht, P. Dockhorn, U. F. Zia, C. Liss, E. Zetserov, K. Habel and M. Parker, "CHARISMA -- 5G Low Latency Technologies and their Interaction with Automation Control Loops", to be published in Journal of Communications (JCM, ISSN: 1796-2021), March 2018

[11] Cisco, "Design best practices for latency optimization", 2007. [Online]. Available: https://www.cisco.com/application/pdf/en/us/guest/netsol/ns407/c654/ccmigration_09186a008091d542.pdf

[12] S. Woo et al., "Comparison of caching strategies in modern cellular backhaul networks," in Proc. of ACM MobiSys, 2013, pp. 319-332.

[13] J. Vesuna et al., "Caching doesn't improve mobile web performance (much)," in Proc. of the 2016 USENIX Conference, Berkeley, CA, USA, 2016, pp. 159-165.

[14] K. V. Katsaros et al., "Globetraff: A traffic workload generator for the performance evaluation of future internet architectures," in Proc. of IFIP/IEEE NTMS, May 2012, pp. 1-5.

[15] L. Fernandez del Rosal et al., iCIRRUS Deliverable D3.2, "Preliminary Fronthaul Architecture Proposal", 2016

[16] L. Fernandez del Rosal, K. Habel, S. Weide, P. Wilke Berenguer, V. Jungnickel, P. Farkas and R. Freund, "Multi-Gigabit Real-Time Signal Processing for Future Converged Networks," in ITG Fachtagung Breitbandversorgung in Deutschland, Berlin, 2016

[17] K. Habel et al., iCIRRUS Deliverable D5.3, "Validation test setup and execution report", 2017

[18] ENISA, "Threat Landscape and Good Practice Guide for Software Defined Networks/5G", 2016

[19] Europol, "Internet Organised Crime Threat Assessment", 2016


Acronyms

VNI – Virtual Network Infrastructure
VNO – Virtual Network Operator
RAN – Radio Access Network
C-RAN – Cloud RAN
E2E – End to End
5G-PPP – 5G Infrastructure Public Private Partnership
LTE – Long Term Evolution
CAL – CHARISMA Aggregation Layer
OFDM-PON – Orthogonal Frequency Division Multiplex-Passive Optical Network
CPE – Customer Premises Equipment
IMU – Intelligent Management Unit
vMME – Virtual Mobility Management Entity
vPGW – Virtual Packet Data Network Gateway
vSGW – Virtual Serving Gateway
vIDS – Virtual Intrusion Detection System
VLAN – Virtual Local Area Network
IoT – Internet of Things
SLA – Service Level Agreement
DSO – Digital Storage Oscilloscope
DC – Data Center
QoS – Quality of Service
GPIO – General Purpose Input Output
OVS – Open vSwitch
MEC – Multi-Access Edge Computing
MA – Monitoring & Analytics
CPU – Central Processing Unit
DoS/DDoS – Denial of Service / Distributed Denial of Service
GUI – Graphical User Interface
SPM – Service Policy Management
Y2 – Year 2
TCP – Transmission Control Protocol
UDP – User Datagram Protocol
ICMP – Internet Control Message Protocol
IP – Internet Protocol
SSID – Service Set Identifier
SDN – Software Defined Network
vCache – Virtual Cache
vCC – Virtual Cache Control
EVC – Ethernet Virtual Connection
S-VLAN – Service-provider VLAN
C-VLAN – Customer VLAN
REST API – Representational State Transfer Application Programming Interface
InfP – Infrastructure Provider


UNI – User Network Interface
PCP – Priority Code Point
NFVPoP – NFV Point of Presence
OLT – Optical Line Termination
ONT – Optical Network Terminal
SyncE-aware – Synchronous Ethernet (aware)
mmW – Millimeter Wave
CU – Central Unit
DU – Distributed Unit
BH – Backhaul
NIC – Network Interface Card
QPSK – Quadrature Phase-Shift Keying
FPGA – Field-Programmable Gate Array
DSP – Digital Signal Processing
MAC – Media Access Control
FEC – Forward Error Correction
vCPU – Virtual CPU
CHR – Cache Hit Ratio
CHRL – Cache Hit Ratio, Local
CHRP – Cache Hit Ratio, Peering
ADT – Average Download Time
ETSI – European Telecommunications Standards Institute
SoC – System on a Chip
KPI – Key Performance Indicator
CO – Central Office
ONU – Optical Network Unit
IM – Intensity Modulation
QAM – Quadrature Amplitude Modulation
m-QAM – Quadrature Amplitude Modulation with m constellation points
C&C – Command and Control
DoS – Denial of Service
DDoS – Distributed Denial of Service
IDS – Intrusion Detection System
EMS – Element Management System
SPM – Service Policy Manager


7 Appendix

7.1 Definitions

7.1.1 Network latency sources

This section gives a short overview of the delay definitions used in CHARISMA. Figure 95 shows the context of the delay groups.

Figure 95: Network delay visualisation [11]

Processing Delay: Delay added during packet processing inside a device, e.g. the delay incurred by the routing decision that determines the output port of a packet.

Queuing Delay: Delay added by queue management and packet queuing inside a device.

Propagation Delay: Delay added to a packet as it propagates from input to output. This includes the propagation through a device, i.e. the sum of all delays added inside the device, as well as the delay of transporting the packet over the medium, e.g. a cable or a wireless channel.

Startup Delay: Delay for content delivery; the time period between a content request and the start of content delivery.
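The delay groups above compose additively into the one-way latency of a packet. The following sketch illustrates this with purely hypothetical per-hop figures (the delay values and fibre length are illustration values, not CHARISMA measurements):

```python
# Hedged numeric sketch of the delay taxonomy above: one-way latency is
# modelled as the sum of processing and queuing delays at each device plus
# propagation delay over each traversed link. All numbers are hypothetical.

SPEED_IN_FIBRE_M_PER_S = 2e8  # roughly 2/3 of c in silica fibre

def propagation_delay_s(distance_m: float) -> float:
    return distance_m / SPEED_IN_FIBRE_M_PER_S

def one_way_latency_s(hops, link_lengths_m) -> float:
    """hops: iterable of (processing_s, queuing_s) tuples, one per device;
    link_lengths_m: fibre length of each traversed link in metres."""
    device_delay = sum(p + q for p, q in hops)
    wire_delay = sum(propagation_delay_s(d) for d in link_lengths_m)
    return device_delay + wire_delay

# Two routers (5 us processing, 20 us queuing each) plus 10 km of fibre:
# 50 us of device delay and 50 us of propagation delay.
latency = one_way_latency_s([(5e-6, 20e-6)] * 2, [10_000])
print(f"{latency * 1e6:.0f} us")
```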

7.2 Integration testing

7.2.1 APF SmartNIC

To create a vFunction overlay (in our case, VNO support), at least one of the parameters AceVlan, AceAppSrcMac or AceAppSrcIp has to be specified. All flows generated by this vFunction will have the VNO tag appended, and for each flow towards this vFunction the VNO tag will be removed. The Unit, Username and Password parameters are reserved for future use:

curl -H "Content-Type: application/json" -X POST -d '{
    "unit": "0", "Username": "xyz", "Password": "xyz",
    "Priority": "102", "AceName": "dhcp-vm", "AcePort": "2",
    "AceVlan": "20", "AceVnoTag": "10",
    "AceAppSrcMac": "00:01:02:03:04:05", "AceAppDstMac": "00:01:02:03:04:06",
    "AceAppEthType": "0x800", "AceAppSrcIp": "1.1.1.1", "AceAppDstIp": "2.2.2.2",
    "AceAppIpProto": "6", "AceAppSrcL4Port": "67", "AceAppDstL4Port": "68",
    "AcePolicerCir": "10000000", "AcePolicerCbs": "64000",
    "AcePolicerEir": "0", "AcePolicerEbs": "0",
    "AceShaperState": "yes", "AceShaperCir": "10000000", "AceShaperCbs": "64000",
    "AceMonStat": "yes"
}' http://localhost:8080/createAceVf
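The same createAceVf call can also be issued programmatically. The sketch below mirrors the curl command: the endpoint path and all JSON field names are taken from it, while the host/port are assumptions that depend on where the SmartNIC agent runs in a given deployment.

```python
# Sketch: building the same createAceVf POST request from Python instead of
# curl. Endpoint path and field names come from the curl example; the
# localhost:8080 address is an assumption that may differ per deployment.
import json
import urllib.request

ACE_VF_ENDPOINT = "http://localhost:8080/createAceVf"

payload = {
    "unit": "0", "Username": "xyz", "Password": "xyz",
    "Priority": "102", "AceName": "dhcp-vm", "AcePort": "2",
    "AceVlan": "20", "AceVnoTag": "10",
    "AceAppSrcMac": "00:01:02:03:04:05", "AceAppDstMac": "00:01:02:03:04:06",
    "AceAppEthType": "0x800", "AceAppSrcIp": "1.1.1.1", "AceAppDstIp": "2.2.2.2",
    "AceAppIpProto": "6", "AceAppSrcL4Port": "67", "AceAppDstL4Port": "68",
    "AcePolicerCir": "10000000", "AcePolicerCbs": "64000",
    "AcePolicerEir": "0", "AcePolicerEbs": "0",
    "AceShaperState": "yes", "AceShaperCir": "10000000", "AceShaperCbs": "64000",
    "AceMonStat": "yes",
}

def build_request(endpoint: str = ACE_VF_ENDPOINT) -> urllib.request.Request:
    """Build (but do not send) the JSON POST request for the ACE vFunction."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Sending requires a reachable SmartNIC agent; uncomment to execute:
    # with urllib.request.urlopen(build_request()) as resp:
    #     print(resp.status, resp.read().decode())
    print(build_request().get_full_url())
```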


7.3 D4.2 Addendum

This addendum section contains additional contributions to deliverable D4.2 “Demonstrators infrastructure setup and validation”.

7.3.1 D4.2 section 4.2.6 update - optical wireless link testing

Table 57: D4.2 section 4.2.6 update – optical wireless link testing

Identifier: OW_Link FieldTest

Test Purpose: To verify the correct function of the Optical Wireless (OW) backhaul link in Mediterranean weather conditions.

Configuration: OW link (2 nodes), control/logging PC, weather sensor

References: See below

Applicability: –

Pre-test conditions: Function of the OW link (working at an HHI installation)

Test sequence:

Step 1, <configuration>: Setup at Altice Labs (see above)
Step 2, <check>: Check connectivity – Passed
Step 3, <check>: Check weather sensor information – Passed
Step 4, <check>: Check logging function – Passed

Result: PASSED

Due to availability issues, the optical wireless link could not be tested before M29 of the project. Therefore, the testing results were not available for D4.2, but a description can be found in its section 4.2.6. In the meantime, the link has been installed in Aveiro, Portugal, at the Altice Labs site (Figure 96).



Figure 96: Installation of VLC link for long-term measurement

The link could be established, and initial data rates of 528 Mbps and 613 Mbps were achieved (Figure 97). This is typical for this link length.

Figure 97: Initial debugging information from VLC link at Aveiro

The initial SNR has also been measured. Figure 98 shows its value over the used spectrum from 5 to 100 MHz for both directions. The colours show the minimum (red), maximum (yellow) and mean (green) values over a measurement period. The right-hand plot additionally includes a minimum curve for a non-aligned link.

Figure 98: SNR over spectrum for VLC link installed in Aveiro.

7.4 6Tree routing algorithm

The following two diagrams show the logical behavior of the TrustNode in the 6Tree forwarding mode. The TrustNode has two uplink ports; packets entering them are forwarded in the downstream direction. Packets entering the downlink ports are forwarded either to other downlink ports (local traffic) or to the uplink ports.


7.4.1 Downlink

Figure 99: 6tree routing algorithm - downlink

7.4.2 Uplink

Figure 100: 6tree routing algorithm - uplink
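The port-level behaviour shown in the two diagrams above can be sketched as a simple forwarding decision. This is an illustration only: the port numbering, and the local-destination lookup (which in the TrustNode is derived from the 6Tree IPv6 address structure) are assumptions for the sketch, not the device internals.

```python
# Minimal sketch of the 6Tree forwarding decision described above: the two
# uplink ports always forward downstream; a downlink ingress is switched to
# another downlink port for local traffic, and to an uplink otherwise.
# Port numbers and the dst_is_local flag are illustrative assumptions.
from typing import Optional

UPLINK_PORTS = {0, 1}  # assumption: the two uplink ports are ports 0 and 1

def forward(ingress_port: int, dst_is_local: bool,
            local_port: Optional[int] = None) -> str:
    if ingress_port in UPLINK_PORTS:
        return "downstream"              # uplink ingress -> downlink direction
    if dst_is_local and local_port is not None:
        return f"downlink:{local_port}"  # local traffic between downlink ports
    return "uplink"                      # everything else goes upstream

print(forward(0, False))    # uplink ingress, forwarded downstream
print(forward(4, True, 5))  # downlink ingress, local destination
print(forward(4, False))    # downlink ingress, non-local destination
```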

<END OF DOCUMENT>

