
CHARISMA – D3.3 – v2.0 Page 1 of 43

Converged Heterogeneous Advanced 5G Cloud-RAN Architecture for

Intelligent and Secure Media Access

Project no. 671704

Research and Innovation Action

Co-funded by the Horizon 2020 Framework Programme of the European Union

Call identifier: H2020-ICT-2014-1

Topic: ICT-14-2014 - Advanced 5G Network Infrastructure for the Future Internet

Start date of project: July 1st, 2015

Deliverable D3.3

Initial Content Caching and Traffic Handling at SW

Regex Integration

Due date: 30/06/2016

Submission date: 15/07/2016

Deliverable leader: Yaning Liu

Dissemination Level PU: Public

PP: Restricted to other programme participants (including the Commission Services)

RE: Restricted to a group specified by the consortium (including the Commission Services)

CO: Confidential, only for members of the consortium (including the Commission Services)

Ref. Ares(2016)3477231 - 15/07/2016


List of Contributors

Participant | Short Name | Contributor
JCP-Connect | JCPC | Yaning LIU, Matthias Sander Frigau
Demokritos NCSRD | NCSRD | Eleni TROUVA, Yanos Angelopoulos
Intracom Telecom | ICOM | Konstantinos KATSAROS
Fundació i2CAT | I2CAT | Shuaib SIDDIQUI, Eduard ESCALONA
University of Essex | UESSEX | Michael PARKER (Internal Reviewer)


Table of Contents

List of Contributors ................................................................................................................ 2

1. Introduction ...................................................................................................................... 6

1.1. Background ................................................................ 6
1.1.1. State-of-the-art ........................................................ 6
1.1.2. Gap analysis ............................................................ 8

1.2. Design requirements ........................................................................................................................10

2. CHARISMA Content Delivery System ........................................................................12

2.1. Objectives and System Overview .....................................................................................................12

2.2. CHARISMA distributed caching framework ................................... 16
2.2.1. Identification of cache location ....................................... 16
2.2.2. Virtualized cache as a VNF ............................................. 16
2.2.3. Virtualized cache controller as a VNF .................................. 19
2.2.4. VNFD Descriptors ....................................................... 21

2.3. CHARISMA traffic handling ................................................ 22
2.3.1. Problem statement ...................................................... 22
2.3.2. The CHARISMA approach .................................................. 24

2.4. Security issues related to CHARISMA content delivery system ........................................................26

2.5. ICN based content delivery in CHARISMA ................................... 27
2.5.1. Latency benefit ........................................................ 28
2.5.2. Mobility benefit ....................................................... 28

3. Interface design of CHARISMA cache system .........................................................29

3.1. Overview and requirements of interface design ..............................................................................29

3.2. Communication between cache nodes ............................................................................................29

3.3. Interfaces between cache nodes and cache controller ....................................................................31

3.4. Interfaces to VNF manager ..............................................................................................................31

4. Overview of primitive implementation .......................................................................33

4.1. Cache Nodes and Cache Controller ..................................................................................................33

4.2. VNF Descriptor .................................................................................................................................35

5. Conclusions ..................................................................................................................39

References ............................................................................................................................40

Acronyms ..............................................................................................................................42


List of Figures

Figure 2-1: CHARISMA four converged aggregation levels (CALs) architecture .... 13
Figure 2-2: CHARISMA Caching solution with management ......................... 14
Figure 2-3: Call flows for the management of cache slice and cache/pre-fetch .. 15
Figure 2-4: CHARISMA Cache Node Software Components ........................... 17
Figure 2-5: Content Delivery Procedure Initiated by User Equipment ............ 18
Figure 2-6: Netconf Architecture .............................................. 20
Figure 2-7: CHARISMA Content Caching Architecture ............................. 20
Figure 2-8: CHARISMA Prefetching Procedure Controlled by CC ................... 21
Figure 2-9: Delay components in the case of v-Caching ......................... 23
Figure 2-10: CHARISMA solution for cacheable traffic identification and handling (managed by the VNO) ... 26
Figure 3-1: Example operation of vCache peering in multi-tenancy scenarios. A cache miss in the vCache of VNO 1 results in the redirection of the content request to the vCache instance of the co-located VNO 2. ... 30
Figure 3-2: CHARISMA solution for cacheable traffic identification and handling (managed by the infrastructure operator) ... 32
Figure 4-1: The MoBcache prototype ............................................ 33
Figure 4-2: Diagram of the logical architecture ............................... 34


Executive Summary

This deliverable D3.3, "Initial Content Caching and Traffic Handling at SW Regex Integration", provides the initial results arising from Task 3.4, "Content Caching and Traffic Handling at the RAN node", in Work Package 3 during the first year of the CHARISMA project. The purpose of Task 3.4 is to provide a CHARISMA content delivery service featuring the key CHARISMA performance characteristics of low latency, open access and security. This deliverable's main aim is to describe in detail the design of the CHARISMA content delivery system developed in T3.4. As part of that design process, we have investigated how an information-centric networking (ICN) approach allows the CHARISMA content delivery solution to benefit from the low latency and mobility support of ICN technology.

We describe how content delivery solutions operate in current networks, with an analysis of the gaps in these current technology solutions, in particular their inability to support virtualized caching and caching-related traffic handling.

The distributed caching locations in the CHARISMA network have been identified, one for each of the different Converged Aggregation Levels (CALs) of the CHARISMA architecture. The CHARISMA MoBcache system has been designed to provide a virtualised, intelligent and seamless service in a mobile 5G scenario. Virtualization of the cache nodes and cache controller allows the dynamic deployment of network caches, and cache management for CHARISMA subscribers according to their requirements and network status. By dynamically allocating virtualized caches and the cache controller to a virtual network operator (VNO), the CHARISMA content delivery system is also able to provide open access and multi-tenancy services.

Traffic handling has also been investigated in this deliverable: traffic cacheability and the caching operation are analysed, so that a selective traffic offloading strategy can be deployed in order to improve service access latency.

Security issues related to the content delivery system have also been identified and discussed, but the design and implementation of solutions for these issues is left as future work for the second year of the project.

As part of integrating the work of T3.4 with the rest of work package WP3, the CHARISMA control, management & orchestration (CMO) system has been interfaced to the CHARISMA content delivery system; the virtualized caching resources are also to be managed and controlled by the CHARISMA CMO system. Initial integration of the MoBcache content delivery system with the CHARISMA intermediate low-latency demonstrator has also been achieved, and is described in the parallel deliverable D4.1. On-going integration of MoBcache into the overall CHARISMA system pilots is a feature of the T3.4 work in the second year of the project.


1. Introduction

This document presents the initial design of the CHARISMA content delivery system based on in-network caching, as well as the traffic handling solutions that will assist in the optimal management of such a caching system. The definitions of the different components forming the CHARISMA MoBcache, and of their associated interfaces, are described here, as are the interfacing and integration with the CHARISMA control, management & orchestration (CMO) system. The work presented represents the output of the first year's technical work in CHARISMA task T3.4, "Content Caching and Traffic Handling at the RAN node", of work package WP3.

The CHARISMA content delivery system aims to support the three main objectives of the CHARISMA architecture: open access, low latency and security. It has been designed to provide an openly accessible, highly configurable, efficient and transparent in-network caching service that improves content distribution efficiency by distributing content across the different hierarchies of the CHARISMA Converged Aggregation Levels (CALs) in the network. MoBcache has been designed as an open cache access solution using virtualization of the caches and the Cache Controller (CC). This allows the dynamic allocation of virtualised slices of caches and the CC to different service providers (SPs) or virtual network operators (VNOs) over the same common infrastructure. Furthermore, the security issues related to the caching solutions have been discussed; the corresponding solutions will be designed and implemented in future work.

We note that the SW regular expression (regex) technique suggested in the title of this deliverable is not actually discussed in this document, for two reasons. First, in order to meet the three key objectives of the CHARISMA project (low latency, open access and security), a design of the CHARISMA content delivery system is required that includes the definition of the architecture and the integration with the CHARISMA CMO infrastructure, and that goes beyond the SW regex integration of the caching solution. Second, although regex matching has been used for Deep Packet Inspection (DPI), the SW regex technique is only one of various techniques that could be used to identify requested content items; it is not especially optimised for low latency, particularly in the vCache context. We have therefore adopted an alternative DPI approach, described in chapter 2, based on SDN primitives, that targets the identification of traffic amenable to caching at the TCP/IP flow level.

1.1. Background

1.1.1. State-of-the-art

Internet traffic continues to grow rapidly as a consequence of the steadily increasing number of users and the adoption of new bandwidth-intensive services (such as video) by end users. According to recent studies [1], global IP traffic has increased more than fourfold in the past 5 years, and is expected to increase a further threefold over the next 5 years.

In order to reduce network traffic and improve Quality of Service and Quality of Experience (QoS/QoE) for end users, caching schemes have been proposed in content delivery networks (CDNs). In a CDN infrastructure, a set of CDN servers is distributed in the network to replicate content and facilitate its delivery to end users. Commercial CDNs like Akamai and Limelight have been successfully deployed in the current Internet. Recent reports [1] show that about one third of Internet traffic was carried by CDNs in 2012, and by 2017 the traffic crossing CDNs is anticipated to have grown by 51%. Moreover, due to the increasing popularity of smartphones and emerging mobile applications, the mobile Internet is expanding dramatically. The Cisco Visual Networking Index (VNI) report [1] shows that Internet traffic from wireless and mobile devices will exceed traffic from wired devices by 2016, when nearly half of Internet traffic will originate from non-PC devices. Global mobile traffic is anticipated to increase nearly eleven-fold by 2018.

To meet the growing bandwidth demands of mobile users, commercial Long-Term Evolution (LTE) networks are being deployed to significantly increase bandwidth and reduce latency in the mobile network. Though LTE improves network performance between end users and the mobile network, service latency still depends strongly on the distance between the data centre and the exchange point where the mobile network connects to the Internet. Ensuring low service latency is crucial to satisfying the QoS of current applications, especially latency-sensitive ones such as audio and video services. Moreover, LTE backhaul bandwidth will be heavily consumed by duplicated data streams when content (such as highly popular videos) is requested simultaneously and frequently. The authors of [2] analysed the gains of Hypertext Transfer Protocol (HTTP) content caching at the Serving Gateway (SGW) of an LTE wireless network, and found that 73% of the data volume and around 30% of the responses are cacheable. Mobile network caching could thus be a cost-effective way to improve service latency and reduce mobile backhaul traffic by replicating popular, frequently demanded content in Internet Protocol (IP)-based 3G/LTE network elements closer to mobile users.

Historically, content was stored in centralized locations owned and operated by its original creators. Caching content closer to the users is an effective way for network operators to offload traffic from the core segment of the network, where content is usually stored in data centres. Because of the tremendous growth of traffic expected in the next few years, various studies [1][4] have identified the risk of traffic bottlenecks in the core network infrastructure that could severely compromise the performance of services such as Video on Demand (VoD), online gaming, file sharing, media streaming, etc. The deployment of distributed, replicated caches in the metro/access aggregation infrastructure decreases congestion in the core network and ensures fast content delivery in a distributed way. It also enables a more scalable architecture, since content is replicated at several locations of the metro/access networks. As VoD and Over-the-Top (OTT) services increased in volume, network operators began to search for improved models with the intention of reducing the total load on the network, improving user experience and maintaining customer satisfaction, as well as providing a mechanism to monetize content-based distribution, perhaps by enabling advert insertion. Operators therefore adopted content cache support on, or collocated with, less centralized Broadband Remote Access Server (BRAS) devices until a distributed approach was standardized. The Broadband Forum specifications for Digital Subscriber Line Access Multiplexers (DSLAMs) incorporate cache facilities, including content offload interfaces, content injection and Deep Packet Inspection (DPI, an enabler of content-based functions). Such tools have supported operator business models and enabled a distributed content distribution model. For fixed-line services, the edge access aggregation point (OLT or G.fast) is the deepest one might want to go with cache facilities, while for mobile services this model could be insufficient. Since mobile services traverse more devices before reaching the packet gateway, such as the base station itself and, more than likely, a cell-site aggregator to which multiple base stations are attached for efficient backhaul, latency and Quality of Experience demands lead operators to anticipate a deeper, more distributed content cache topology.

The in-network caching system can be managed in both proactive and reactive ways. The proactive approach manages in-network caching by pushing content into the caching devices. This pushing operation can be particularly useful in live audio/video streaming delivery, since it is easy to anticipate the users' requests. The pushing method also allows content that a user could potentially request to be prefetched onto a cache located at a network location that a mobile user will move to. The popularity of the content and the users' social behaviour can also be used to improve the accuracy of predicting the pre-fetched content. On the other hand, on-path caching is applied in the reactive approach: every caching device that forwards the content may temporarily keep the data in its cache.
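As a toy illustration of the proactive model, a popularity-based predictor might simply select the most frequently requested items as prefetch candidates. The following Python sketch is purely illustrative; the function name and policy are our own assumptions, not the CHARISMA prefetching algorithm:

```python
from collections import Counter

def select_prefetch(request_log, capacity):
    """Rank items by observed request counts and return the top
    `capacity` items as candidates to push into a cache proactively.
    (Illustrative assumption: popularity alone predicts demand.)"""
    counts = Counter(request_log)
    return [item for item, _ in counts.most_common(capacity)]

# Example trace: 'video_a' dominates, so it is pushed to the cache first.
log = ["video_a", "video_b", "video_a", "video_c", "video_a", "video_b"]
print(select_prefetch(log, 2))  # -> ['video_a', 'video_b']
```

A real predictor would also weigh social behaviour and mobility, as noted above; this sketch captures only the popularity signal.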

Software Defined Networking (SDN) has been investigated in a study [5] to provide cache as a service. The main idea is to separate the data plane from the control plane that orchestrates the caching and distribution functionalities, and to transparently push content as close as possible to the users. An SDN-based in-network caching service has several advantages. First, it improves network utilisation and minimises usage of the often costly external link on the last mile. Second, it reduces the distribution load for the content service. Third, with transparent caching enabled closer to the end users, the distance between the content server and end users is reduced. Furthermore, SDN allows vendor-agnostic deployment of the caching solution on commodity hardware while providing a centralised network overview, via monitoring, enabling rapid load balancing, which in turn improves end users' QoE. Another study [19] has investigated content-based traffic engineering techniques based on software-defined Information Centric Networking (ICN). There, content-based traffic engineering and load balancing in the caching solution are supported by a controller based on SDN principles: all the cache nodes in the network announce their content to the controller, which, having a global view, performs traffic engineering and load balancing to achieve maximum efficiency.

Network functions virtualization (NFV), realized through virtual network functions (VNFs), offers a new way to design, deploy and manage networking services. NFV decouples network functions such as network address translation (NAT), firewalling, intrusion detection and caching from proprietary hardware appliances, so that they can run in software, accelerating service innovation and provisioning. For example, a virtual router (vRouter) is a software function that provides the functionality of a hardware-based IP router; it is a form of NFV in which the functions of hardware-based network appliances are converted to software that can run on standard commercial off-the-shelf (COTS) hardware. In the same way, in-network caching can be implemented as a software function and virtualized as a virtual cache (vCache). In a more sophisticated distributed caching environment, pieces of the caching software can be moved around entire networks while being managed with a centralized control plane. Caching functions can thus be easily and quickly installed and provisioned in the network, as well as dynamically configured or adapted to network needs. With virtualization of in-network caching, network operators can lower the costs of managing caching services deployed in the network, scale the available hardware cache resources more easily, and achieve security and integrity through separation and isolation.

The two abovementioned technologies, NFV and SDN, together enable dynamicity and flexibility for a caching solution: the former allows flexible provisioning of the caching functions themselves, whereas the latter provides the required rapid, automatic network provisioning.
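To make "caching as a software function" concrete, a vCache can be reduced, in the simplest possible terms, to a small in-memory store with an eviction policy. The sketch below assumes a least-recently-used (LRU) policy and invented names; it stands in for the idea only, not for the CHARISMA vCache VNF:

```python
from collections import OrderedDict

class VCache:
    """Minimal in-memory cache with LRU eviction, standing in for a
    virtualised cache function (vCache). Illustrative sketch only."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None                      # cache miss
        self._store.move_to_end(key)         # mark as recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = VCache(capacity=2)
cache.put("/video/1", b"chunk-1")
cache.put("/video/2", b"chunk-2")
cache.get("/video/1")                        # touch: /video/2 becomes LRU
cache.put("/video/3", b"chunk-3")            # evicts /video/2
print(cache.get("/video/2"))                 # -> None (evicted)
```

Because such a function runs on COTS hardware, instances of it can be spawned, moved and destroyed by the control plane, which is precisely the flexibility the NFV paradigm provides.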

1.1.2. Gap analysis

The realization of caching functionality on top of virtualised computing, storage and network resources presents a series of opportunities for novel features, as well as challenges in how to take advantage of the elasticity of the available resources. In the following, we discuss how the current state of the art fails to address these emerging challenges, calling for corresponding advances on several fronts.


The virtualised character of the resources, and the corresponding software-based realization of the caching functionality, support a flexible model for the dynamic instantiation of cache instances across both time (i.e., selecting when to instantiate a cache) and space (i.e., selecting where to place it). As the vast majority of current caching infrastructure relies on dedicated hardware, the cache placement problem (i.e., at which network locations caches should be deployed) has traditionally been addressed as a one-off, long-term decision pertaining to network planning and dimensioning. The current state of the art therefore only foresees formulations of the static placement of cache locations, subject to long-term observations of demand, usually in the form of a facility location problem, e.g., [12]. Virtualization, however, introduces the aspect of time, decoupling the deployment of a cache instance from physical deployment constraints. This opens new challenges in also identifying when a cache is most needed in a certain network, subject to workload dynamics and estimations of latency and traffic savings. The elasticity of the virtualised resources enables efficient management of the available resources, allowing (virtual) network operators ((V)NOs) to utilize the leased resources only when there is a well-justified need.
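The static formulation alluded to above can be illustrated with a simple greedy heuristic for a facility-location style placement. This is a hypothetical formulation for illustration only; the model in [12] may differ:

```python
def greedy_cache_placement(dist, k):
    """Greedily place k caches among candidate locations so as to
    minimise the summed distance from each user site to its nearest
    cache. `dist[u][c]` is the network distance from user site u to
    candidate location c. (Illustrative sketch of a facility-location
    heuristic, not the formulation of [12].)"""
    n_users, n_sites = len(dist), len(dist[0])
    chosen, nearest = [], [float("inf")] * n_users
    for _ in range(k):
        best_site, best_cost = None, float("inf")
        for c in range(n_sites):
            if c in chosen:
                continue
            # Total distance if we add candidate c to the current set.
            cost = sum(min(nearest[u], dist[u][c]) for u in range(n_users))
            if cost < best_cost:
                best_site, best_cost = c, cost
        chosen.append(best_site)
        nearest = [min(nearest[u], dist[u][best_site]) for u in range(n_users)]
    return sorted(chosen)

# Three user sites, two candidate locations: location 0 is closest to most users.
print(greedy_cache_placement([[1, 5], [5, 1], [1, 5]], k=1))  # -> [0]
```

Virtualization changes the picture precisely because such a one-off optimisation no longer suffices: the same decision must be revisited over time as the workload shifts.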

The elasticity of the resources further poses challenges related to the auto-scaling of the virtual caching infrastructure. As the caching workload may vary over time, the correct amount of resources must be devoted to the caches, avoiding both overloading and underutilization. Though auto-scaling approaches have traditionally been based on the instantiation of new Virtual Machines (VMs) in the cloud environment (scale-out), caching functionality presents important specificities. Namely, the scale-out of virtualized caches is expected to lead to new VM instances with non-initialized caching state, since caches typically require a warm-up period to gradually store the most popular content. The instantiation of new VM images therefore bears the risk of not reaping the benefits of scaling out immediately, i.e., the required warm-up period is expected to reduce the efficiency of the newly spawned cache instance. A potential solution is the use of VM snapshots (i.e., instantiating a copy of an existing VM, including its current state), but this is expected to lead to inefficient utilization of the allocated resources: two identical virtual caches will share the expected load for the common most popular content items, but at the same time consume duplicate resources for the same content, e.g., duplicate copies of the same least popular content. Therefore, the right balance between popular and unpopular content, as well as the right content diversity across virtual cache instances, is required.
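The warm-up penalty of a freshly scaled-out cache instance can be made visible with a toy simulation; the FIFO eviction policy and the synthetic workload are illustrative assumptions, not CHARISMA design choices:

```python
def hit_ratio(requests, capacity, preloaded=()):
    """Hit ratio of a simple FIFO-eviction cache over a request trace.
    Purely illustrative of the warm-up effect: a cold cache pays
    initial misses that a pre-warmed (snapshot-based) instance avoids."""
    cache = list(preloaded)
    hits = 0
    for item in requests:
        if item in cache:
            hits += 1
        else:
            cache.append(item)
            if len(cache) > capacity:
                cache.pop(0)  # FIFO eviction, enough for the sketch
    return hits / len(requests)

workload = ["a", "b"] * 10                                    # skewed, repetitive demand
cold = hit_ratio(workload, capacity=2)                        # fresh instance
warm = hit_ratio(workload, capacity=2, preloaded=["a", "b"])  # snapshot copy
print(cold, warm)  # -> 0.9 1.0
```

The gap between the two figures is exactly the warm-up cost discussed above; the snapshot approach closes it, but at the price of duplicated cached content across instances.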

On another front, given the flexible control of the overall virtual topology, a new opportunity arises regarding the handling of traffic in the context of virtual caching, with a particular focus on achieving low delays. This opportunity stems from the observations that not all traffic can benefit from caching (e.g., conference calls), and that steering traffic through a cache incurs delay overheads related to protocol stack traversal and the cache index lookup operation. While such delays are acceptable in the case of a cache hit, where the latency of retrieving the content from its origin is avoided, the same does not hold for a cache miss: in this case, latency increases compared to not employing a cache at all. An important challenge then relates to identifying the traffic amenable to caching and steering it through the cache or not, i.e., bypassing caches when necessary. Recently proposed approaches aim at maintaining up-to-date state for the content currently held in the available virtual caches (vCaches), thus supporting an explicit lookup of all requested items before taking a forwarding decision: each requested content item is looked up in a (logically) centralized repository in order to locate the cache location(s), if any, that hold a copy, and the request is subsequently forwarded accordingly [13]. This approach is inherently tied to large overheads related to (i) the deep packet inspection (DPI) functionality required to identify the requested content item, and (ii) the maintenance of up-to-date state for all the content in the vCaches: cached content may have a limited lifetime within the cache, calling for frequent updates of the state maintained in the aforementioned (logically) centralized repository, and the volume of information for this mechanism may be considerably high, especially for multiple cache locations of large capacity. Later in this document we discuss an alternative approach which, based on SDN primitives, targets the identification of traffic amenable to caching at the TCP/IP flow level.
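A minimal sketch of such a flow-level decision, i.e., the kind of match/action rule an SDN controller could install without per-request DPI, might look as follows. The port-based policy here is an illustrative assumption, not the CHARISMA classifier described later:

```python
def steer_flow(proto, dst_port):
    """Decide, from TCP/IP header fields only, whether a new flow is
    steered through the vCache or bypasses it. The chosen ports and
    the policy itself are illustrative assumptions."""
    cacheable_ports = {80, 8080}          # e.g. plain-HTTP content delivery
    if proto == "TCP" and dst_port in cacheable_ports:
        return "via_vcache"
    return "bypass"                       # e.g. conference calls, other traffic

print(steer_flow("TCP", 80))    # -> via_vcache
print(steer_flow("UDP", 5004))  # -> bypass (real-time media gains nothing)
```

Because the decision uses only the flow 5-tuple, it avoids both the DPI cost and the centralized per-content state identified as overheads above.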

Finally, considering caching in the context of multi-tenancy scenarios opens up opportunities for new models

of co-operative caching and business relationships between Virtual Network Operators (VNOs). As the same

physical infrastructure can support multiple tenants, it is possible that virtual cache instances belonging to

different VNOs are physically collocated on top of the same hardware. Since the communication between

such closely located instances is expected to exhibit very low latencies - a major objective for
CHARISMA - a new model of VNO inter-domain relationship emerges, foreseeing the peering of virtual caches.

In this model, caches of different VNOs are configured to act as siblings in a co-operative caching scheme.

Such an approach is expected to yield mutual benefits for VNOs, which may in this manner avoid excessive

traffic across the physical network infrastructure. Note that the dedicated use of physical realizations of

caches in the current network model presents no such opportunity, with the exception of co-location of

operators at Internet Exchange Points, which is however based again on dedicated hardware in isolated

cache installations. The IETF Content Delivery Networks Interconnection (CDNI) working group
(https://datatracker.ietf.org/wg/cdni/charter/) has identified similar objectives, foreseeing the
interconnection of cache instances of different CDNs. However, the related approaches focus on mechanisms
explicitly tailored to the interconnection of overlay CDNs, and as such they do not provide experience or
technical solutions in the context of the network functions virtualization (NFV) paradigm adopted by CHARISMA.

1.2. Design requirements

The design of the CHARISMA content delivery system should meet the three key drivers (low latency, security

and open access) targeted by CHARISMA.

It is a challenge to provide continuous low-latency service in a mobile scenario where user equipment
switches between different wireless/mobile networks. For example, in the automotive – bus use case presented
in Section 4.2.3 of CHARISMA’s Deliverable D1.1, intelligence such as caching, switching and routing has been
defined closer to end users to assist in reducing network latency. However, providing such intelligence in a
wireless/mobile scenario requires a sophisticated hardware design. Hence, two kinds of cache node are being
designed and implemented in CHARISMA: a dedicated cache box called MoBcache and

virtualized caches (vCaches). MoBcache is specially designed for mobile scenarios in the CHARISMA system

with two Wi-Fi interfaces (802.11n and 802.11ac) and one LTE dongle, in order to keep service continuity

while a user moves about and hand-overs are made between networks. The vCache will be deployed as a

Virtual Network Function (VNF) in the different CHARISMA Converged Aggregation Levels (CALs) and will be

managed by the SDN-based CHARISMA management system. In this way, caching functions can be easily and

quickly installed and provisioned in the network, as well as dynamically configured or adapted to the network

needs.

In order to meet the requirements of the CHARISMA open access approach, the CHARISMA content delivery
system implements virtualization of the cache nodes and the cache controller, enabling the deployment of
in-network caching as software functions running directly on commodity network hardware. The system

allows virtualised slicing of in-network cache resources to different service providers (SPs) over the same

common infrastructure. Furthermore, the CHARISMA content delivery system is able to allocate virtual slices

on top of the physical infrastructure to different network operators (i.e., not just service providers) and lease

the remote application programming interfaces (APIs) to offer access to the virtualised network resources.

Finally, security issues related to the deployment of content caching in the network need to be considered

in the CHARISMA content delivery system. The security requirements considered by the CHARISMA system

include:

o Content Access Trackability: content providers are increasingly interested in tracking every access to
their content (e.g., YouTube, Netflix) to avoid copyright problems or legal issues.

o Content Confidentiality: content confidentiality must be enforced for authorized users.

o Access Privilege Violation: without countermeasures, an end-user could still retrieve popular content
from a cache even after its access privilege to this content has been revoked; the content origin server
would not be aware of this access violation.

o Cache Monitoring Attacks: a malicious user can monitor the behaviour of other users served by the same
local cache (e.g., DSLAM, home gateway); implementing smart caching policies can prevent this.


2. CHARISMA Content Delivery System

2.1. Objectives and System Overview

This document aims at providing a CHARISMA content delivery system that meets the key drivers of the
CHARISMA project: low latency, open access and security. The CHARISMA content delivery system is designed
to provide an openly accessible, highly configurable, efficient and transparent in-network caching service
that improves content distribution efficiency by distributing content across different parts of the network.
In the CHARISMA vision, we do not plan to support caching of all kinds of Internet traffic; rather, we focus
only on HTTP content caching. HTTP traffic is used by the CHARISMA content delivery system as a set of proofs
of concept, demonstrating the latency, virtualisation and security benefits achieved by deploying caches at
different levels of the network. One potential issue is that the amount of

HTTPS traffic used by some popular websites such as Facebook and YouTube is currently increasing.

Measurements show that approximately 35% of total HTTP traffic is HTTPS [2]. In CHARISMA, we assume
this issue can be resolved through agreements between network operators and OTT players (or content providers),
which would give network operators the necessary control over content delivery. This issue could lead to a
standardisation initiative. Today, the boundary between network-related header fields and content-related
packet parts is blurred. Ideally, packet header fields related to network functions such as routing should be
open and set correctly by the originator of a packet. In practice, however, these fields are sometimes not
interpreted or are even deleted by operators. A typical example is the QoS-relevant Differentiated Services
Code Point (DSCP) field in the IP header: some operators ignore this field, and others even clear it. Complex
DPI algorithms are put in place to infer the user's intent, although it could be stated explicitly in
appropriate header fields. Instead, a user indication declaring a stream eligible for caching would simplify
the task of the network.

Table 1: Qualitative effect of distance on throughput and download time

Distance (Server to User)     Network RTT   Typical Packet Loss   Throughput                     4GB DVD Download Time
Local (< 160 km)              1.6 ms        0.6%                  44 Mbps (high quality HDTV)    12 min
Regional (800 – 1600 km)      16 ms         0.7%                  4 Mbps (basic HDTV)            2.2 hrs
Cross-continent (5000 km)     48 ms         1.0%                  1 Mbps (SDTV)                  8.2 hrs
Multi-continent (10000 km)    96 ms         1.4%                  0.4 Mbps (poor)                20 hrs

CDNs can improve the QoS of content delivery services because they deliver content from a location near to

the end consumers of the content. Table 1 illustrates the impact of deploying CDN content servers in different


locations in the network on throughput and network round-trip time (RTT) [6]. Clearly, both RTT and
throughput improve when content servers are deployed close to the end users.
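The download times in Table 1 follow directly from the quoted throughputs; a quick sanity check (assuming decimal units, so 4 GB = 32,000 Mbit) approximately reproduces the table's values, the remaining differences being attributable to rounding:

```python
# Rough reproduction of Table 1's download-time column (decimal units assumed;
# the table's own figures appear to use slightly different rounding).

def download_hours(size_gb, throughput_mbps):
    """Hours needed to transfer size_gb gigabytes at throughput_mbps."""
    return size_gb * 8000 / throughput_mbps / 3600

local_minutes = download_hours(4, 44) * 60   # ~12 minutes at 44 Mbps
regional_hours = download_hours(4, 4)        # ~2.2 hours at 4 Mbps
```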

We use the transparent caching approach in the CHARISMA content caching system. Transparent caching differs
considerably from most traditional content caching solutions, which are usually managed by CDN providers
such as Akamai or Google rather than by the network operators. In traditional CDN solutions, the network
operator has little or no control over the cache servers, and as a result has little visibility into the
actual productivity of those servers or into what is being delivered from them. Transparent caching
transparently intercepts the appropriate packets and content requests, so that clients do not need to
configure their user equipment to

access the cache. Through deploying transparent caching inside the network, network operators are able to

address a broader range of Internet content. With the knowledge of the network status and the full control

of the network, transparent caching deployed by network operators can make better decisions about which

content can and should be cached locally to optimize the network. By deploying intelligent caches

strategically throughout their networks, operators can cache and deliver popular content close to subscribers,

thus reducing the amount of transit traffic across their networks. Popular content can be delivered very close

to the edge, and content of decreasing popularity can be cached further back in the network, forming a

hierarchical structure.

In the CHARISMA content delivery system, caches will be deployed at the four converged aggregation levels
(CALs) defined in CHARISMA deliverable D1.1 and shown in Figure 2-1. A Cache Controller (CC) is designed to
manage and configure the cache nodes distributed in the network. The basic function of the CC is to separate
the management of content flow and placement from the management of the content delivery infrastructure
(e.g., forwarding and caching equipment). Through a collaborative caching solution among the caches at the
four aggregation levels, managed by the cache controller, the CHARISMA caching system can optimize content
placement and provide better QoE for end users, with both low latency and high throughput.

Figure 2-1: CHARISMA four converged aggregation levels (CALs) architecture

The CHARISMA project also targets the provision of an open cache access solution through the virtualization
of the caches and the CC. Instances of virtual cache services are provided on cache nodes in the network,
either on specialist hardware (MoBcache) or on Ubuntu-based cache nodes. This allows the dynamic allocation
of virtualised slices

of caches and CC to different service providers (SPs) or Virtualized Network Operators (VNOs) over the same

common infrastructure. Figure 2-2 shows the necessary components (in yellow) of the CHARISMA

architecture required to realize the CHARISMA content delivery system.

Figure 2-2: CHARISMA Caching solution with management

These components include:

• vCache and vCC as VNFs, and a web manager as an Element Manager (EM), created for a specific VNO;

• vCC interfacing to Service Monitoring and Analytics (M&A) to collect the required monitoring data for a
specific VNO;

• the cache system interfacing to the VNF manager, which in turn interfaces with the orchestrator;

• a Cache Policy Manager running as a policy module within the Service Policy Manager;

• the Service Policy Manager interfacing to Service M&A for:

  • caching-related requirements passed to Service M&A;

  • user location, throughput, RTT, packet loss, etc.;

• Service M&A interfacing to the VNOs to collect the required monitoring data.

In the control plane of the CHARISMA caching system, there are mainly two kinds of control: one is the slice
management for the different VNOs, operated by the VNF manager and orchestrated by the Service Orchestrator;
the other is the cache/pre-fetch management controlled by the vCC of a specific VNO, as shown in Figure 2-2.
In slice management, each VNO is assigned a cache slice, including one vCC and several associated vCaches
located in different parts of the network, with the required resources such as network bandwidth, CPU,


memory and hard-disk storage. In cache/pre-fetch management, the CHARISMA content delivery system allows
VNOs to configure, deploy and manage their own network slice via the vCC, without intervention from the
overall CHARISMA platform. With the cache slice configuration provided by the CHARISMA content delivery
system, each use case can be configured with a different set of requirements and parameters in the network
devices by requesting its own cache slice.

Figure 2-3 presents the procedures for cache slice management and cache/pre-fetch management. When the
CHARISMA content delivery system receives from a VNO a demand for a cache service, expressed as a cache
slice, the Cache Policy Manager first needs to communicate with the Service M&A to verify whether the
affected network elements can support both the new and the legacy functionalities. Based on the VNO
requirements and the current resource status, the Cache Policy Manager can make an optimised decision on
where to add vCaches, how many resources to allocate to each vCache, etc. In this way, the cache slice can
be initiated for the VNO. During a cache slice's runtime, the Cache Policy Manager is also responsible for
reconfiguring the slices with a global view of the network status, while still matching the requirements of
the VNOs.

As described before, each VNO is assigned a vCC that allows it to configure and manage its assigned
resources on its own. The interface between the vCC and the Service M&A allows the VNO to manage its own
slice and perform caching/prefetching operations with carefully designed strategies.

Figure 2-3: Call flows for the management of cache slice and cache/pre-fetch


2.2. CHARISMA distributed caching framework

2.2.1. Identification of cache location

In this section we focus on the feasibility of, and potential methods for, an efficient caching solution in
access and aggregation networks. We define a set of key network elements enabling cache functionality in the
network. Figure 2-1 depicts a scenario where storage and caching functions are enabled in: the customer
premises network (Home GateWay (HGW) or Customer Premises Equipment (CPE)) at CAL0; the mobile access
network (micro base station or small cell) at CAL1; the mobile access network (Advanced Remote Node (ARN) or
eNodeB) at CAL2; and the backhaul network (data centres) at CAL3. By caching in the access network, and
through collaborative caching among these devices, content can be intelligently replicated closer to the
mobile users and efficiently delivered to/from different parts of the network.

Caching on Home / Residential Gateways

Today’s dominant techniques for delivering Internet access to the home are based on copper access
technologies, Fibre to the Home (FTTH) or Fibre to the Building (FTTB). A converged residential gateway
integrating DSL/GPON/Ethernet, Wi-Fi and 3G/LTE (such as femtocells, nanocells, etc.) has been designed to
provide a unified service and reduce mobile backhaul traffic. Several advantages motivate this converged
gateway. First, a mobile device attached to a Wi-Fi Access Point (AP) alone lacks services such as SMS,
telephony and mobility. Second, femtocells support these services well; however, the data is normally
tunnelled between the Internet and the mobile network, which still consumes considerable bandwidth in the
mobile network and may deteriorate the user experience. In-network caching can further be enabled in a
converged residential gateway to improve network performance, especially when the residential gateway is
located in a public place or at an enterprise.

Caching on eNodeB, Macrocell / Cell-Site Gateways

As previously indicated, content cache topologies for mobile services are anticipated to be deeper and more
distributed than those observed in fixed-line residential services. In a mobile network, the first location
where a content cache could be placed is at the base station itself.

Caching in the Data Centre in the CO

The content distribution servers controlled by the network operator can also be deployed in the Central
Office (CO). Measurements of an FTTH access network [7] with 30,000 customers and of an xDSL network [8]
show that almost 50% of the requests are cacheable and that traffic could be reduced by one third. A
collaborative caching algorithm can further improve this benefit. According to a study of the interaction of
telco-CDNs (ISP regional CDNs) [9], an additional 12% to 20% of global traffic can be saved by using
collaborative CDN caches located in different COs.

2.2.2. Virtualized cache as a VNF

The CHARISMA Cache Nodes (CNs) are responsible for caching the appropriate HTTP content and delivering

it to end users that have requested it.


Figure 2-4: CHARISMA Cache Node Software Components

Each CN includes several software components, shown in Figure 2-4:

• The Local caching Management Primitive (LMP) manages the caching/prefetching behaviour of the local
cache node.

• The NetConf Server (NCS) allows managing the configuration of the LMP via the Network Configuration
Protocol (NETCONF), a standard protocol used to manage and configure network devices.

The LMP component is further divided into two components: the Cache Proxy (CP) and the Pre-Fetch Proxy
(PFP), one for reactive caching and the other for prefetching (proactive caching).

• Reactive (or opportunistic) caching is a technique that stores recently delivered content so that it can
be quickly accessed at a later time. The cached content must have already been explicitly requested by some
user; otherwise, it would never be stored at that location.

• Prefetching (proactive caching) is a technique that retrieves and stores content at a location in the
network according to the requests of OTT service providers. The content is thus pre-fetched to the
in-network caching device without a prior user request for it.
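A minimal sketch of these two behaviours might combine an LRU store with both a reactive path and a prefetch entry point. This is illustrative only: the actual CP and PFP operate on HTTP objects with full proxy semantics, and the class and method names below are hypothetical:

```python
from collections import OrderedDict

# Minimal sketch of the two LMP behaviours: reactive caching (store on first
# delivery) and prefetching (store before any request). Strings stand in for
# HTTP content; the real CP/PFP are full HTTP proxies.

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # insertion order tracks recency (LRU)

    def get(self, url, fetch_from_origin):
        if url in self.store:               # cache hit: serve locally
            self.store.move_to_end(url)
            return self.store[url]
        content = fetch_from_origin(url)    # cache miss: reactive caching
        self._insert(url, content)
        return content

    def prefetch(self, url, content):       # proactive caching (PFP path)
        self._insert(url, content)

    def _insert(self, url, content):
        self.store[url] = content
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item
```

A prefetched item is served as a hit on its first request, which is exactly the latency benefit proactive caching targets.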

The basic caching procedure provided by the CHARISMA content delivery system is illustrated in Figure 2-5.
We assume that Cache Node 1, Cache Node 2 and Cache Node 3 are located in CAL0, CAL1 and CAL2, respectively.
When an end user (UE) sends a content request, the nearest cache node, Cache Node 1, behaving as a proxy,
checks whether the content is cached locally. On a cache miss, Cache Node 1 talks to the other cache nodes
distributed in the network. If the requested content is not cached anywhere in the CHARISMA caching system,
Cache Node 1 sends the content request to the original content server to retrieve the content, and then
forwards it to the UE. If the requested content is available in Cache Node 2 in CAL1, Cache Node 1 retrieves
the content from Cache Node 2 and forwards it to the UE. Since the distance (network distance or latency)
between CAL0 and CAL1 is quite short, the UE obtains the content with lower latency.
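The procedure can be sketched as a lookup along the chain of cache nodes, nearest CAL first, with the origin server as the fallback. The helper names are hypothetical and the sketch omits the proxying details of Figure 2-5:

```python
# Sketch of the Figure 2-5 procedure (hypothetical helper names): the CAL0
# cache node queries peer cache nodes at higher CALs before falling back to
# the origin server, and stores a local copy after an origin fetch.

def resolve(url, cache_chain, origin):
    """cache_chain: dicts for the Cache Nodes at CAL0, CAL1, CAL2 (nearest
    first). Returns (content, source) where source names the serving level."""
    for level, cache in enumerate(cache_chain):
        if url in cache:                    # hit at this CAL: serve from there
            return cache[url], f"CAL{level}"
    content = origin(url)                   # miss everywhere: go to the origin
    cache_chain[0][url] = content           # Cache Node 1 keeps a copy (reactive)
    return content, "origin"
```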


Figure 2-5: Content Delivery Procedure Initiated by User Equipment

The NCS is integrated in the CHARISMA CNs in order to support the management of the caches distributed in
the network (including enabling/disabling cache nodes and configuring cache size, caching algorithm, etc.)
through a standard network management and configuration protocol. The NETCONF protocol uses a remote
procedure call (RPC) paradigm. Between a NETCONF client and server, the contents of both the request and the
response are fully described in XML DTDs or XML schemas. A client encodes an RPC in XML and sends it to a
server using a secure, connection-oriented session.
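For illustration, such an RPC might be built as follows. This is a hedged sketch: the `cache` subtree and its leaves (`cache-size-mb`, `algorithm`) are hypothetical placeholders and would in practice be defined by a YANG model for the device:

```python
import xml.etree.ElementTree as ET

# Illustrative NETCONF <edit-config> RPC for reconfiguring a cache node's LMP.
# The "cache" subtree below is a hypothetical device model, not a standard one.

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def edit_config_rpc(cache_size_mb, algorithm, msg_id="101"):
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": msg_id})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    ET.SubElement(ET.SubElement(edit, f"{{{NC}}}target"), f"{{{NC}}}running")
    cfg = ET.SubElement(edit, f"{{{NC}}}config")
    cache = ET.SubElement(cfg, "cache")     # hypothetical cache configuration
    ET.SubElement(cache, "cache-size-mb").text = str(cache_size_mb)
    ET.SubElement(cache, "algorithm").text = algorithm
    return ET.tostring(rpc, encoding="unicode")

xml_payload = edit_config_rpc(2048, "lru")  # sent over an SSH-secured session
```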

To control the behaviour of the CP, the NCS modifies the configuration file and restarts the CP each time
the configuration is changed. We use the Netopeer tool as the implementation of the NCS; the installation of
the NCS is therefore in fact the installation of Netopeer. The configuration takes place through the
transAPI interface, whose broader role is to allow the NCS to change the settings of various devices (e.g.,
network interfaces), of the system (e.g., time zone), and of the caching program running on the machine. The
Netopeer tool provides two transAPIs, for network interfaces and for system configuration; they can be found
in the repository “netopeer/transAPI”. To configure software programs or other devices, customized transAPIs
need to be developed. Fortunately, libnetconf offers the lnctool, which facilitates the development of a
transAPI by generating the source code and the Makefile from a YANG model; YANG is a data modelling language
that can be used to model both the configuration data and the state data of network elements.


The overall creation and deployment of caches as VNFs goes through the creation, configuration and
instantiation of the corresponding VMs, as well as the configuration of the network that enables the
transparent operation of the cache in the network.

The overall realization of the CN functionality is based on software components installed and deployed on
top of two distributions of the Linux operating system (Ubuntu 14.04 and OpenWrt Chaos Calmer 15.05). The
described CN architecture (see Figure 2-4) has been implemented and tested on CHARISMA partner JCP-Connect’s
cache nodes, in which the described protocol stack is installed on top of commercial off-the-shelf (COTS)
hardware. The realization of a CN within a virtual machine (VM) is therefore straightforward.

The configuration of the network for the support of cache VNFs includes the configuration of the underlying
switching elements of the VI so as to deliver traffic to the virtual CNs. This corresponds to simple flow
rules applied on the switches by the network (SDN) controller of the architecture. This is accomplished
during the deployment of the overall service and is handled by the VIM.

Next, the delivery of the traffic to the CP/PFP components of the CN is based on the appropriate inspection
of the corresponding flow characteristics. In a simple approach, all traffic is delivered to the CN VM,
which is configured to inspect the destination port of incoming flows. This is accomplished through the
appropriate configuration of the VM operating system, e.g., the configuration of iptables2. Such a
configuration is handled by the Element Manager of the corresponding VNF. Flows targeting port 80 are
delivered to the CP/PFP components; all other flows are forwarded further towards their original
destination. In an alternative approach, the inspection of incoming flows takes place in the (v)switching
fabric, so as to relieve the CN VM of the associated overheads. This is considered especially important for
high traffic loads. For instance, a vSwitch can be instantiated as a separate VNF, allowing the provisioning
of the required resources for the inspection of the incoming flows.
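In the simple in-VM approach, the port-based classification could be realized with iptables rules such as the ones emitted below. This is a sketch: the interface name and the proxy port are illustrative assumptions, as is the use of the REDIRECT target to hand port-80 flows to a locally listening CP:

```python
# Sketch of the port-based steering inside the CN VM, expressed as the
# iptables rules an Element Manager might install. Interface name and proxy
# port are assumptions, not CHARISMA-mandated values.

def steering_rules(lan_if="eth0", proxy_port=3128):
    return [
        # Redirect incoming HTTP (port 80) flows to the local CP/PFP proxy.
        f"iptables -t nat -A PREROUTING -i {lan_if} -p tcp --dport 80 "
        f"-j REDIRECT --to-ports {proxy_port}",
        # All other flows are forwarded unchanged towards their destination.
        f"iptables -A FORWARD -i {lan_if} -j ACCEPT",
    ]
```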

2.2.3. Virtualized cache controller as a VNF

2.2.3.1. The description of the cache controller

The objective of the design and implementation of the cache controller is to allow VNOs to manage the
distribution of content in the caching system (i.e., in the vCaches assigned to them). The cache control and
management operated by the CHARISMA Cache Controller (CC) is based on the NETCONF protocol. NETCONF, as
shown in Figure 2-6, provides mechanisms to install, update and delete the configuration of network devices,
such as routers, switches and firewalls. It uses Extensible Markup Language (XML) or JavaScript Object
Notation (JSON) data encoding for the configuration data as well as for the protocol messages. The NETCONF
protocol operations are realized as remote procedure calls (RPCs).

2 https://en.wikipedia.org/wiki/Iptables


Figure 2-6: Netconf Architecture

Netopeer is a set of NETCONF tools built on the libnetconf library. It allows operators and developers to
connect to their NETCONF-enabled devices and control them via NETCONF. In our cache system, the Netopeer
client runs on the CC, and the Netopeer server runs on each CN.

Figure 2-7: CHARISMA Content Caching Architecture

The CC is designed to manage and configure all the CNs distributed in the network. Its components mainly
include:

• a NETCONF client, implemented with netopeer-cli, allowing the CC to manage the CNs via the NETCONF
protocol;

• the CCD daemon, responsible for communicating with NETCONF and external modules in an automated way;
the CCD is able to configure the CNs and send control messages to the LMP automatically;

• a DB, storing the full information on the CNs, the user-requested content, and the content cached by the
LMP on each CN.


Figure 2-8: CHARISMA Prefetching Procedure Controlled by CC

Figure 2-8 shows the CHARISMA prefetching procedure, controlled by the CC and initiated by the OTTs or
content providers, who have an incentive to optimize content delivery through caching. When a content
provider receives many requests for the same content from many different users in the same area, it can
decide to activate content caching for this specific content. The content provider sends its requirements to
the CC, based on the agreement between the OTT and the network operator. Using the information on user
location and network performance, the CC makes an optimised caching decision and instructs the CNs to
prefetch the content requested by the OTT.

Similarly to the CNs, the CC has been realized on top of the Ubuntu 14.04 Linux distribution, installed on
COTS hardware. As a result, the realization of the CC as a VM is straightforward.

2.2.4. VNFD Descriptors

As defined in [20], a VNF Descriptor (VNFD) is “a deployment template which describes a VNF in terms of
deployment and operational behaviour requirements”. As the VNFs participate in the networking path, the VNFD
also contains information on connectivity, interfaces and KPI requirements. The latter is critical for the
correct deployment of the VNF, as it is used by the NFVO to establish appropriate Virtual Links within the
NFV Infrastructure (NFVI) between VNF Component (VNFC) instances, or between a VNF instance and the endpoint
interface to other Network Functions. This section attempts a specification of the VNFDs that we will use
for the implementation of the cache node and cache controller VNFs. The described VNFD has been defined by,
and is currently used in, the T-NOVA project [21].

Assuming a VNFD for a VNF that is composed of more than one VDU, the VNFD contains the following segments:


• VNFD preamble: the VNFD preamble provides the necessary information on the release, id, creation,
provider, etc.

• Virtual Deployment Unit (VDU) segment: the VDU VNFD segment provides information about the resources
required in order to instantiate the VNFC. The configuration of this part may be extremely detailed and
complex, depending on the platform-specific options provided by the developer. However, it should be noted
that the more specific the requirements stated here are, the less portable the VNF might be, depending on
the NFV Orchestrator (NFVO) policies and the service-level agreement (SLA) specifications. It is assumed
that each VDU effectively describes the resources required for the virtualisation of one VNFC. Fields within
the VDU VNFD segment include:

i) IT resources and platform-related information;

ii) the internal structure of the VNF, the connection points of the VDU (name, id, type) and the virtual
links to which they are connected;

iii) monitoring parameters to be collected for this VNF, including system-related information and
VNF-specific metrics;

iv) scaling parameters defining the thresholds for scaling in/out.

• VNF lifecycle events segment: this section of the VNFD relates to lifecycle events; it defines the
technology used for interfacing with the VNF, as well as the appropriate commands allocated to each
lifecycle event.
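As a hedged illustration of these three segments, a vCache VNFD might be structured as follows. The field names only approximate the T-NOVA descriptor style and are not normative; all values are placeholders:

```python
# Illustrative vCache VNFD following the three segments described above
# (preamble, VDU, lifecycle events). Field names and values are placeholders.

vcache_vnfd = {
    # VNFD preamble: identification and provenance of the descriptor
    "release": "1.0",
    "id": "vnfd-vcache-001",
    "provider": "CHARISMA",
    "description": "Transparent HTTP cache node (CP/PFP + NETCONF server)",
    # One VDU per VNFC: resources, connection points, monitoring, scaling
    "vdu": [{
        "id": "vdu-cache",
        "resource_requirements": {"vcpus": 2, "memory_mb": 4096, "storage_gb": 100},
        "connection_points": [
            {"id": "cp-mgmt", "type": "netconf"},
            {"id": "cp-data", "type": "http-traffic"},
        ],
        "monitoring_parameters": ["cpu_util", "cache_hit_ratio"],
        "scaling": {"scale_out_threshold": {"cpu_util": 80}},
    }],
    # Lifecycle events segment: commands bound to each lifecycle event
    "lifecycle_events": {
        "start": "systemctl start charisma-cache",
        "stop": "systemctl stop charisma-cache",
    },
}
```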

2.3. CHARISMA traffic handling

2.3.1. Problem statement

Caching in CHARISMA is seen as a means to reduce latency by serving requested content from locally stored
copies3. However, a careful look at the operation of caches reveals the potential introduction of additional
delay components in the end-to-end communication of end hosts. Such delays first stem from the need for the
user (HTTP) requests to reach the vCache instance, so that a lookup can be performed on the respective cache
index and a potentially cached copy of the requested content item can be retrieved. As vCaching in CHARISMA
is primarily considered at the application level4 (i.e., HTTP caching), request packets are required to
traverse the entire protocol stack to reach the application-level cache instance (we denote the associated
overhead as t_stack). In the particular CHARISMA context of virtualised resources, this traversal also
includes the hypervisor level, i.e., packets need to pass from the hypervisor to the guest operating system
of the VM supporting the vCache (VNF), where the protocol stack traversal takes place. Once request packets
reach the cache at the application level, an additional delay is imposed by the lookup operation (we denote
the associated overhead as t_cache). Figure 2-9 schematically shows the described constituent delay
components. User equipment (UE) sends its content (HTTP) requests towards the content provider. An
Intelligent Management Unit (IMU)/micro-Data Center (μDC) hosting a vCache can be reached through a
switching device. We assume that a t_a delay overhead is associated with reaching the vCache network

3 Additional benefits include the reduction of workload at the origin server and the overall reduction of traffic within the network. 4 See Section 2.1 for a discussion on an alternative approach.

Page 23: Converged Heterogeneous Advanced 5G Cloud-RAN Architecture … · 2016-09-13 · CHARISMA – D3.3 – v2.0 Page 1 of 43 Converged Heterogeneous Advanced 5G Cloud-RAN Architecture

CHARISMA – D3.3 – v2.0 Page 23 of 43

location. A corresponding t_b delay overhead is considered between the IMU and the content provider. In

general, we assume that:

t_stack < t_a, and t_stack << t_b

t_cache < t_a, and t_cache << t_b

For simplicity, we consider that these delay overheads remain the same in both communication directions,

e.g., the stack traversal delay overhead of an HTTP request is considered to be equivalent to the stack

traversal delay overhead of the corresponding HTTP response. A more detailed assessment of these delays

will be conducted in our future work.

Figure 2-9: Delay components in the case of v-Caching

We consider the following cases with respect to the lifecycle of a content (HTTP) request:

Cache hit

In this case, a request is directed to the vCache where a cached copy of the requested content is found and

subsequently delivered back to the User Equipment (UE). The content provider is not contacted. While the

t_cache and t_stack delay overheads are incurred, a cache hit avoids t_b.

Cache miss

In this case, the request is directed to the vCache where a cached copy of the requested content is not found

and the request is subsequently delivered to the content provider who returns the content back to the UE.

In this case, all delay overheads are incurred, namely t_cache, t_stack and t_b (t_a is incurred in all considered cases).

Cache bypass

In this case, the request is not delivered to the vCache but rather delivered directly to the content provider.

In this scenario, the t_cache and t_stack delay overheads are avoided, while the t_b overhead is incurred.
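The trade-off between these cases can be captured in a back-of-the-envelope model. The sketch below (illustrative only; the delay values are assumptions, not CHARISMA measurements) computes the expected delay when steering a flow through the cache with hit ratio h, and the break-even hit ratio above which caching beats bypassing:

```python
def delay_via_cache(t_a, t_b, t_stack, t_cache, hit_ratio):
    # Expected request latency when traffic is steered through the vCache:
    # a hit avoids t_b entirely; a miss pays every delay component.
    return t_a + t_stack + t_cache + (1 - hit_ratio) * t_b

def delay_bypass(t_a, t_b):
    # Request forwarded directly to the content provider.
    return t_a + t_b

def break_even_hit_ratio(t_b, t_stack, t_cache):
    # Caching pays off once hit_ratio * t_b > t_stack + t_cache.
    return (t_stack + t_cache) / t_b

# Illustrative values (ms), consistent with t_stack, t_cache < t_a << t_b:
t_a, t_b, t_stack, t_cache = 2.0, 40.0, 0.5, 0.3
```

With these assumed values the break-even hit ratio is only 2%; however, for traffic that is not amenable to caching the hit ratio is zero by definition, so bypassing such flows always avoids the t_stack and t_cache penalty.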

Given the above, we next make the following important observations:


Not all traffic is amenable to caching.

The above assessment relies on the implicit assumption that the user traffic (or, equivalently, the UE flow)

corresponds to an HTTP request for content that can be cached. However, not all UE flows

correspond to HTTP requests, as is the case, for instance, for flows in a Skype session, and not all

HTTP requests are for cacheable content; for instance, frequently changing content is often marked by content providers as non-cacheable. Obviously, forwarding such traffic to an HTTP-level vCache yields effectively the same result as a cache miss, i.e., paying the t_cache and t_stack delay overheads. At the same time, the use of HTTPS leads to encrypted communication and hence to non-identifiable, non-cacheable content. A potential solution for these cases would be to rely on a Deep Packet

Inspection (DPI) component responsible for identifying flows carrying cacheable HTTP traffic.

However, such an approach would also introduce the delay overheads associated with the DPI functionality, as well as the need to allocate additional resources for this operation.

Not all content can be cached.

Though the cost of storage resources has been constantly decreasing, storage resources are still finite and do not necessarily allow the caching of all the potentially requested content in the network.

As a result, a cache replacement policy is traditionally employed to select the content to keep cached,

trying to identify content items of the highest popularity [5].
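As a concrete illustration of such a policy, a minimal Least Recently Used (LRU) cache — one of the replacement policies configurable on CHARISMA cache nodes — can be sketched as follows (illustrative only; capacity and interface are assumptions, not the project's implementation):

```python
from collections import OrderedDict

class LruCache:
    """Minimal LRU replacement-policy sketch: evicts the least recently
    used item once the configured capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                      # cache miss
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the LRU entry
```

Recency is only a proxy for popularity; policies such as LFU instead count accesses, which is why both appear among the configurable options in Section 3.3.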

2.3.2. The CHARISMA approach

In view of the above observations, CHARISMA’s objective is to reduce the amount of unnecessary delay

overheads suffered in cache misses, by trying to identify the UE flows that are more likely to lead to an actual

cache hit. Conversely, this pertains to identifying the UE flows of applications not amenable to caching (e.g., conference calls), as well as the UE flows of applications amenable to caching but with a high likelihood of a cache miss. The objective then is to bypass the vCache for all such identified flows, so as to avoid the aforementioned overheads, while steering the remaining UE flows to the available vCache(s). Our

approach aims at achieving this identification at the flow level, by identifying destination IP ranges that

correspond to the delivery of content that is amenable to caching. Choosing this identification at the flow

level serves the following purposes:

We avoid the overheads of keeping track of what content is actually cached in the available vCaches, and of observing what content is requested, before deciding whether to forward to the vCache.

Though such approaches have been proposed in the past [14], we consider them to inherently carry large overheads, related both to the required DPI functionality for identifying the content items requested by UEs, and to maintaining up-to-date state on all content cached throughout the network.

By identifying flows, we can define and apply corresponding forwarding rules towards the vCache or

directly towards the content provider.

The solution envisioned by CHARISMA is based on the gradual identification of flows amenable to caching [6]

based on the close monitoring of the deployed vCache(s) and the subsequent handling of the flows on the

[5] Additional criteria have been identified in the literature [12], which are beyond the scope of this work.
[6] Also termed “cacheable flows”.


switching/routing level. Figure 2-10 provides a high-level illustration of the solution, based on the CHARISMA generic vCache architecture. Initially, all traffic is forwarded through the deployed vCaches. The Element

Managers (EMs) of the instantiated vCaches inspect the cached content and access logs, and derive

information about the destination IPs for both cache hits and cache misses [7]. For instance, in the case of a

Squid [8] web cache, the native access logs report the Uniform Resource Locators (URLs) of the cached content

(which can be resolved to the corresponding IP addresses via DNS), as well as the destination IP addresses of

a cache miss, which correspond to the IP addresses of the content providers, when no

hierarchical/cooperative caching is in place. The collected information from all Element Management (EM)

instances across the network (e.g., the slice of a VNO) is periodically passed to the vCC, which consolidates

the received destination IP addresses into contiguous IP address ranges.
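The EM-side processing can be sketched as below, assuming Squid's default native access-log format (field positions vary with configuration, and the sample lines are fabricated): the result-code field distinguishes hits from misses, the hierarchy field carries the origin IP contacted on a direct miss, and Python's ipaddress module consolidates the collected addresses into contiguous ranges.

```python
import ipaddress

def parse_access_line(line):
    """Extract (result_code, origin_host) from a Squid native log line.
    Assumed field layout: time elapsed client code/status bytes method
    URL rfc931 hierarchy/host type."""
    fields = line.split()
    result_code = fields[3].split("/")[0]    # e.g. TCP_HIT, TCP_MISS
    origin_host = fields[8].split("/")[-1]   # origin IP on a direct miss
    return result_code, origin_host

def consolidate_ranges(ip_strings):
    """Collapse individual destination IPs into contiguous CIDR ranges."""
    nets = [ipaddress.ip_network(ip) for ip in ip_strings]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

# Fabricated sample lines in Squid's native format:
log = [
    "1468000000.1 12 10.0.0.5 TCP_MISS/200 5120 GET http://cdn.example/a - HIER_DIRECT/93.184.216.34 video/mp4",
    "1468000001.2 10 10.0.0.6 TCP_MISS/200 5120 GET http://cdn.example/b - HIER_DIRECT/93.184.216.35 video/mp4",
    "1468000002.3 1 10.0.0.5 TCP_HIT/200 5120 GET http://cdn.example/a - HIER_NONE/- video/mp4",
]
miss_ips = [h for code, h in map(parse_access_line, log) if code == "TCP_MISS"]
```

Here the two adjacent miss destinations collapse into the single range 93.184.216.34/31, which is the form the vCC would pass on to the SDN controller.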

The collected destination IP address ranges are subsequently communicated to the available SDN controller,

which is responsible for setting the corresponding rules in the underlying switching/forwarding infrastructure.

The SDN controller may be under the operational control of the infrastructure operator or (as shown in Figure

2-10) realized as a separate VNF so as to be under the direct control of the VNO [9]. For IP ranges related to

cached content, the SDN controller applies flow rules that instruct the underlying physical or virtual

switching/forwarding infrastructure to forward the respective flows (i.e., the flows targeting these ranges)

towards the vCache. As the identified IP ranges and corresponding rules may reach high volumes, VNOs may

rely on virtualized switches so as to support the scalable operation of this scheme as shown in Figure 2-10.

The whole procedure is subject to a continuous update cycle so as to identify changes in the classified IP

destinations and correspondingly adapt the established rules, e.g., when content delivered from a remote destination IP is no longer cacheable.
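One way the controller-side rule generation could look is sketched below (an illustrative assumption: dictionary-based rule structures modelled on OpenFlow 1.3 match/action fields, not any specific controller's API). Flows destined to cacheable ranges are steered to the vCache port, while a low-priority default rule lets everything else bypass the cache:

```python
def build_steering_rules(cacheable_ranges, vcache_port, provider_port):
    """Steer flows destined to cacheable IP ranges to the vCache port;
    everything else bypasses the cache via a low-priority default rule."""
    rules = [
        {
            "priority": 100,
            "match": {"eth_type": 0x0800, "ipv4_dst": cidr},  # IPv4, dst range
            "actions": [{"output": vcache_port}],
        }
        for cidr in cacheable_ranges
    ]
    rules.append({  # default: bypass the vCache entirely
        "priority": 1,
        "match": {},
        "actions": [{"output": provider_port}],
    })
    return rules

# Port numbers and ranges are illustrative:
rules = build_steering_rules(["93.184.216.34/31", "104.16.0.0/16"],
                             vcache_port=3, provider_port=1)
```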

By applying the identified forwarding rules, the VNO also enables the detailed monitoring of the

corresponding traffic: since OpenFlow v1.3, a packet counter can be associated with each rule, enabling the fine-grained assessment of traffic towards promising destinations, i.e., destinations that have been

associated with cacheable traffic. Such information can be subsequently used across a VNO slice for the

selection of network locations for the placement of vCaches. Namely, a VNO can identify the locations where

UEs tend to target providers of cacheable content, thus providing a hint of where (and also when) the

instantiation of a vCache could prove beneficial.
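Such a placement heuristic could be as simple as the following sketch (the counter layout mimics the byte_count field of OpenFlow flow statistics; location names and values are made up):

```python
def rank_vcache_locations(counters):
    """Rank candidate network locations by the volume of traffic matching
    their cacheable-destination rules. `counters` maps a location to the
    byte count reported for each of its installed flow rules."""
    totals = {loc: sum(per_rule.values()) for loc, per_rule in counters.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Illustrative per-location, per-rule byte counters:
counters = {
    "IMU-west": {"rule-100": 9_000_000, "rule-101": 4_000_000},
    "IMU-east": {"rule-100": 1_500_000},
}
```

Locations at the top of the ranking are those where UEs most heavily target providers of cacheable content, and hence where instantiating a vCache is most likely to pay off.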

[7] Note that within a vCache instance, a cache miss corresponds to cacheable traffic but denotes that the specific item was not cached at the time of the request.
[8] www.squid-cache.org/
[9] In the latter case, the VNO has complete control of the overall functionality, limiting the exposure of related monitoring information (traffic and content access) to the infrastructure operator. An alternative model is presented in Section 3.4, where the infrastructure operator manages the envisioned functionality.


Figure 2-10: CHARISMA solution for cacheable traffic identification and handling (managed by the VNO)

2.4. Security issues related to the CHARISMA content delivery system

CHARISMA intrinsically provides in-network content caching through the use of ICN-based content delivery

networks. Thus, content can be replicated throughout the network. Content can be cached in any arbitrary

location and digitally signed to guarantee its authenticity and integrity. The ICN paradigm comes with many

benefits, such as reduced delivery delay and network congestion thanks to content caching, which greatly

improves the scalability and efficiency of content delivery. It also provides a simpler device configuration and

security at the data level: the lack of source and destination addresses enforces privacy, and caching protects

the content provider from excessive load and more particularly from application-level DoS attacks [17].

Moreover, as the content is provided closer to the end user, traffic is decreased, which reduces the

information available to an attacker. While these advantages are particularly attractive for the CHARISMA

content delivery system, ICN-based solutions also introduce many open privacy and security challenges. For

instance, publicly available content is not encrypted and can be inspected by any intermediate router

between the user and the content provider. In addition, ICN suffers from a lack of centralization for

authentication, content access feedback and security, making it more difficult for network administrators to

improve the services they provide to end-users. Another concern with in-network caching is that it is hard to avoid

the propagation of illegal and fake content (content with valid names but invalid signatures). Signature

verification countermeasures have been proposed but they require verifying the content signature at each

intermediate router, which is not scalable due to the cost of cryptographic operations.

The security requirements considered by the CHARISMA system can be summarized as follows:

Content Privacy & Confidentiality

As shown during the Netflix Prize Contest, the video rental history of an individual can potentially reveal


sensitive personal information; thus, it is of paramount importance to enforce content confidentiality for

authorized users.

While traditional IP-based solutions such as secure tunneling (SSL, TLS) can still be viable, they tend to weaken caching capabilities, increasing network load and latency.

In CHARISMA, we envision confidentiality and privacy enforcement by means of an encryption-based

mechanism. Such a mechanism should include the content hash, the provider’s identity and a signature, to

enhance the security of the content delivery system.

A provider can encrypt content with a cryptographic key and disseminate it. The client can retrieve the

encrypted content from a vCache or from the content provider itself. Upon reception of the encrypted

content, the client can then request the decryption key from the content provider, which decides whether or

not the client is legitimate. If the client is authorized to access the content, the key is encrypted with the

identity of the client and then sent to the client. Upon reception, the client extracts the key and decrypts the content.
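The message flow above can be illustrated with a toy protocol sketch. The cipher below (XOR against a SHA-256 counter keystream) is deliberately simplistic and NOT secure; it only demonstrates who holds which key and where the access decision is made. A real deployment would use an authenticated encryption scheme and proper identity- or attribute-based key wrapping.

```python
import hashlib
import secrets

def _keystream(key, n):
    # Toy SHA-256 counter keystream -- for illustration only, NOT secure.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(data, key):
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

class Provider:
    def __init__(self, authorized_clients):
        self.content_key = secrets.token_bytes(32)
        self.authorized = set(authorized_clients)

    def publish(self, content):
        # The ciphertext can be replicated on any vCache: possession of
        # the ciphertext alone reveals nothing about the plaintext.
        return _xor(content, self.content_key)

    def request_key(self, client_id):
        # The access decision stays with the provider, not the caches.
        if client_id not in self.authorized:
            return None
        wrap = hashlib.sha256(client_id.encode()).digest()
        return _xor(self.content_key, wrap)  # key bound to client identity

def client_decrypt(client_id, wrapped_key, ciphertext):
    wrap = hashlib.sha256(client_id.encode()).digest()
    content_key = _xor(wrapped_key, wrap)
    return _xor(ciphertext, content_key)
```

Note that the caches never see the content key: only the (cacheable) ciphertext is distributed, while key delivery remains an end-to-end exchange between client and provider.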

Access Privilege Violation

An end-user can still retrieve popular content even if their access privilege to this content has been

revoked. The content origin server will not be aware of this access violation.

Adding an access privilege only requires granting a new user the authorization to retrieve the decryption key. However, to revoke a privilege, the content needs to be re-encrypted with a new key that is

then only made accessible to the set of users who remain authorized after the evolution of the policy.

To prevent access by revoked users, a security layer like ConfTrack-CCN [18] has recently been proposed that allows users to access an encrypted version of the content, forcing them to authenticate by directly contacting

the content producer for the decryption key. We believe that multi-layer encryption is a good approach that

should be further explored.

Content Access Trackability

In ICN, content is cached at intermediate routers that serve interests with objects regardless of the user

privilege. As the original content producer no longer has control over the data it pushed into the network, this raises new concerns, especially in terms of trackability. Content providers are increasingly interested in tracking every access to their content (e.g., YouTube, Netflix) to avoid copyright problems or legal issues.

Moreover, content providers do not offer the possibility to fetch a local copy of content.

Cache Monitoring Attacks

A malicious user can monitor the behavior of other users sharing the same local cache (e.g., vCache, MoBcache).

Implementing smart caching policies and forwarding rules can prevent this.

2.5. ICN-based content delivery in CHARISMA

With the ICN content distribution paradigm, the end user requests content from the network by sending

an interest message with the content name. The network delivers it from any node having a copy of the

content. This approach can seamlessly provide distributed in-network caching at any network level: each

intermediate node can make a local copy of the content that it is forwarding and serve the local copy to


any subsequent request of the same content, thus moving replicas of the most popular content towards the

nodes closer to the end user without the need of any application-layer support, and even without support

from the content producer. ICN is therefore a tool capable of giving the network provider several operational

knobs to unleash the potential of a Fixed Mobile Convergence (FMC) access network. In practice, the network

can dynamically choose the best repository, route, and interface from which to serve any content or even

part of content.

2.5.1. Latency benefit

Low latency can be achieved by the caching and nearest replica routing infrastructure envisioned by ICN

solutions. ICN places cache memories on the communication data path to improve service performance, especially by caching popular content at the network edge, which reduces retrieval latency. Every

ICN router manages a cache, storing previously requested objects in order to improve content delivery by

reducing the path length for popular content. If content is available locally in the cache, the router sends it

back directly to the requester; otherwise, it forwards the request to the next hop to find the nearest replica

in the network. In this way, in-network caching intrinsically mitigates the effects of network distance and traffic congestion. The design of in-network caching as a native building block of the network is a key reason why ICN is considered a promising 5G network technology.
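The forwarding logic described above — serve from the local cache on a hit, otherwise forward towards the nearest replica and cache on the way back — can be sketched as a toy node. This is illustrative only; a real CCN/NDN router also maintains a PIT and FIB, which are collapsed here into a single upstream callable:

```python
from collections import OrderedDict

class IcnNode:
    """Toy ICN node: a bounded, LRU-ordered content store in front of an
    upstream next hop (PIT/FIB machinery omitted for brevity)."""
    def __init__(self, upstream, cs_capacity=4):
        self.upstream = upstream        # callable: content name -> data
        self.cs = OrderedDict()         # content store, LRU-ordered
        self.capacity = cs_capacity
        self.hits = 0
        self.misses = 0

    def interest(self, name):
        if name in self.cs:             # serve from the local replica
            self.hits += 1
            self.cs.move_to_end(name)
            return self.cs[name]
        self.misses += 1
        data = self.upstream(name)      # forward towards the nearest replica
        self.cs[name] = data            # cache on the reverse (Data) path
        if len(self.cs) > self.capacity:
            self.cs.popitem(last=False) # evict the least recently used item
        return data
```

Chaining such nodes (each node's `upstream` being the next node towards the origin) reproduces the path-length reduction discussed above: once any node on the path holds a replica, subsequent Interests stop there.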

Through the design of latency-aware cache management mechanisms for ICN, service latency can be further improved, as described in [10][11]. The authors in [11] leverage locally measured latency, in a distributed way, to decide whether to store an object in the network cache. The per-flow distance to the bottleneck is taken into account, with results showing that the mean content delivery time can be reduced by up to 50%.

2.5.2. Mobility benefit

The ICN paradigm promises a wide set of qualitative and quantitative benefits, beyond the low latency and

traffic/load savings associated with caching. In the particular context of mobile networks, ICN promises the

simplification of mobility management. By shifting from the host-centric IP paradigm, ICN allows clients to

move across network locations without breaking existing communications sessions. In the case of the most

prominent ICN architecture, i.e., CCN/NDN, this is enabled by the fact that communication is inherently based

on a pub/sub communication mode, where a Data packet is delivered to a network location by consuming

trails established by a corresponding Interest packet (that serves the purpose of a request or a subscription).

As a result, communication sessions are not defined by the network locations of the involved clients, but

rather the trails (soft state) established by their request packets. Simply re-issuing pending requests enables

a quick diversion of data packets to the new location of a moving client. Such a capability obviously promises

lower disruption for mobile communications. It must be noted, however, that the same solution does not

apply in the case of a mobile content provider, necessitating a location management scheme as in traditional

mobile networks.


3. Interface design of CHARISMA cache system

3.1. Overview and requirements of interface design

In this section, we describe the interface designs of the different modules of the CHARISMA content delivery

system. As shown in Figure 2-2, there are three main components: Cache Nodes (including vCache and

physical cache), virtualized Cache Controller (vCC), EM, and cache components in the CHARISMA control,

management & orchestration (CMO). The interface designs of particular interest include:

- Communication between cache nodes;

- Interfaces between cache nodes and vCC;

- Interfaces between VNFs (vCache and vCC) and VNF manager.

The interface designs between cache nodes mainly include two levels: communications between cache nodes

allocated to a specific VNO, and communications of cache nodes across different VNOs. Inside a VNO, through

collaborative caching solutions between the cache nodes distributed in the different aggregation levels (CALs)

of the network, the VNO is able to configure and manage its own cache nodes in order to optimize the service latency and bandwidth consumption and improve the QoS of its end users. The CHARISMA content

delivery system further enables cooperative caching between VNOs, so as to achieve the potential benefits of co-location. Through a peering configuration of cache nodes, based on an agreement between the involved VNOs, caches can be efficiently shared between VNOs, which can thereby further improve their service provision.

The interfaces between cache nodes and the vCC are designed to allow the cache controller to configure and manage the involved cache nodes. Each vCC is assigned to a specific VNO, which is able to manage the

associated cache nodes, without interference from the CHARISMA infrastructure provider. Using the vCC, the

VNO can flexibly configure the cache algorithms, cache sizes, cached content types, etc., according to its requirements. Furthermore, exchanging information between vCCs, which have an overview of the cached content, enables cache sharing between VNOs, as well as the exchange of monitoring data of the involved cache nodes.

In the CHARISMA content delivery system, the vCache and the vCC are implemented as VNFs by using

virtualization of the caches and the cache controller. The VNF manager integration interface is designed to integrate the VNF Manager with the VNFs/EMs. For example, the set of VNF Manager interfaces captures the operations that the VNF Manager needs to complete for VNF lifecycle management. It also allows the VNF

Manager to update the vCC and vCache status.

3.2. Communication between cache nodes

As mentioned in Section 0, multi-tenancy fosters the emergence of new relationships between VNOs, focusing

on the potential benefits of co-location. In the particular case of caching, this refers to the opportunity to support cooperative caching across VNO administrative borders, in the form of cache

peering. Figure 3-1 presents an example setup of the envisioned scenario, for the simple case of two tenants

VNO 1 and VNO 2, which hold vCache instances on the same IMU. Content requests from UEs subscribed to

VNO 1 are delivered to VNO 1’s vCache instance at a certain IMU. Upon a cache miss, the request is re-

directed to the co-located VNO 2’s vCache instance. As the two vCaches are located within the same IMU,

the latency for the redirection of the content request is expected to remain at a low level, as the whole

operation is confined within the IMU.


Figure 3-1: Example operation of vCache peering in multi-tenancy scenarios. A cache miss in the vCache

of VNO 1 results in the redirection of the content request to the vCache instance of the co-located VNO 2.

The technical realization of the vCache peering scenario builds upon the appropriate configuration of the

vCache instances of the involved VNOs. Well-established cache implementations, such as Squid, support such

configurations in the form of sibling caches [10]. The element manager (EM) component of the CHARISMA

architecture is responsible for carrying out this configuration, as it takes place at the application level and is

specific to the particular VNF functionality.
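With Squid, the peering configuration that the EM would apply could look like the fragment below. The hostname and ports are illustrative assumptions; `cache_peer ... sibling ...` is Squid's standard directive for declaring a sibling peer:

```
# squid.conf fragment on VNO 1's vCache (hostname/ports are assumptions).
# Declare VNO 2's co-located vCache as a sibling: it is queried upon a
# local miss, but only answers from its own store (see footnote 10).
# proxy-only: do not store a local copy of objects fetched via the peer.
cache_peer vno2-vcache.imu.local sibling 3128 3130 proxy-only
```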

Such a configuration, however, calls first for the establishment of a peering agreement between the involved

VNOs. Cache peering agreements are expected to take place in a fashion similar to peering agreements for

inter-domain traffic exchange, being established by out-of-band means. Service Level Agreements (SLAs) are

expected to document the exact terms of the agreement, including the load accepted by the peering caches,

expressed in terms of request/response rates, as well as the expected response times. Consequently,

appropriate monitoring of exchanged request and response (data) rates is performed by each vCache, with

the derived information being collected by the corresponding EM.

[10] Upon a cache miss, a request can be redirected to another sibling or a parent cache. A parent cache will check its own parents/siblings if it does not have the requested item cached. A sibling cache will only check its local state, returning either the requested item itself or a negative response.


3.3. Interfaces between cache nodes and cache controller

Cache control and management is based on the Network Configuration Protocol (NETCONF), which provides mechanisms to install, update, and delete the configuration of cache nodes distributed

in the network. The NETCONF server runs on cache nodes and the NETCONF client is installed in the cache

controller. Usually, the cache controller establishes a NETCONF session with cache nodes to obtain and

manipulate the configuration data.

The interface implementation should be done in both cache nodes and the cache controller. In the cache

controller part (NETCONF client), a YANG project needs to be created that can generate Java APIs from YANG model files. In cache nodes, new TransAPIs need to be designed and developed to support new configuration operations. TransAPI is the interface through which the NETCONF server changes the settings of

various devices (e.g., network interfaces), the system (e.g., time zone), and the configuration of software

programs (e.g., Squid Proxy) on the machine. The possible configurations on a cache node that can be

managed by the cache controller include:

- HTTP port of the cache node

- Cache memory size

- Cache memory replacement policy, such as Least Recently Used (LRU) or Least Frequently Used (LFU)

- Cache storage replacement policy

- Maximum/minimum cached object size

- Maximum number of open disk file handles

- High/low cache swap watermarks
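The knobs listed above could be captured in a YANG model along the following lines. This is an illustrative sketch only, not the project's actual module; the module name, namespace, leaf names and types are all assumptions:

```
module charisma-cache-node {
  namespace "urn:charisma:cache-node";   // illustrative namespace
  prefix ccn;

  container cache-config {
    leaf http-port { type uint16; }
    leaf mem-cache-size { type uint32; units "megabytes"; }
    leaf mem-replacement-policy {
      type enumeration { enum lru; enum lfu; }
    }
    leaf disk-replacement-policy {
      type enumeration { enum lru; enum lfu; }
    }
    leaf max-object-size { type uint32; units "kilobytes"; }
    leaf min-object-size { type uint32; units "kilobytes"; }
    leaf max-open-disk-fds { type uint32; }
    leaf cache-swap-high { type uint8; units "percent"; }
    leaf cache-swap-low { type uint8; units "percent"; }
  }
}
```

From such a model, the NETCONF client in the cache controller can generate its APIs, while the TransAPI on the cache node maps each leaf onto the corresponding Squid directive.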

3.4. Interfaces to VNF manager

Section 2.3 presented an approach to the identification of traffic amenable to caching, with the purpose of reducing the latencies associated with cache misses. The presented model aims at handing over full control of the overall functionality to VNOs, without requiring them to expose sensitive information related to the content access patterns of their customers to the infrastructure provider. In this respect, the SDN

controller and the vSwitches that enable the traffic monitoring, management and control functionality of the proposed scheme are realised as VNFs. In an alternative scenario, the vCache is operated and managed by

the infrastructure operator itself, in a non-multi-tenancy context, with the purpose of enhancing the

performance of its own network. In this model, the design of the overall scheme can be simplified by placing

the corresponding functionality onto the existing components of the CMO architecture.

Figure 3-2 illustrates an example solution for this alternative model. In this case, the design is substantially

simplified, as the already deployed SDN components of the architecture are utilized. At the same time, the

VNFM takes a central role in coordinating the overall scheme. This coordination is enabled by the delivery of

content access statistics from the EMs of the instantiated vCaches over the Ve-Vnfm interface; this

information describes the IP addresses where the cached content comes from. The VNFM is then responsible

for configuring the cacheable-flow rules on the underlying (v)switching components, by interacting with the

SDN controller within the VIM. This interaction takes place over the Vi-Vnfm interface.


Figure 3-2: CHARISMA solution for cacheable traffic identification and handling (managed by the

infrastructure operator)

At the moment of writing this D3.3 deliverable, we have not yet decided whether the CHARISMA CMO will

make use of a single, VNF-generic VNFM, or implement a different VNFM for the management of each VNF,

which would require the implementation of one VNFM for the cache VNF, and another VNFM for the cache

controller VNF. Taking into account that CHARISMA implements four different VNFs, for efficiency reasons

the implementation or the adoption of a VNF-generic VNFM seems more appropriate.

Currently there is no standard interface between the VNFM and the VNFs (Ve-Vnfm), and the VNF developers

have the freedom to choose the technology that best suits their needs. However, independently of the

technology chosen, a VNF management interface should at least be defined and implemented to

support the VNF lifecycle. All generic operations expected for the lifecycle management of a VNF should be

provided through this interface. These operations include:

- Instantiate/Terminate VNF: the VNFM shall use this interface to instantiate a new VNF or terminate one that has already been instantiated.

- Retrieve VNF instance run-time configuration: the VNFM shall use this interface to retrieve the VNF instance run-time information (including performance metrics).

- Configure a VNF: the VNFM shall use this interface to (re-)configure a VNF instance.

- Manage VNF state: the VNFM shall use this interface to collect/request the state (or a change of state) of a given VNF (e.g. start, stop, etc.).

- Scale VNF: the VNFM shall use this interface to request the appropriate scaling (in/out/up/down) of the VNF.
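These lifecycle operations can be summarised as an abstract interface. This is a sketch only — as noted above, no standard Ve-Vnfm interface exists, so every name and signature below is an assumption rather than an agreed API:

```python
from abc import ABC, abstractmethod

class VnfLifecycleInterface(ABC):
    """Hypothetical Ve-Vnfm-facing lifecycle interface (illustrative)."""

    @abstractmethod
    def instantiate(self, vnfd_id, params):
        """Instantiate a new VNF from a descriptor; returns an instance id."""

    @abstractmethod
    def terminate(self, vnf_instance_id):
        """Terminate an already instantiated VNF."""

    @abstractmethod
    def get_runtime_info(self, vnf_instance_id):
        """Retrieve run-time information, including performance metrics."""

    @abstractmethod
    def configure(self, vnf_instance_id, config):
        """(Re-)configure a VNF instance."""

    @abstractmethod
    def set_state(self, vnf_instance_id, state):
        """Request a state change of a given VNF (e.g. start, stop)."""

    @abstractmethod
    def scale(self, vnf_instance_id, direction):
        """Request scaling of the VNF: 'in', 'out', 'up' or 'down'."""
```

Both the cache-node and cache-controller VNFs (or a single VNF-generic VNFM) would then provide concrete implementations of this interface over whatever transport technology is eventually chosen.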


4. Overview of primitive implementation

4.1. Cache Nodes and Cache Controller

During the first year of the CHARISMA project, a primitive implementation and a first integration with other

modules has been undertaken. We have implemented the basic functions of the cache nodes (in a MoBcache

as shown in Figure 4-1, and as a VM) and the cache controller.

The MoBcache is a mobile router-server prototype enabling content caching and prefetching functionalities.

It has a built-in battery, and it configures itself automatically within a tree radio network. It has the following

interfaces:

- 1 GbE Ethernet port

- 1 Wi-Fi 5 GHz interface

- 1 Wi-Fi 2.4 GHz interface

- 1 optional LTE interface (to be added during the CHARISMA project)

Figure 4-1: The MoBcache prototype

A set of programs has been implemented and integrated into the software, including Squid, the Prefetcher, the Netopeer server, iptables rules, etc., as described in Section 2.2.1. The MoBcache, specially designed for mobile scenarios in the CHARISMA system to maintain service continuity while a user moves and performs handovers between networks, is implemented on top of OpenWrt and deployed at CAL0. Another version of a Cache Node, based on Ubuntu, has also been implemented and can be flexibly

deployed at different CAL levels in the CHARISMA infrastructure.


The cache controller (whose implementation is based on Ubuntu) is deployed at CAL3. The CC is designed to manage and configure all the CNs distributed in the network over the NETCONF protocol, and is responsible for communicating with the NETCONF servers and external modules to configure the CNs and send control messages to them.

Figure 4-2: Diagram of the logical architecture

In the first integration, shown in Figure 4-2 and demonstrated at EuCNC 2016 in Athens (and described in greater detail in the parallel deliverable D4.1), we integrated the MoBcache prototype (L0 Cache) at CAL0 and an Ubuntu-based cache node (L1 Cache) at CAL2. The L0 Cache connects to the network by Ethernet and provides Wi-Fi 802.11 access to user equipment. The L1 Cache connects to the TrustNode router by a 1 GbE link. The user equipment plays an HTTP Live Streaming (HLS) video from the content server connected to the TrustNode. The CC is deployed at CAL3.

Table 2: Experimental results from the first integration

                             Without caches (s)   With caches (s)   Improvement (s)
HLS on UE (3.4 Mbps video)         1.503               1.04              0.463
Wget by UE                         1.502               0.945             0.557

We ran two tests, each with and without the cache: 1) playing a 3.4 Mbps video with VLC, and 2) retrieving the first chunk with wget (a tool that fetches content over the HTTP, HTTPS and FTP protocols). The results in Table 2 present the playback startup delay, that is, the latency between sending the user request and starting to play the video on the screen (i.e. fetching the first video chunk). The analysis is based on monitoring the traffic with Wireshark. When VLC plays the HLS video on the user equipment without caches, the playback startup delay is 1.503 seconds; with caches near to the user, it falls to 1.04 seconds. The results clearly show the latency benefit of deploying caches near to the end users: the startup delay was reduced by 0.463 seconds and 0.557 seconds for the two tests, respectively.
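The improvement column of Table 2 is simple arithmetic on the measured delays; the small helper below reproduces it and adds the relative reduction, which the table does not show.

```python
# Reproduce the Table 2 arithmetic: startup-delay improvement when a
# cache sits near the user, in absolute seconds and as a percentage.
def improvement(without_cache_s, with_cache_s):
    """Return (absolute reduction in seconds, relative reduction in %)."""
    delta = without_cache_s - with_cache_s
    return round(delta, 3), round(100.0 * delta / without_cache_s, 1)

hls_abs, hls_rel = improvement(1.503, 1.04)    # -> (0.463, 30.8)
wget_abs, wget_rel = improvement(1.502, 0.945) # -> (0.557, 37.1)
```

In relative terms the nearby cache thus cuts the startup delay by roughly 31% for HLS playback and 37% for the wget chunk retrieval.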

Future work on the CHARISMA caching system will virtualize the cache nodes and the cache controller so that they are managed by the CHARISMA CMO system. vCaches and the vCC can then be flexibly deployed at different CAL levels, according to the requirements of the VNOs and the network status.


4.2. VNF Descriptor

In this section we provide an example VNF Descriptor (VNFD) implemented in JSON. The VNFD presented in the listing below follows the structure defined in Section 2.2.4. The definition of the cache node and cache controller VNFDs is currently work in progress; their final versions will, however, follow a structure similar to the one provided below.

{

"provider": "NCSRD",

"id": 1,

"type": "cache node",

"description": "The cache node (CN) is responsible for caching the appropriate HTTP

content and delivering it to end users that have requested it",

"name": "Cache_Name",

"created_at": "2016-01-29T04:44:22Z",

"modified_at": "2016-01-29T04:44:22Z",

"vdu": [

#1 vdu

{

# Image repository download link

"vm_image": "http://192.168.1.2/images/cache0.img",

"vm_image_md5": "2345094b3b92832abb35162a61f6aa03",

"vm_image_format": "raw",

"id": "vdu0",

"alias": "cache0",

# Resource requirements

"resource_requirements": {

"hypervisor_parameters": {

"version": "10002|12001|2.6.32-358.el6.x86_64",

"type": "QEMU-KVM"

},

"network_interface": {

"bandwidth_unit": "",

"bandwidth": "",

"card_capabilities": {

"SR-IOV": true,

"mirroring": false

}

},

"storage": {

"size_unit": "GB",

"persistence": false,

"size": 80

},

"memory":{

"size_unit": "GB",

"persistence": false,

"size": 8,

"additional_parameters": {

"large_pages_required": false,

"numa_allocation_policy": ""

}

},


"cpu":{

"vcpus": 8,

"hardware_acceleration": "AES-NI"

}

},

"connection_points": [

{

"vlink_ref": "management_net",

"id": "mngt0"

},

{

"vlink_ref": "internal_net",

"id": "int0"

}

],

# Monitoring Parameters

"monitoring_parameters": {

"generic": ["cpu", "memory_total", "memory_free", "memory_used", "disk_io",

"disk_free", "bps_in", "bps_out", "packets_in", "packets_out", "errors", "dropped"]

"specific": ["number_of_http_requests_received", "cache_hits_percentage_for_5

minutes", "cache_disk_utilization" , "cache_memory_utilization",

"number_of_users_accessing_cache"]

},

# Number of vms

"scale_in_out": {

"minimum": 1,

"maximum": 4

}

},

#2 vdu

{

"vm_image": "http://192.168.1.2/images/cache1.img",

"vm_image_md5": "2345094b3b92832abb35162a61f6aa03",

"vm_image_format": "raw",

"id": "vdu1",

"alias": "cache1",

# Resource requirements

"resource_requirements": {

"hypervisor_parameters": {

"version": "10002|12001|2.6.32-358.el6.x86_64",

"type": "QEMU-KVM"

},

"network_interface": {

"bandwidth_unit": "",

"bandwidth": "",

"card_capabilities": {

"SR-IOV": true,

"mirroring": false


}

},

"storage": {

"size_unit": "GB",

"persistence": false,

"size": 80

},

"memory":{

"size_unit": "GB",

"persistence": false,

"size": 8,

"additional_parameters": {

"large_pages_required": false,

"numa_allocation_policy": ""

}

},

"cpu":{

"vcpus": 8,

"hardware_acceleration": "AES-NI", # Not sure about this

},

"vswitch_capabilities": {

"type": "ovs",

"version": "2.0",

"overlay_tunnel": "GRE"

}

},

"connection_points": [

{

"vlink_ref": "monitor_net",

"id": "mon0"

},

{

"vlink_ref": "internal_net",

"id": "int0"

}

],

# Monitoring Parameters

"monitoring_parameters": {

"generic": ["cpu", "memory_total", "memory_free", "memory_used", "disk_io",

"disk_free", "bps_in", "bps_out", "packets_in", "packets_out", "errors", "dropped"]

"specific": ["number_of_flows", "number_of_flows_classified", "dropped_packets"]

},

# Number of virtual machines

"scale_in_out": {

"minimum": 1,

"maximum": 2

}

}

],

"vnf_lifecycle_events": [

{

"authentication_username": "ubuntu",

"driver": "ssh",


"authentication_type": "PubKeyAuthentication",

"authentication": "<<SSH KEY>>",

"events": {

"start": {

"command": "service xxx start",

"template_file": "<< lifecycle event template >>",

"template_file_format": "JSON"

},

"stop": {

"command": "service xxx stop",

"template_file": "<< lifecycle event template >>",

"template_file_format": "JSON"

},

"restart": {

}

}

}

]

}
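Before a descriptor of this shape is on-boarded, it can be sanity-checked mechanically. The sketch below verifies a few invariants visible in the listing above (required keys, MD5 format, scale-in/out bounds); the key sets mirror this example only and are not a normative VNFD schema.

```python
# Minimal sanity check for a VNFD of the shape listed above.
# The required-key sets are derived from this example, not from a schema.
import re

REQUIRED_TOP = {"provider", "id", "type", "name", "vdu", "vnf_lifecycle_events"}
REQUIRED_VDU = {"vm_image", "vm_image_md5", "vm_image_format", "id",
                "resource_requirements", "connection_points", "scale_in_out"}

def validate_vnfd(vnfd):
    """Return a list of human-readable problems; empty means the VNFD looks sane."""
    problems = ["missing top-level key: %s" % k
                for k in sorted(REQUIRED_TOP - vnfd.keys())]
    for vdu in vnfd.get("vdu", []):
        vid = vdu.get("id", "?")
        problems += ["%s: missing key %s" % (vid, k)
                     for k in sorted(REQUIRED_VDU - vdu.keys())]
        # vm_image_md5 must be a 32-character lowercase hex digest
        md5 = vdu.get("vm_image_md5", "")
        if not re.fullmatch(r"[0-9a-f]{32}", md5):
            problems.append("%s: malformed MD5 '%s'" % (vid, md5))
        # scale_in_out bounds must be consistent
        scale = vdu.get("scale_in_out", {})
        if scale.get("minimum", 1) > scale.get("maximum", 1):
            problems.append("%s: scale minimum exceeds maximum" % vid)
    return problems
```

A check of this kind catches the most common authoring mistakes (a missing key, a truncated image digest, inverted scaling bounds) before the orchestrator attempts to instantiate the VNF.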


5. Conclusions

In this deliverable D3.3, we have defined and specified the initial CHARISMA content caching solutions and traffic handling methods arising from Task 3.4 "Content Caching and Traffic Handling at the RAN node" of work package WP3 during the first year of the CHARISMA project. The main objectives of the CHARISMA content caching system are to reduce service latency by deploying caches near to the end users, and to provide open access services through virtualization of the caches and the cache controller. We have also discussed the security issues related to caching, with the initial designs of the corresponding security solutions to be undertaken in T3.4 during the second year of the project.

The CHARISMA caching system provides two kinds of caches: JCP-Connect's MoBcache system, an advanced hardware design that provides seamless service in a mobile 5G scenario; and SDN-based virtualized caches that can be deployed dynamically. The CHARISMA caching system provides open access and multi-tenancy services by allocating virtualized caches and a virtualized cache controller to a Virtual Network Operator (VNO), depending on the VNO's requirements and the network status.

In order to optimize service latency through the use of caching solutions, this deliverable has also further analysed the operation of such caches, e.g. by identifying appropriate traffic cacheability. By identifying the UE flows amenable to caching, selective traffic offloading can be deployed to remove unnecessary caching operations and therefore reduce the service access latency. Each of the different Converged Aggregation Levels (CALs) of the CHARISMA architecture has also been identified as a distributed caching location, so that the MoBcache system can provide a virtualised and intelligent seamless service in a future 5G scenario requiring mobility.

The CHARISMA caching system is managed by the CHARISMA control, management and orchestration (CMO) system, such that the virtualized caches and the virtualized cache controller are dynamically created, deleted and reconfigured by the VNF manager and orchestrated by the service orchestrator. Furthermore, the CHARISMA caching system allows VNOs to autonomously configure, deploy and manage their own allocated virtualized caches through the virtualized cache controller, without intervention from the complete CHARISMA platform.

The first implementation, and its integration with the other low-latency technologies in the associated intermediate demonstrator, has also been performed and is reported in the parallel deliverable D4.1. Future development work on the MoBcache in the second year of the CHARISMA project will be directed towards the design and implementation of the virtualization of the caches and the cache controller, and towards full interfacing with the CHARISMA CMO plane. In addition, the integration of the CHARISMA content caching system into the project pilots at the end of the project will also continue to be pursued.




Acronyms

AP Access Point

API Application Programming Interface

BRAS Broadband Remote Access Server

CAL Converged Aggregation Level

CC Cache Controller

CDN Content Delivery Network

CDNi Interconnection of Content Distribution Networks

CMO CHARISMA Management & Orchestration

CNs Cache Nodes

COTS Commercial off-the-shelf

CP Cache Proxy

DB Database

DPI Deep Packet Inspection

DSL Digital Subscriber Line

DSLAM Digital Subscriber Line Access Multiplexer

EM Element Manager

FMC Fixed Mobile Convergence

FTTB Fiber to the Building

FTTH Fiber to the Home

HTTP Hypertext Transfer Protocol

ICN Information Centric Networks

IP Internet Protocol

JSON JavaScript Object Notation

LFU Least Frequently Used

LMP Local caching Management Primitive

LRU Least Recently Used

LTE Long Term Evolution

NCS NetConf Server

NAT Network Address Translation

NETCONF Network Configuration Protocol

NFV Network Functions Virtualisation

OTT Over the Top


PFP PreFetch Proxy

QoS Quality of Service

QoE Quality of Experience

RPC Remote Procedure Call

RTT Round-Trip Time

SDN Software Defined Networking

Service M&A Service Monitor and Analytics

SGW Serving Gateway

SLA Service-Level Agreement

SP Service Provider

UE User Equipment

vCache virtual Cache

vCC virtual Cache Controller

VDU virtual Deployment Unit

vRouter virtual Router

vSwitch virtual Switch

VoD Video on Demand

VM Virtual Machine

VNF Virtual Network Function

VNFD VNF Descriptor

VNO Virtual Network Operator

XML Extensible Markup Language
