Contents
• Theory
• HW
• Connectivity
• Security
• Synchronization
• QoS
• Demo Case
• Dimensioning
PRAN: Iub over IP/Ethernet in WCDMA RAN P6 is
supported in all IP-transport-enabled RBSs. Any RBS equipped with a CBU can support IP transport. RNC hardware releases prior to R4 do not support IP transport.
BSS 07B, WCDMA RAN P6
Challenges to Mobile Operators
The introduction of IP/Ethernet in GSM/WCDMA RAN is driven today by a number of challenges faced by mobile operators.
Introduction of mobile broadband applications (such as Mobile TV): increasingly large volumes of data traffic create a new scenario in which operating costs and revenues are decoupled. Mobile operators cannot charge as much per bit for data services as for voice services, so there is a need to reduce the transmission cost per bit. Ethernet-based RAN backhaul networks may enable mobile operators to achieve this goal.
Ethernet is expected to bring lower OPEX and CAPEX for operators. In terms of CAPEX, Ethernet equipment may be cheaper than TDM/ATM equipment (also valid for interface cards). In terms of OPEX, Gigabit Ethernet and Fast Ethernet interfaces can be remotely managed, so they can be dynamically adjusted to meet the operator's requirements (such as additional capacity).
Challenges to Mobile Operators (cont.)
Expand coverage: new indoor RBS solutions can be deployed without major investments. FTTx and DSL deployments will provide widely available access to Ethernet-based MAN and WAN networks.
Fixed-mobile convergence: many operators are merging fixed and mobile transport networks into one common access transport network, typically based on Ethernet.
[Figure: IP in combined GSM/WCDMA RAN networks; the GSM access network (RBSs connected over Abis to BSCs) and the WCDMA access network (RBSs connected over Iub to RNCs, with Iur between RNCs) share a common 2G & 3G transport network towards the core network]
Changes ATM → IP
• AAL2 QoS classes → DiffServ Code Point or Ethernet priority
• ATM overhead → IP overhead
• Node Synch on AAL0 → Node Synch over FACH (used in both the ATM and IP cases)
• Network Synch extracted from PDH/SDH → NTP/UDP/IP (IPv4)
QoS separation on IP and/or Ethernet level is supported by Ericsson nodes. IP DiffServ Code Points are operator configurable and should be configured according to the priority order of the IP/Ethernet network. In the IP case there is no QoS differentiation in the default configuration (DiffServ Code Point 0 and Ethernet priority 0 for all traffic classes).
Generic Iub Interface Protocol Reference Model (Ericsson)
Radio Network Layer:
• Radio Network Control Plane: NBAP
• Radio Network User Plane: DCH, HS-DSCH, E-DCH, FACH, PCH and RACH frame protocols (FPs), plus Node Synch (carried over FACH)
Transport Network Layer, ATM option:
• NBAP over SAAL-UNI/AAL5; ALCAP (Q.2630, Q.2150.2) in the Transport Network Control Plane over SAAL-UNI/AAL5
• User plane over AAL2; Node Synch over AAL0
• Physical layer: E1, J1, STM-1 etc.
Transport Network Layer, IP option:
• NBAP over SCTP/IP; user plane over UDP/IP
• Network Synch via NTP
• Physical layer: Ethernet
[Figure: RAN connectivity example; RNC 1 and RNC 2 (RNS 1 and RNS 2) connect via ET-MFX boards and GE links through XLAN switches towards RBS sites 1-3 (RBS 1, RBS 2, RBS 31, RBS 32, each with a GE interface); a COMINF/OSS-RC primary site attaches via its own XLAN switch; Iub and Iur run between RNC and RBS, Iu towards the core]
Changes ATM → IP in P5 (cont.)
ATM side:
• User plane (AAL2): class A: PCH, RACH, FACH, DCH AMR/CS; class B: DCH PS; class C: HSDPA
• ALCAP (SAAL-UNI/AAL5)
• NBAP-Common (SAAL-UNI/AAL5)
• NBAP-Dedicated (SAAL-UNI/AAL5)
IP side, traffic classes:
• Network sync
• CCH + Node sync
• GBR (CS and PS)
• NBAP
• DCH BE
• Interactive 1 (O&M)
• Interactive 2
• Interactive 3
• Background
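The DiffServ Code Points for these traffic classes are operator configurable (the product default is DSCP 0 for all classes). A minimal sketch of such a mapping, where the class names follow the list above but every DSCP value is a hypothetical example, not a product default:

```python
# Hypothetical DSCP plan for the Iub traffic classes listed above.
# Values are illustrative only; in the product all classes default to DSCP 0
# and the operator must align them with the IP/Ethernet network's priority order.
DSCP_MAP = {
    "network_sync": 46,      # clock recovery: most delay-sensitive
    "cch_node_sync": 46,
    "gbr_cs_ps": 34,         # guaranteed-bit-rate bearers (CS and PS)
    "nbap": 26,              # Iub control plane signalling
    "dch_be": 18,
    "interactive_1_oam": 10,
    "interactive_2": 10,
    "interactive_3": 10,
    "background": 0,
}

def dscp_for(traffic_class: str) -> int:
    """Return the configured DSCP for a class, defaulting to best effort (0)."""
    return DSCP_MAP.get(traffic_class, 0)
```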
Carrier Network -- L2 VPN
Ethernet has become the predominant technology for Local Area Network (LAN) connectivity and is gaining acceptance as a transport technology, specifically in Metropolitan and Wide Area Networks (MAN and WAN, respectively).
Layer 2 VPNs provide point-to-point tunnels (pseudowires) between two provider edge (PE) nodes across a packet-switched network. Customer Ethernet frames are sent transparently across the transport network, so each frame is delivered unmodified at the other edge of the tunnel.
L2 VPNs are gaining momentum among service providers and are a clear option for IP RAN transport, since Ethernet is used as the link-layer technology in Ericsson's RAN nodes (STN devices, BSC LAN switches, WCDMA RBS, RNC). Carrier Ethernet networks already provide this type of VPN service.
Carrier Network -- L3 VPN
L3 VPNs are currently the most widespread solution for VPN connectivity. BGP/MPLS networks have become the de-facto standard for this type of service.
They define a mechanism by which service providers can use their IP/MPLS backbones to provide VPN services to their customers. A Layer 3 VPN is a set of sites that share common routing information and whose connectivity is controlled by a collection of policies. The sites that make up a Layer 3 VPN are connected over a provider's existing public backbone.
L3 VPNs are also known as BGP/MPLS VPNs because BGP is used to distribute VPN routing information across the provider's backbone and MPLS is used to forward VPN traffic across the backbone to remote VPN sites.
xDSL
xDSL is a family of technologies that provide digital data transmission over the wires of a local telephone network. xDSL networks are very common today and have been deployed by many fixed operators, typically as ADSL. Provided that the required characteristics are fulfilled (see section 9), xDSL technologies can be used in IP RAN backhaul networks. However, neither ADSL nor ADSL2+ offers the capacity required for HSPA bearers (14.4 Mbps DL and 1.44 Mbps UL), so they are not adequate for this purpose. VDSL2 can offer the capacity to transport HSPA bearers.
New RBSs in P5: Macro Site, Micro Site, Main-Remote
RBS 3216/3116 Macro Base Station, main characteristics:
• Macro coverage and extended macro coverage
• Small size with compact design
• Outdoor or indoor cabinet
• Up to 3 sectors in one cabinet
• Multiple power output options with up to 60 W per cell carrier
• Up to 768/768 channel elements in uplink/downlink
• HSDPA and Enhanced Uplink support
• Support for remote radio units (RRU)
RBS 3518 Main-Remote Base Station:
• Capacity: up to 512/768 channel elements UL/DL; HSDPA/Enhanced Uplink prepared
• 6x1 and 3x2 configurations; dual-band support
• Transmission options: E1/J1/T1, STM-1, E3/T3
• Wall, pole or floor mounting
• Up to 15 km optical fibre length
RBS 3308 Micro Radio:
• 1x1 to 1x2 per cabinet
• Channel elements: 256/256 CE UL/DL
• Supports RX diversity, HSDPA and Enhanced Uplink
• Transport: 8 x E1/T1/J1, or 2 x E3/T3, or 2 x STM-1, or 4 x E1/T1/J1 + 2 x E3/T3, or 4 x E1/T1/J1 + 2 x STM-1
• Cascading of up to 3 RBS 3308
• IP over Ethernet (prepared)
• Other: TMA/ASC/RET support, site LAN, external alarms
RBS 3206
ET-M4 (for RNC & NodeB, STM-1)
The unit is a line terminal board with two STM-1/OC-3c interfaces for ATM transport. ET-M4 is an ATM Exchange Terminal board with two unchannelized STM-1/OC-3c ports.
Type / max value:
• Number of ports: 2
• Number of VPs: 48
• Number of VCs: 1800 (of which a maximum of 750 can be of the UBR+ service category)
• Number of AAL2 paths: 128
• Number of F4 PM flows (see MO VpcTp for details): 128 per board
ET-MC1 (for NodeB, E1)
The unit is a line terminal board with 8 E1/DS1/J1 interfaces for ATM and TDM transport.
Type / max value:
• Number of E1/DS1/J1 ports: 8
• Number of IMA groups: 4 per board
• Number of VPs: 2 per E1/DS1 port; 3 per IMA group (assuming at least two links in the group)
• Number of VCs: 28 per E1/DS1 port
• Number of AAL2 paths: 16 per board
• Number of F4/F5 PM flows: 1 per E1/DS1 port
• Number of Aal1TpVccTp MOs: 4 per E1/DS1 port
• Number of DS0 bundles: 248 per board if used for TDM switching (attribute tdmMode = enabled); 4 per E1/DS1 port if used for circuit emulation (that is, connected to an Aal1TpVccTp MO); 1 per E1/DS1 port if used for fractional ATM (that is, connected to an AtmPort MO)
GPB (for RNC), General Purpose Processor Board
The unit functions as a main processor in a processor cluster with various tasks, such as SS7 signalling and O&M termination. The unit is equipped with Ethernet and serial access for management purposes. The flash disk on the board stores the software for all units in the node. The board occupies one slot (15 mm).
TUB (for RNC), Timing Unit Board
TUB is a timing unit with dedicated synchronization reference input ports that accept a 1544 kHz, 2048 kHz or 10 MHz reference. The unit senses the frequency of the dedicated references and automatically adapts to it.
TUB2 can additionally take a GPS antenna providing a 1PPS signal as synchronization reference; TUB1 does not support GPS synchronization references.
The unit has a system clock and a radio clock that filter the selected reference. The system clock is distributed to all plug-in units in the node. The radio clock is used only in RBS nodes.
ET-PSW
The ET-PSW board is a Third Party Product (3PP) from Axerra Networks; it implements the conversion between ATM and IP/Ethernet.
The ET-PSW board has separate software and does not use the RBS software. Consequently, the board is not visible in the normal RBS or RAN management interfaces, such as OSS-RC, and no alarms are issued from the board.
ET-PSW must be used in combination with CBU or ET-MC1 and cannot be used independently.
With ET-PSW, Iub is encapsulated in IP and transported over an IP, MPLS or Ethernet network. A Pseudo Wire card (ET-PSW) is used in the RBS; at the RNC site a Pseudo Wire Gateway (PWG) is required (AXN1600 or AXN800). The ET-PSW card and the PWG use Ethernet interfaces towards the transport network. From the ET-PSW towards the RBS, E1/T1 interfaces are used; from the PWG towards the RNC, STM-1 or E1/T1 interfaces are used.
ET-MFX11: native IP/Ethernet interface in the RBS (requires CBU)
It provides six 10/100/1000BASE-T electrical ports on Emily connectors and one connector which can take an SFP module. One of the 8 ports on the switch is internally connected to the IP transport part of the board. The board supports both NTP server and client mode. The board supports UDP terminations but not RTP terminations. IPsec is not supported.
Board comparison: ET-MFX11 (NodeB) / ET-MFX12 (RNC) / ET-MFX13 (RNC)
• RJ45 10/100/1000BASE-T ports: 6 / 6 / 1
• SFP for 1000BASE-X: 1 / 1 / 6
• Number of simultaneous UDP sessions: 5 000 / 9 000 / 9 000
• Session setups and releases per second: 200 / 430 / 430
• Maximum board throughput: 100 / 500 / 500
• Possible VLANs per port on the switch: 256 / 256 / 256
• Possible VLAN terminations per IP interface: 8 / 8 / 8
• MAC addresses on the switch: 4 K / 4 K / 4 K
ET-MFX11 board in the RBS (3206):
• Ethernet/IP backhaul: port 6 for the electrical cable, port 7 for the optical cable
• O&M from the CBU connects to port 3; O&M out of the ET-MFX uses port 2
• Connect the Ethernet transport cable to the system port; the O&M cable must be connected from the CBU board to the ET-MFX11 board
• Add a jumper cable for O&M from the ET-MFX (system port 2) to the CBU
ET-MFX12 (for RNC & RXI; requires RNC hardware R4 or higher, RAN P6)
The unit is a multiport Ethernet switch blade with IP termination and interworking functionality. It provides six 10/100/1000BASE-T electrical ports on Emily connectors and one connector which can take an SFP module.
It is an 8-port Ethernet switch; the switch is non-blocking and supports the Rapid Spanning Tree Protocol (RSTP). One of the ports on the switch is internally connected to the IP transport part of the board. This part provides termination of IP traffic and conversion to node-internal formats, as well as distribution of IP traffic using node-internal formats to other boards.
The board supports both NTP server and client mode. The board supports UDP terminations but not RTP terminations. IPsec is not supported. Auto-negotiation on the physical layer is supported: for the electrical links, the bandwidth can be negotiated in the range of 10 to 1000 Mbps, including half duplex, or set to a fixed value; for the optical link, only full duplex versus half duplex can be negotiated. The board supports automatic MDI/MDI-X crossover detection.
[Figure: ET-MFX12 boards in RNC 3810 (RNC model C); Iub over Ethernet/IP backhaul on port 6 (electrical) or port 7 (optical), with a redundancy jumper from the other ET-MFX on port 1]
ET-MFX13 (for RNC & RXI)
The unit is a multiport Ethernet switch blade with IP termination and interworking functionality. It provides one 10/100/1000BASE-T electrical port on an Emily connector and six connectors which can take SFP modules.
It is an 8-port Ethernet switch; the switch is non-blocking and supports the Rapid Spanning Tree Protocol (RSTP). One of the ports on the switch is internally connected to the IP transport part of the board. This part provides termination of IP traffic and conversion to node-internal formats, as well as distribution of IP traffic using node-internal formats to other boards.
The board supports both NTP server and client mode. The board supports UDP terminations but not RTP terminations. Auto-negotiation on the physical layer is supported: for the electrical link, the bandwidth can be negotiated in the range of 10 to 1000 Mbps, including half duplex, or set to a fixed value; for the optical links, only full duplex versus half duplex can be negotiated. The board supports automatic MDI/MDI-X crossover detection.
Migrate ATM/IP over Pseudo-Wire to IP
An ATM RBS using IP for transport over Pseudo-Wire is migrated to IP; the Iub control plane and user plane are migrated at the same time.
Prerequisites:
• All involved nodes are upgraded to P6 software
• There is a slot available in the RBS for the introduction of ET-MFX
• IP addresses are allocated
• The IP backbone network is prepared
• The O&M network is prepared for the migration of Mub from IP over ATM to IP over Ethernet
• Configuration and re-configuration files are prepared
• The RNC is prepared for IP-based Iub
Migration procedure:
1. Configure ET-MFX, network synchronization and Mub in the RBS.
2. From this point downtime starts for the RBS: lock the IubLink.
3. At the RBS site, move the Ethernet cable from ET-PSW to ET-MFX.
4. Verify O&M connectivity and network synchronization.
5. Create a Planned Area in OSS-RC and import the configuration file.
6. Activate the configuration files containing data for IP-based Iub in RBS and RNC.
7. Create a CV in all involved nodes.
8. Unlock the IubLink in the RNC. From this point, downtime ceases for the RBS node.
9. Create a CV in all involved nodes.
10. Verify traffic on the migrated RBS.
11. Activate configuration files to remove the ATM configuration, using OSS-RC.
12. Create a CV.
13. Remove ATM boards and cables.
Migrate ATM RBS to IP
Prerequisites:
• All involved nodes are upgraded to P6 software
• There is a slot available in the RBS for the introduction of ET-MFX
• IP addresses are allocated
• The IP backbone network is prepared
• The O&M network is prepared for the migration of Mub from IP over ATM to IP over Ethernet, including configuration in the O&M router for IP over Ethernet
• Configuration and reconfiguration files are prepared
• OSS planned areas are prepared
• The RNC is prepared for IP-based Iub
• The network synchronization server is configured
Procedure:
1. Configure ET-MFX, network synchronization and Mub in the RBS.
2. Verify O&M over Ethernet connectivity and synchronization over Ethernet.
3. Create a Planned Area in OSS-RC and import the configuration file.
4. Activate the configuration files containing data for IP-based Iub in RBS and RNC, using OSS-RC.
5. Create a CV in all involved nodes.
6. From this point downtime starts for this RBS: lock the IubLink.
7. For the MOC IubLink in the RNC, set: userPlaneTransportOption.ipv4=TRUE and userPlaneTransportOption.atm=FALSE; controlPlaneTransportOption.ipv4=TRUE and controlPlaneTransportOption.atm=FALSE.
8. For the MOC Iub in the RBS, set the same: userPlaneTransportOption.ipv4=TRUE and userPlaneTransportOption.atm=FALSE; controlPlaneTransportOption.ipv4=TRUE and controlPlaneTransportOption.atm=FALSE.
9. Unlock the IubLink in the RNC. From this point, downtime ceases for the RBS node.
10. Create a CV in all involved nodes.
11. Verify traffic on the migrated RBS.
12. Activate configuration files to remove the ATM configuration, using OSS-RC.
13. Create a CV in all involved nodes.
14. Remove ATM boards and cables.
Main Subrack layout in HW R5
[Figure: slot allocation, positions 1-28; SCB3 and SXB3 switch boards, TUB2 timing units and the fan unit occupy the system slots; GPB53 boards host the RNC MPs (SCCP MP active/standby, Central MP active/standby, two O&M MPs, Module MPs active/standby, Ranap+Rnsap MP); the remaining slots carry ETB (exchange terminal) and SPB21 boards]
Extension Subrack layout in HW R5
[Figure: slot allocation, positions 1-28; SCB3 boards and the fan unit occupy the system slots; GPB53 boards host RNC Module MPs (active/standby); the remaining slots carry SPB21 boards and positions usable for DB or ETB boards]
RXI 860
The RXI 860 is a larger version of the RXI 820; it was first introduced in WCDMA P5. Some features of the RXI 860:
• 3 subracks, 84 board positions
• 11 positions reserved for system boards
• 3 positions for either system or ET boards
• 70-73 positions available for ET boards
• Prepared for the Iub IP transport introduction in WCDMA RAN P6
• ET-MC41: channelized STM-1 line interface board, one port per board
• ET-MF41: channelized STM-1 line interface board, four ports per board
• ET-M4: 155 Mbps STM-1 (VC-4) SDH interface, two ports per board
• ET-MF4: 155 Mbps STM-1 (VC-4) SDH interface, four ports per board
RNC
Each RNC IP extension subrack (ES) is equipped with 2 or 3 ET-MFX boards for capacity and redundancy purposes. The RNC IP main subrack (MS) is equipped with 2 ET-MFX boards. It is expected that during the P6 timeframe, 2 ET-MFX boards (for redundancy) will be enough to cope with the total traffic handled by one subrack; a third ET-MFX board can be added if needed.
Reference equipment used in the RNC site solution:
• Symmetricom S200 Time Servers
• Juniper ISG/SSG 550 firewalls
• Extreme Networks Summit X450 and 400 switches
NodeB
Since the ET-MFX board does not support IPsec VPNs, a standalone SEGw beside the RBS is suggested for IPsec tunnel termination and firewalling. Juniper SSG5 SEGws have been selected in this solution as RBS SEGws. The SSG5 does not offer optical interfaces; if optical interfaces are mandatory, a more advanced SEGw such as the SSG20 should be selected instead.
Management tools for both the ATM and the IP configuration: OSS-RC 5.3, EMAS, MO Shell.
IP-RAN demo case: clearly defined network set-up
• Access network, P6
• Only STP usage
• Max. 2 RBSs: RBS 3206, each with CBU and ET-MFX11
• 1 RNC with 2 x ET-MFX12
• IP LAN routers between the RBS sites and the RNC
RNC physical connectivity
Each ET-MFX board in the RNC Main Subrack is connected to one RAN switch with one GE link. Iub traffic is carried over these links. RNC O&M traffic is delivered directly from the GPB boards to the site switches.
The RAN SEGws are connected to the RAN switches and site switches with GE links. Communication through the RAN switches is mandatory to enable VRRP and High Availability in the RAN SEGws, where IPsec tunnels towards the RBSs sites are started/terminated.
The RAN SEGws are connected to the Transport Provider PEs with GE links. In case multiple aggregated GE interfaces are required (total site traffic) the RAN SEGws must support link aggregation (Juniper/Netscreen ISG family).
The RAN SEGws are also directly connected through two High Availability links.
The RAN switches are connected to a site switch through one GE link only for its O&M traffic to be delivered through the O&M network.
The inter RAN switch links must be protected with 1+1 redundancy (link/board) since it is crucial for the solution to keep the inter RAN switch links operational continuously (provided both RAN switches are working too).
The RAN SEGws' O&M traffic can be delivered to the site switches on the OM_STN VLAN. The RAN switches' O&M traffic can be delivered to the site switches on the OM_GRAN VLAN. Hence, this VLAN should also be enabled in the site switches on those ports connected to the RAN switches.
Iub over IP user/control plane traffic and synchronization traffic is carried over the Iub_Traffic VLAN, defined in the RAN switches and ET-MFX boards. This VLAN is extended across the RAN switches to provide redundancy.
RNC O&M traffic is carried over a new OM_WRAN VLAN, configured in the site switches (extended across them). As described in section 7.1.2.2, Mobile-PBN O&M solution provides a VRF routing instance in the site routers where O&M traffic from all site nodes is routed to the OSS site through a VPN in the backbone.
This configuration enables High Availability in the RAN SEGws and VRRP in the L3 routing instance in the site switches.
If WCDMA RBS O&M traffic is also tunneled to the RAN SEGws, it is likewise transported over the OM_WRAN VLAN. The RAN SEGws can also use this VLAN for their own O&M traffic.
The RAN switches can be partitioned into multiple virtual bridges. Each virtual bridge can run an independent Spanning Tree instance, called a Spanning Tree domain. Each STP domain has its own root bridge and active path. After an STP domain is created, one or more VLANs can be assigned to it. In this design, all the RNC VLANs belong to the same RSTP domain.
When a VLAN is added to the RSTP domain, it becomes a member of the STP domain. There are two types of member VLANs in an STP domain:
• Carrier VLAN: defines the scope of the RSTP domain, including the physical and logical ports that belong to the domain and the associated topology. Only one carrier VLAN can exist per STPD; the Iub_Traffic VLAN is the carrier VLAN in this design.
• Protected VLANs: merely users of the calculated topology. The rest of the VLANs defined between the RAN switches are protected VLANs.
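The carrier/protected membership rules can be captured in a small toy model; this illustrates the rules stated above and is not a switch API:

```python
class StpDomain:
    """Toy model of an RSTP domain as described above: exactly one carrier
    VLAN defines the domain's scope, while any number of protected VLANs
    merely use the calculated topology."""

    def __init__(self, name: str, carrier_vlan: str):
        self.name = name
        self.carrier_vlan = carrier_vlan          # only one carrier VLAN per STPD
        self.protected_vlans: set[str] = set()

    def add_protected(self, vlan: str) -> None:
        if vlan == self.carrier_vlan:
            raise ValueError("VLAN is already the carrier VLAN of this domain")
        self.protected_vlans.add(vlan)

# In the design above, Iub_Traffic is the carrier VLAN and the other VLANs
# defined between the RAN switches are protected members of the same domain.
domain = StpDomain("rnc_stpd", "Iub_Traffic")
for vlan in ("OM_WRAN", "OM_GRAN", "OM_STN"):
    domain.add_protected(vlan)
```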
NodeB connectivity
In order to provide the desired traffic separation at the RBS site, two VLANs are defined at the RBS and RBS SEGw: Iub_Traffic VLAN carries all user/control and synchronization traffic, whereas RBS_OM VLAN carries all O&M traffic.
TN L2 VPN: different L2 VLANs/VPNs for different traffic types (Iub user/control plane and O&M). Each IPsec tunnel from the RBS site can be mapped to one VLAN/VPN.
TN L3 VPN: different L3 VLANs/VPNs for different traffic types (Iub user/control plane and O&M). Each IPsec tunnel from the RBS site can be mapped to one VLAN/VPN.
Why security?
To ensure that information generated by or relating to a user is adequately protected against misuse or misappropriation. In the RAN case this means that user data, user identity and call-related information such as encryption keys, dialled numbers and location data are protected.
To ensure that the resources and services provided by serving networks and home environments are adequately protected against misuse or misappropriation. In the RAN case this means that data related to the proper operation of RNCs/BSCs and RBSs are protected, for example against manipulation.
These security problems come about for three main reasons:
• More connectivity. The IP network gives the RBS connectivity not only to the BSC/RNC but also to OSS and, when needed, the Time Server. If an IP network shared with other systems is used, even wider connectivity may result, depending on the configuration of the network.
• Use of insecure transmission. One of the cost-reduction drivers of IP services is the use of public or semi-public networks. These networks are typically under less operator control than today's TDM/ATM networks and give external agents an opportunity to wiretap, insert or modify IP traffic.
• IP in itself does not contain any security mechanisms; it is a pure packet distribution system without any means to decide whether a packet is valid or not. Protection mechanisms have therefore had to be devised on top of IP, and today very few entities using IP lack optional protection mechanisms.
Use IPsec
Although IPsec VPN is not strictly mandatory for the RAN applications and protocols to work, its use is recommended to securely transport all traffic between RAN sites, and it is described as part of the PRAN security solution.
In addition to IPsec VPNs, a number of further measures can be taken to limit connectivity in the GSM/WCDMA IP RAN network:
• Implement access control lists in site routers (SEGw) at the RBS, BSC/RNC and OSS sites
• Implement firewall policies (SEGw) in front of sensitive nodes such as BSC/RNC and OSS
SEGw
The security architecture for RAN IP traffic can be implemented in two different ways, based on whether RAN traffic is carried over one or several IPsec VPNs. We denote them as: common virtual network (common virtual private network) and independent virtual networks (independent virtual private networks).
Both approaches use two SEGws (for redundancy) at the BSC/RNC, Time Server and OSS sites. At the RBS site only one SEGw is needed, since redundancy is not expected to be required there. As described later, at the RBS site the SEGw functionality can be provided by the SIU, PicoSTN or a standalone RBS SEGw.
At the BSC/RNC, Time Server and OSS sites the IPsec tunnel is terminated in whichever of the two SEGws is active at the time. At the RBS site the IPsec tunnel can be terminated in a standalone SEGw, SIU or PicoSTN device.
The ET-MFX variants for RNC and RBS do not support IPsec in WCDMA RAN P6; hence an external SEGw is needed to provide the IPsec termination functionality.
Common virtual network (common virtual private network)
In this scenario all RAN IP traffic (user/control plane, synchronization and O&M) is encapsulated over the same IPsec tunnel across the transport network from the RBS site to the BSC/RNC site. The impact of introducing IP/Ethernet is minimized since O&M infrastructure in the BSC/RNC site can be reused, perhaps only increasing the O&M capacity towards the OSS site to cope with additional O&M traffic (from/to the STN device, RBS SEGw, Time Servers, etc.).
This option may be adopted by operators with self built networks based on traditional point to point links from the RBS site to the BSC/RNC site that do not wish to modify the existing architecture when introducing IP in RAN.
[Figure: common virtual network; a single IPsec tunnel across the transport network between the RBS SEGw at the RBS site and the redundant RAN SEGw pair (RAN SEGw 1/2) at the BSC site]
Independent virtual networks (independent virtual private networks)
In an IP RAN deployment, traffic separation can be implemented by means of three different virtual networks: the O&M network, the synchronization network and the traffic network.
In terms of security, this translates into the configuration of three IPsec tunnels at the RBS site. One IPsec tunnel carries O&M traffic from the RBS site (STN, SEGw, etc.) and is terminated in the SEGws deployed at the OSS site. The second IPsec tunnel transports synchronization packets between the RBS site and the site containing the Time Servers. The third IPsec tunnel carries user/control plane traffic between the RBS site and the BSC/RNC site.
For a WCDMA RNC using ET-MFX, only two IPsec tunnels are needed: one towards the OSS site for O&M, and one carrying both SoIP synchronization and traffic.
[Figure: independent virtual networks; the RBS SEGw at the RBS site terminates separate IPsec tunnels across the transport network towards redundant RAN SEGw pairs (RAN SEGw 1/2) at the NOC site and at the BSC site]
Redundancy
SEGw redundancy is recommended at the BSC/RNC, OSS and Time Server sites (in the common-network approach all RBS traffic is sent to the BSC/RNC sites). In this way, single points of failure in the SEGw and the connected links can be eliminated.
The SEGws must provide a High Availability (HA) solution for redundancy purposes. The recommended setup consists of two SEGws working in active/passive mode. The state of the active gateway, such as the connection table and other vital information, is continuously copied to the inactive gateway. When the active SEGw fails over to the inactive one, the latter already knows which connections are active, and traffic can continue to flow uninterrupted.
To accomplish this, both SEGws share a number of VIP and VMAC addresses, which are seized by the active SEGw and provide single tunnel endpoint addresses to the SEGws/SIU/PicoSTN in RBS sites.
SEGw redundancy is not assumed at the RBS site because of the additional cost.
Synchronization
Traditional GSM/WCDMA access networks based on PDH/SDH transport provide a valid synchronization reference for clock regeneration in the RBSs. When the transport network has an asynchronous physical layer such as Ethernet, the RBS cannot be synchronized via the layer 1 transmission interfaces as with traditional PDH/SDH links. Therefore, new synchronization sources and methodologies need to be implemented in order to have alternatives to GPS-based synchronization.
Ericsson's technical solution to this challenge consists of synchronizing the RBSs by exchanging time-stamped NTP packets with a Time Server, which can be standalone equipment deployed in the network (GSM) or an embedded function in RAN nodes (WCDMA). An algorithm in the SoIP client in the RBS filters out the time stamps with the least delay variation and calculates a target frequency for the RBS. The algorithm is based on an Ericsson patent.
A client-server approach is used between the RBSs and the Time Servers. The SoIP client of an IP-connected RBS sets up an association with a Time Server and aligns its frequency to it. If the selected Time Server is deemed unusable by the SoIP client, the client can select a backup Time Server.
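The request/response exchange can be sketched with the standard NTP four-timestamp arithmetic. The actual filtering algorithm is Ericsson-proprietary, so the selection step below (keep the minimum-delay exchange) is only an illustrative stand-in:

```python
def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    """Standard NTP arithmetic from the four timestamps (seconds):
    t1 client transmit, t2 server receive, t3 server transmit, t4 client receive."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock offset
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

def best_sample(samples):
    """Illustrative stand-in for the proprietary filter: keep the exchange with
    the smallest round-trip delay, whose offset is least disturbed by queuing."""
    return min(samples, key=lambda s: ntp_offset_delay(*s)[1])
```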
WCDMA Synchronization Solution
In WCDMA RAN P6, the RNC/RXI is equipped with Time Server functionality in the ET-MFX boards. This document only addresses the case where the SoIP server runs on an ET-MFX board in the RNC; the ET-MFX in the RBS implements only the SoIP client.
The Time Server in the RNC's ET-MFX board and the SoIP client in the RBS communicate using NTP over UDP, running on the same IP subnet/VLAN as the Iub user/control plane traffic. From that point of view, the same connectivity design used for Iub user/control plane traffic can be used for synchronization traffic.
One ET-MFX can handle the NTP traffic for all RBSs in the largest RNC type (up to 768 RBSs). It is nevertheless recommended to enable the Time Server functionality in all ET-MFX boards in the RNC so that the load is evenly distributed. One possible way to distribute RBSs across the SoIP servers on ET-MFX boards in the RNC is as follows:
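The slide's actual distribution scheme is not reproduced in this extract; a simple round-robin assignment, sketched here with purely illustrative server names, achieves the even load the text asks for:

```python
def assign_rbs_to_soip_servers(rbs_ids, server_addresses):
    """Round-robin the RBSs across the SoIP servers (ET-MFX boards in the RNC)
    so that the NTP load is spread evenly across the boards."""
    return {rbs: server_addresses[i % len(server_addresses)]
            for i, rbs in enumerate(rbs_ids)}
```

With 768 RBSs and, say, four ET-MFX boards enabled as SoIP servers, each board then serves 192 clients.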
Network Synchronization – Implementation
[Figure: at the RNC site, the NTP server with HW-based timestamping runs on the ET-MFX boards, with TUB_A/TUB_B taking an external sync reference and distributing 19.44 MHz (and 48.6 MHz) clocks; NTP request/response packets traverse the IP connectivity network; at the RBS site, the SoIP client software with HW-based NTP timestamping runs on the ET-MFX towards the CBU]
External Time Servers
The ET-MFX board is equipped with Time Server functionality to provide WCDMA RBSs with accurate time stamps. However, the ET-MFX board cannot be used as a Time Server for GSM RBSs in the WCDMA P6/BSS 07B releases; this becomes possible for SIU and Pico RBSs in BSS 08A.
Symmetricom Time Servers can also be used as a backup alternative to ET-MFX boards for WCDMA RBS synchronization.
Each SoIP client generates between 1 and 10 NTP requests per second, depending on the synchronization state (1 NTP packet per second in locked mode and 10 per second in unlocked mode). On average, we assume 5 NTP requests per second per RBS. The number of Symmetricom servers is then dimensioned from the aggregate NTP request rate.
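The exact dimensioning formula is not reproduced in this extract. The arithmetic it implies, aggregate request rate (average 5 NTP requests/s per RBS) divided by a per-server capacity, can be sketched as follows; the per-server capacity figure is an assumed placeholder, not a Symmetricom specification:

```python
import math

def num_time_servers(num_rbs: int,
                     avg_req_per_rbs: float = 5.0,
                     server_capacity_rps: float = 500.0) -> int:
    """Estimate the number of standalone Time Servers needed.
    avg_req_per_rbs: each RBS SoIP client sends 1..10 NTP requests/s (avg. 5).
    server_capacity_rps: ASSUMED NTP requests/s one server can handle."""
    total_rps = num_rbs * avg_req_per_rbs
    return max(1, math.ceil(total_rps / server_capacity_rps))
```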
Network (frequency) synchronization – overview
• Synchronization of RBSs based on timing over IP packets
• Possibility to use multiple SoIP servers
• Variable rate of timing packets, as required by the different RBSs
• Common solution for WCDMA and GSM
[Figure: an integrated SoIP server in the RNC and a local SoIP server in a remote area, both reached over the IP transport service]
Network synchronization – Configuration (RNC)
[Figure: RNC synchronization configuration. The NTP server runs on the ET-MFX board (packet distributor, Ethernet switch, host), with timing from the TUB/TU. Relevant MOs and attributes: IpAccessHostEt, IpInterface (ipAddress, ntpServerMode, defaultRouter0, networkPrefixLength, vid, vLan).]

• Configure as NW Synch Server
• Typically activated on all ET-MFX boards in the RNC
• Share IP host with Iub UP
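As a way to visualize the server-side attribute set, a hypothetical sketch in plain Python (only the MO and attribute names come from the slide; every value is a placeholder, not a recommended setting):

```python
# Hypothetical representation of the RNC-side (NTP server) MO attributes.
# All values are placeholders; only the MO/attribute names are from the slide.
rnc_ntp_server_config = {
    "IpAccessHostEt": {
        "IpInterface": {
            "ipAddress": "10.0.1.1",       # IP host shared with Iub user plane
            "ntpServerMode": True,          # enable the NW synch server role
            "defaultRouter0": "10.0.1.254",
            "networkPrefixLength": 24,
            "vid": 100,                     # VLAN id
        }
    }
}
```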
Network synchronization – Configuration (RBS)
[Figure: RBS synchronization configuration. The NTP client runs on the ET-MFX board (packet distributor, Ethernet switch, IP host) in the RBS (CBU), with the TU as timing unit. Relevant MOs and attributes: IpAccessHostEt, IpInterface (ipAddress, ntpServerMode, defaultRouter0, networkPrefixLength, vid, vLan), IpSyncRef (ntpServerIpAddress), Synchronization (synchReference, synchPriority).]

• Configure as NW Synch Client in the RBS
• Share IP host with Iub UP
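The client side can be pictured the same way; a hypothetical sketch (MO and attribute names are from the slide, all values are placeholders):

```python
# Hypothetical representation of the RBS-side (NTP client) MO attributes.
# Values are placeholders; only the MO/attribute names are from the slide.
rbs_ntp_client_config = {
    "IpAccessHostEt": {
        "IpInterface": {
            "ipAddress": "10.0.2.1",       # IP host shared with Iub user plane
            "defaultRouter0": "10.0.2.254",
            "networkPrefixLength": 24,
            "vid": 100,
        }
    },
    "IpSyncRef": {
        "ntpServerIpAddress": "10.0.1.1",  # points at the RNC ET-MFX server
    },
    "Synchronization": {
        "synchReference": "IpSyncRef=1",   # reference used for NW synch
        "synchPriority": 1,
    },
}
```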
Requirements on IP TN SLA
Requirements on native IP Iub performance

RT part:
• Max delay: ≤30 ms (recommendation: ≤5 ms)
• Max delay variation: ≤10 ms
• Max packet loss: 10⁻⁶

HS BE and R99 BE data part:
• Max delay, HS BE data part: ≤100 ms, including delay variation (recommendation: ≤10 ms)
• Max delay, R99 BE data part: ≤50 ms (recommendation: ≤10 ms)
• Max delay variation, R99 BE data part: ≤12 ms
• Max packet loss, HS+R99 BE part: 10⁻⁴ (recommendation: 10⁻⁶)
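These limits lend themselves to an automated compliance check; a minimal sketch, assuming the measured statistics come from an external probe (the function and variable names are illustrative):

```python
# Sketch: verify measured Iub transport statistics against the SLA limits above.
RT_LIMITS  = {"delay_ms": 30,  "delay_var_ms": 10, "loss": 1e-6}
R99_LIMITS = {"delay_ms": 50,  "delay_var_ms": 12, "loss": 1e-4}
HS_LIMITS  = {"delay_ms": 100, "loss": 1e-4}   # HS delay limit includes variation

def meets_sla(measured, limits):
    """True if every measured metric is within its limit."""
    return all(measured[k] <= v for k, v in limits.items())

print(meets_sla({"delay_ms": 4,  "delay_var_ms": 2, "loss": 1e-7}, RT_LIMITS))  # True
print(meets_sla({"delay_ms": 40, "delay_var_ms": 2, "loss": 1e-7}, RT_LIMITS))  # False
```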
Requirements on Transport Solution – Synchronization
• At least 1% of the NTP packets arrive at the STN device within a time window of 20 µs from the minimum delay.
• At least 10% of the NTP packets arrive at the STN device within a time window of 100 µs from the minimum delay.
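This "lucky packet" style requirement can be checked by measuring how many delay samples fall within a window of the observed minimum; a sketch (sample values are invented for illustration):

```python
# Sketch: check that enough NTP packets arrive close to the minimum delay.
def fraction_within(delays_us, window_us):
    """Fraction of packets within window_us of the minimum observed delay."""
    floor = min(delays_us)
    return sum(1 for d in delays_us if d - floor <= window_us) / len(delays_us)

def meets_sync_requirement(delays_us):
    return (fraction_within(delays_us, 20) >= 0.01 and
            fraction_within(delays_us, 100) >= 0.10)

# 100 samples: 5 within 20 us and 10 within 100 us of the floor.
sample = [500] * 5 + [580] * 5 + [1500] * 90
print(meets_sync_requirement(sample))  # True
```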
User/Control plane traffic
• The packet drop rate shall be better than 10⁻⁴ (maximum is 10⁻³).
• The sum of Abis delay and delay variation shall be below 15 ms one way (maximum is 50 ms).
IP / Ethernet QoS
Delay-sensitive traffic is prioritized over less sensitive traffic (in times of congestion). The IP RAN transport QoS is implemented with DSCP and Ethernet P-bit marking.
RAN nodes tag egress IP packets with a configured DSCP value; the value depends on the RAB type. (The DSCP is the 6 most significant bits of the DiffServ field in the IP header.)
The ET-MFX maps the DSCP value of the IP packet to a configurable Ethernet P-bit value.
Ethernet switch vendors may support different priority mappings in their devices (IEEE defines up to 8 priority levels; the ET-MFX supports 4 priority queues).
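The DSCP-to-P-bit step can be pictured as a simple lookup table; a sketch (the mapping values below are illustrative assumptions, not the ET-MFX defaults):

```python
# Sketch: map a packet's DSCP to an Ethernet P-bit (802.1p priority).
# The table is an assumed example mapping, not the product default.
DSCP_TO_PBIT = {
    46: 5,  # e.g. EF: voice / delay-sensitive traffic
    26: 3,  # e.g. AF31: signalling
    10: 1,  # e.g. AF11: interactive data
    0:  0,  # best effort
}

def pbit_for(dscp):
    # Unknown DSCP values fall back to best effort.
    return DSCP_TO_PBIT.get(dscp, 0)

print(pbit_for(46), pbit_for(8))  # 5 0
```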
QoS
Due to uncertainties regarding the mechanisms available in this type of IP-based transport network, and the highly sensitive traffic types existing in the RAN, the following two initial recommendations are made:
• The Quality of Service solution should be ensured by over-provisioning the capacity required from the Transport Service Provider (for GBR traffic, in case a sufficient level of QoS differentiation is supported by the network).
• Well-defined SLA parameters regarding packet drop rate, delay and delay variation, following the requirements listed in section 7.29, should be agreed with the Service Provider and met by the transport network.
RNC and RBS ET-MFX
The RNC and RBS P6 software releases support basic DiffServ classification and marking. The default DSCP allocation in the ET-MFX is summarized in the table below:
This simple classification can be further extended with the optional feature “Configurable Transport Bearer QoS class”. This feature allows QoS class allocation for user service transport bearers to be configured to the operator's choice.
By activating this feature in the Ericsson WCDMA RAN we can set a different DSCP value to each of the traffic types listed below.
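With the feature active, the per-traffic-type DSCP allocation could be modelled as below (the traffic-type names and DSCP values are illustrative placeholders; the original list and table are not reproduced in this extract):

```python
# Sketch: operator-configurable DSCP per transport-bearer QoS class.
# Traffic-type names and DSCP values are illustrative assumptions.
bearer_dscp = {
    "speech":      46,  # e.g. EF
    "signalling":  26,
    "streaming":   18,
    "interactive": 10,
    "background":   0,  # best effort
}

def mark_packet(traffic_type):
    """Return the DSCP to write into the DiffServ field (6 MSBs)."""
    return bearer_dscp[traffic_type]
```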
In the Juniper/Netscreen SEGw, if a packet already contains a DSCP value and the policy does not specify a different DSCP value to replace it, the SEGw copies the inner packet's DSCP code to the outer ESP header and retains the marking in the inner packet. In this way the IPsec encapsulation does not modify the node's DSCP marking.
QoS in RAN Switch
The Extreme Networks X450 switches support the following DiffServ features:
• Classification of incoming packets into traffic classes according to specified criteria called "traffic groupings" (DSCP, etc.)
• Queuing of all traffic in the respective queues according to the traffic classification
• Optional DSCP marking
The switches support 8 egress queues per port and thus 8 QoS profiles (named QP1 to QP8).
For classification according to the incoming DSCP, there is a default mapping from DSCP to QoS profile. This mapping can be modified in order to assign any QoS profile to any set of incoming DSCP values.
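A common default on 8-queue switches maps each block of 8 DSCP values to one profile; a sketch (the block-of-8 default is an assumption about the X450 behaviour, so check the switch documentation for the actual table):

```python
# Sketch: default and overridden DSCP -> QoS-profile classification.
# The block-of-8 default mapping is an assumption, not the verified X450 table.
def default_qp(dscp):
    return f"QP{dscp // 8 + 1}"  # DSCP 0-7 -> QP1, ..., 56-63 -> QP8

overrides = {46: "QP8"}          # e.g. force EF into the top-priority queue

def qp_for(dscp):
    return overrides.get(dscp, default_qp(dscp))

print(qp_for(0), qp_for(46), qp_for(63))  # QP1 QP8 QP8
```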
QoS for Time Server
Symmetricom Time Servers have no DiffServ capabilities. Hence, NTP packets originating from the Time Servers must be re-marked with the appropriate DSCP and P-bit (802.1p) values at the RAN switches.
Native IP introduction in Iub
No back-off in functionality or capacity!

[Figure: IP RAN connecting RBSs (each with a built-in Ethernet switch and 7 x GbE/FE Ethernet ports) to the RNC.]

IP Transmission Interface – RNC
• Native Iub IP transport (3GPP TS 25.430)
• Two ETB variants for the RNC:
  – ET-MFX12: 6 x RJ45, 1 x SFP
  – ET-MFX13: 1 x RJ45, 6 x SFP
• Iub termination capacity: up to 500 Mbps
• QoS: VLAN 802.1Q, L2 Ethernet priority 802.1p, L3 IP DiffServ priority
IP Transmission Interface – RBS
• Native Iub IP transport (3GPP TS 25.430)
• One ETB variant for the RBS: ET-MFX11 (6 x RJ45, 1 x SFP)
• Iub termination capacity: up to 100 Mbps
• QoS: VLAN 802.1Q, L2 Ethernet priority 802.1p, L3 IP DiffServ priority
Ericsson STP Setup: (Example)
Thank you