
HUAWEI TECHNOLOGIES CO., LTD.

Harnessing the Power of HFC Node Facility

May 6, 2014


Table of Contents

Abstract ...................................................................................................................................................................... 4

1 Introduction ....................................................................................................................................................... 5

2 HFC Migration Tools and Challenges .................................................................................................................. 6

2.1 Existing Tools ...................................................................................................................................... 6

2.2 Emerging Tools ................................................................................................................................... 9

2.2.1 DOCSIS 3.1 ......................................................................................................................................... 9

2.2.2 Ethernet Node.................................................................................................................................. 10

3 A Facility View of the HFC Network ................................................................................................................. 11

3.1 HFC Node Facility ............................................................................................................................. 12

3.1.1 Physical Elements ............................................................................................................................. 12

3.1.2 Service Attributes ............................................................................................................................. 13

4 The Node Facility Model .................................................................................................................................. 14

4.1 The Bandwidth Model ..................................................................................................................... 15

4.2 Downstream Bandwidth Modeling Result ....................................................................................... 17

4.3 Upstream Bandwidth Modeling Result ............................................................................................ 18

4.4 Bandwidth Modeling Summary ....................................................................................................... 19

5 Ethernet Node ................................................................................................................................................. 21

5.1 Network Maintenance Considerations ............................................................................................ 22

5.2 Power Consumption Analysis ........................................................................................................... 24

5.2.1 Overall Power Consumption Comparison ........................................................................................ 25

6 Overview of Baseband Optical Technologies ................................................................................................... 26

6.1 Comparison of Technical Parameters of GPON and EPON ............................................................... 28

6.2 TWDM-PON ..................................................................................................................................... 30

7 Summary .......................................................................................................................................................... 32

8 References ....................................................................................................................................................... 33

9 Abbreviations and Acronyms ........................................................................................................................... 33

Page 3: Harnessing the Power of HFC Node Facility

Page 3 of 35

Harnessing the Power of HFC Node Facility

WHITE PAPER

Figure 1 Spectrum Map Evolution between 2004 and 2014 ________________________________________________ 7

Figure 2 Spectrum Map Evolutions from 2014 to 2022 ___________________________________________________ 10

Figure 3 A Facility View of the HFC Network ___________________________________________________________ 12

Figure 4 Physical Elements of HFC Node Facility ________________________________________________________ 13

Figure 5 Service Attributes of HFC Node Facility _________________________________________________________ 14

Figure 6 Ethernet Node Facility Model ________________________________________________________________ 15

Figure 7 Downstream Bandwidth Growth Projection 50% CAGR____________________________________________ 16

Figure 8 Downstream Bandwidth Demand per Service Group @ 50% CAGR for 2015-2021 ______________________ 18

Figure 9 Downstream Bandwidth Demand per Service Group @ 35% CAGR for 2015-2021 ______________________ 18

Figure 10 Upstream Bandwidth Demand per Service Group @ 30% CAGR for 2015-2021 ________________________ 19

Figure 11 Small Cell Growth Rate in Dense Urban Areas __________________________________________________ 22

Figure 12 Core Hardware Block Diagrams of Analog Fiber Node and Digital Ethernet Node ______________________ 24

Figure 13 GPON EPON Technology Roadmaps __________________________________________________________ 28

Figure 14 GPON, XG-PON1, and TWDM-PON Coexistence _________________________________________________ 29

Figure 15 Key Technical Challenges Resolved in TWDM-PON ______________________________________________ 31

Figure 16 Coexisting TWDM-PON and GPON Deployment Scenarios ________________________________________ 31

Table 1 Existing HFC Migration Tools __________________________________________________________________ 8

Table 2 Emerging HFC Migration Tools ________________________________________________________________ 10

Table 3 Concurrency Ratio per Service Group ___________________________________________________________ 16

Table 4 Total Bandwidth Demand per Service Group in 2021 ______________________________________________ 19

Table 5 Migration Time-line per Service Group _________________________________________________________ 20

Table 6 Recommended Migration Steps toward 175HHP _________________________________________________ 20

Table 7 Recommended Migration Steps toward 60HHP __________________________________________________ 20

Table 8 Comparison of Network Maintenance Aspects of Fiber Node and Ethernet Node ________________________ 23

Table 9 Power Consumption Comparison of Analog Fiber Node and Digital Ethernet Node ______________________ 24

Table 10 Power Consumption Comparison: M-CMTS vs. Distributed Ethernet Node ____________________________ 25

Table 11 A Summary of Baseband Optical Technologies __________________________________________________ 27

Table 12 Downstream and Upstream Wavelength Bands of GPON and EPON _________________________________ 29

Table 13 Technical Parameters of GPON and EPON ______________________________________________________ 29


Abstract

Driven by a 50% CAGR in bandwidth consumption, MSOs are increasingly focused on the cost-performance of the HFC network. The unabated upward march in bandwidth demand is expected to continue as consumers turn to ever higher volumes of IP content and services. The trend toward Gigabit access has forever changed the broadband access competitive landscape, and the race is on for operators to seek out the lowest cost-per-bit solution for their access networks.

The Outside Plant (OSP) of the HFC network, specifically the coaxial plant connecting the optical node to the subscribers, has seen its share of upgrades over the years, but the network topologies remain essentially unchanged. In today's highly competitive landscape, the multi-gigabit capacity of the coaxial plant stands as a key differentiator for the MSOs. A timely question is how best to capture the full performance potential of the coaxial plant.

In trying to answer this question, this paper takes a close look at the coaxial plant as a set of modular broadband access facilities. Along with a definition of the HFC node facility, several sizes of node facility are applied to a bandwidth demand projection model to determine suitable starting points and migration steps. A series of commercially available baseband optical technologies are presented and compared.

As the analysis in this paper shows, the evolution of the HFC network has reached an inflection point where a combination of baseband optical technologies and an optimized Ethernet Node is fully capable of meeting the cost-performance target and represents a promising HFC migration option.


1 Introduction

Since the deployment of DOCSIS, MSOs have consistently maintained a competitive edge in

broadband access by adopting a series of technological enhancements including the highly

successful DOCSIS 3.0 introduced in 2008. The cycle of enhancements continues today with the

ongoing migration toward Converged Cable Access Platform (CCAP), to be followed by DOCSIS 3.1,

anticipated to start in 2016.

In recent years, a persistent shift in consumer video consumption behavior drove steady increases in annual bandwidth demand, confirming a long-established downstream bandwidth growth rate of 50% CAGR [1]. As growing racks of equipment were installed in the hub to expand DOCSIS and edge QAM capacities, it became clear that many hubs would soon reach several performance ceilings, including space, power and cooling. With the advent of CCAP, facility issues in many hubs can be addressed at the cost of equipment swap-outs. Practical business and operational considerations make migration toward CCAP a multi-year cap-and-grow exercise of relocating, consolidating, and installing old and new equipment among multiple hubs. Yet questions remain as to the longevity of an ever-expanding integrated platform within the confines of the hub facility.

The steady annual growth in downstream bandwidth demand is a global phenomenon. In some developing markets, where ARPU is significantly lower, low-cost Ethernet over Coax (EOC) technologies are being deployed to meet an extremely challenging set of business cases. Deployment of these low-cost EOC technologies, such as HomePlug AV, creates a lower-performance service delivery platform in regional markets. Continued divergence of incompatible cable access technologies will not only inhibit the sharing of product and service innovations but also significantly limit the potential for DOCSIS to scale on a global basis.

Preserving a global scale for continuous DOCSIS evolution requires a significantly lower-cost DOCSIS access platform. Coupled with the finite resources available in the hub facility, this new price-performance target sets the stage for a potential paradigm shift in HFC network migration.

This paper examines the evolutionary potential of the HFC network as a set of broadband access facilities: the HFC hub facility and the HFC node facility. The HFC hub facility has been thoroughly

examined by the industry and the associated engineering considerations were identified as part of

the key drivers for CCAP. The focus of this paper is on the HFC outside plant, which consists of

modular HFC node facilities with a specific set of physical elements and service delivery attributes.

Several sizes of the node facility representing the core samples of HFC network are identified.

These node facilities are then applied to a commonly referenced 35~50% CAGR downstream and

30% CAGR upstream bandwidth demand projection model to demonstrate how capacity demands

of the node facility can be met for the next seven years (2015-2021).


2 HFC Migration Tools and Challenges

Since the fiber node became an integral part of the HFC network in the 1990s, a number of HFC

performance enhancement tools have been put into practice with good results. These include

node split, spectrum expansion, and several options to improve spectrum utilization. With

increasing capacity utilization, the pace of the migration has been accelerating in the last decade.

Although each one of these currently available options is capable of achieving a certain level of

spectrum capacity gains, the cable industry came to the realization a few years ago that other

options will be necessary to meet the bandwidth demand forecast through the end of the decade.

Starting in 2012, the cable industry embarked on a standardization effort aimed at adopting

Orthogonal Frequency-Division Multiplexing (OFDM) for HFC access, with the formation of DOCSIS

3.1 working groups by CableLabs. As the first fundamental change in HFC access technologies,

DOCSIS 3.1 achieved a new milestone and forever changed the tenor of HFC migration. With the

coming introduction of DOCSIS 3.1, HFC migration has hit an inflection point where other

cost-effective options such as distributed Ethernet Node will also be leveraged to harness the full

potential of the HFC network.

2.1 Existing Tools

Among the existing migration tools, node split is utilized to specifically address the growth in

DOCSIS bandwidth demand. It was put into practice soon after the initial deployment of fiber

nodes, and over the years many larger nodes have been reduced down to the 500 to 750 households passed (HHP) range. With the introduction of DOCSIS 2.0 and 3.0, bandwidth

performance gains through higher order modulations and channel bonding created enough

headroom for bandwidth demand growth and eased the rate of additional splits toward even

smaller size nodes. However, the benefits of fiber-deep architecture including

Fiber-to-the-Last-Amplifier (FTTLA) are well understood. In recent years, node splits down to

around the 175 HHP size have also been implemented to better position the HFC network to

address competitive pressure and support future growth.

In addition to node split, several other tools are available to increase spectrum capacity and

utilization. Spectrum capacity can be expanded by physically upgrading the HFC plant to extend

the upper limit of the RF spectrum range to 750MHz, 870MHz or 1GHz. This is a major undertaking

involving extensive component upgrades. The vast majority of HFC networks today have already

been upgraded to either 750MHz or 870MHz. Future migration toward the 1GHz plant will likely be considered together with moving the US/DS split for upstream expansion and the DOCSIS 3.1 rollout, as part of a comprehensive re-engineering of the HFC network.


Apart from spectrum expansion, other options not requiring outside plant upgrades are available

to improve spectrum utilization:

1. Analog reclamation

2. MPEG-2 to MPEG-4 AVC / HEVC conversion

3. Narrowcast video

Figure 1 is a sample spectrum usage diagram that illustrates the dramatic shift in HFC spectrum allocations over the last decade.

[Figure content omitted: two spectrum maps, 2004 and 2014. The return band (20-42 MHz) carries STB return, DOCSIS return, CBR voice return, and status monitoring. In 2004 the forward band (54-750 MHz) is split between analog video and digital broadcast, SDV/VOD, DOCSIS forward, and CBR voice forward. By 2014 the forward band extends to 1000 MHz and is divided among a small residual analog segment, broadcast SDTV, broadcast HDTV, and UHDTV/narrowcast/unicast services (SDV, VOD, DOCSIS forward, VoIP forward, IPTV).]

Figure 1 Spectrum Map Evolution between 2004 and 2014

Analog reclamation converts existing analog video channels to digital QAM channels to improve

spectrum utilization. For each 6MHz QAM channel, up to 12 standard-definition channels or 3

high-definition channels can be transmitted instead of a single analog channel. As part of the

analog reclamation program, Digital Terminal Adapters (DTAs) are deployed in the subscriber

premises to support standard TVs with RF input. Depending on the market, the DTA is used to convert MPEG-2 to NTSC or DVB-C to PAL.
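To put the reclamation gain in concrete terms, the following short calculation illustrates the arithmetic; the 60-channel analog line-up is a hypothetical example chosen for illustration, not a figure from this paper:

```python
# Hypothetical example: spectrum freed by converting an analog line-up to digital SD.
ANALOG_CHANNELS = 60          # assumed analog channel count (illustrative only)
CHANNEL_WIDTH_MHZ = 6         # each analog channel occupies one 6 MHz slot
SD_PER_QAM = 12               # up to 12 SD programs per 6 MHz QAM channel (per the text)

qam_needed = -(-ANALOG_CHANNELS // SD_PER_QAM)      # ceiling division -> 5 QAM channels
freed_mhz = (ANALOG_CHANNELS - qam_needed) * CHANNEL_WIDTH_MHZ
print(f"{qam_needed} QAM channels carry the line-up; {freed_mhz} MHz of spectrum freed")
# 5 QAM channels carry the line-up; 330 MHz of spectrum freed
```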

Conversion from MPEG-2 to a high efficiency encoding scheme such as MPEG-4 AVC or HEVC can

realize more than 50% bandwidth savings. Many subscription television service providers are

using MPEG-4 AVC to deliver HDTV [2] while HEVC is a key enabler for 4K Ultra HD [3]. However,

as early adopters of HD services, MSOs are saddled with millions of MPEG-2-only digital set-top

boxes (STBs). Even though a significant number of MPEG-4 capable STBs have been deployed as

replacement units for the past several years, linear video today is still distributed in MPEG-2,

making a full transition from MPEG-2 a longer term migration option.

Migration from broadcast to narrowcast video has been extensively deployed in the forms of

switched digital video (SDV) and video on demand (VOD). Less popular linear channels are

offered through SDV while a library of video programs is offered through VOD. The number of

QAM channels used for SDV and VOD varies by market and demographics; in North America, a recent reference model shows approximately 20 channels for an SDV service group and 6

channels for a VOD service group.


SDV-enabled statistical gains result in spectrum savings and allow MSOs to offer more linear

channels to keep up with growing consumer demand for more varieties of video content. For a

given SDV service group, the benefit can only be realized with a segment of channels with

intermediate popularity, between the most viewed and the least viewed. The most viewed

channels will continue to be broadcast and the least viewed channels will only be transmitted

occasionally with low chance of benefiting from multicast. A significant portion of spectrum

savings is realized with the multicast implementation of SDV. However, in order to provide

personalization features, SDV would have to operate in unicast mode. Without multicast gains, a

significant amount of additional spectrum is required to maintain the same level of customer

experience achieved with multicast SDV.

Table 1 provides a summary of the currently available HFC migration tools described above.

Table 1 Existing HFC Migration Tools

Node Split
  Pros: Basic tool for fiber-deep migration; doubles DOCSIS bandwidth per node split; reduces the noise-funneling effect.
  Cons: Does not create more spectrum capacity; impacts the outside plant; splitting unbalanced nodes requires more resources.

Spectrum Expansion
  Pros: Creates spectrum capacity.
  Cons: Impacts the outside plant; requires equipment upgrades to cover the expanded spectrum range.

Analog Reclamation
  Pros: Creates spectrum capacity; no impact to the outside plant.
  Cons: Significant impact to subscribers; requires new CPE.

Conversion from MPEG-2 to MPEG-4 AVC / HEVC
  Pros: Creates spectrum capacity; no impact to the outside plant.
  Cons: MPEG-4 STBs are still being deployed; linear video is only distributed in MPEG-2; migration steps include MPEG-2 and MPEG-4 simulcast.

SDV
  Pros: Creates spectrum capacity; no impact to the outside plant; allows the addition of long-tail programs.
  Cons: Only beneficial for intermediate-popularity channels; significant reduction in spectrum capacity gains if switched from multicast to unicast.

VOD
  Pros: Creates spectrum capacity; no impact to the outside plant.
  Cons: Limited use in a linear channel and DVR consumption environment.

2.2 Emerging Tools

There are two key emerging HFC migration tools: DOCSIS 3.1 and Ethernet Node. Both tools

deviate from the incremental enhancement path of the existing tool set by introducing mature

technologies from other telecommunications industry segments into the HFC network. As this new

class of migration tools is put into practice, other innovative tools from the global ecosystem will

surely follow.

2.2.1 DOCSIS 3.1

DOCSIS 3.1 is being developed as the next phase of HFC DOCSIS evolution specifically to address

the exponential bandwidth growth observed in the cable plant. Standard development is expected

to complete in 2014 with initial deployments targeted for 2016. Its principal goal of network

capacity optimization is achieved by supporting higher order modulations such as 4096-QAM

enabled by two technological enhancements over DOCSIS 3.0:

1. Orthogonal Frequency-division Multiplexing (OFDM), which encodes digital data on

multiple sub-carrier frequencies.

2. Low-Density Parity Check (LDPC), a high-efficiency Forward Error Correction (FEC) technique used in many high-speed communication standards.

DOCSIS 3.1 is designed to be backward compatible with DOCSIS 3.0, with a requirement to support MAC-layer bonding of DOCSIS 3.1 and DOCSIS 3.0 PHY channels. The higher modulation order is targeted at 4096-QAM (12 bits/Hz), with an option to reach 16384-QAM (14 bits/Hz), making it capable of supporting up to 10Gbps downstream and 1Gbps upstream capacity. The OFDM-based

technologies incorporated into DOCSIS 3.1 are widely adopted in other RF technology standards,

making DOCSIS 3.1 a significant first step toward the convergence of HFC and other access

network technologies.
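As a rough illustration of where a 10Gbps downstream figure can come from, the sketch below multiplies spectral efficiency by channel width; the ~15% overhead factor and the five-channel forward spectrum assumption are illustrative assumptions made here, not values taken from the DOCSIS 3.1 specification or this paper:

```python
# Back-of-envelope DOCSIS 3.1 downstream capacity estimate (illustrative only;
# the overhead factor below is an assumption, not a specification value).
OFDM_CHANNEL_HZ = 192e6          # one DOCSIS 3.1 downstream OFDM channel
BITS_PER_HZ_4096QAM = 12         # 4096-QAM spectral efficiency cited in the text
OVERHEAD = 0.15                  # assumed FEC/pilot/guard overhead (~15%)

per_channel_bps = OFDM_CHANNEL_HZ * BITS_PER_HZ_4096QAM * (1 - OVERHEAD)
print(f"~{per_channel_bps/1e9:.1f} Gbps per 192 MHz OFDM channel")    # ~2.0 Gbps

# Roughly five such channels (about 1 GHz of forward spectrum) approach the
# 10 Gbps downstream figure quoted for DOCSIS 3.1.
print(f"~{5*per_channel_bps/1e9:.1f} Gbps with five OFDM channels")   # ~9.8 Gbps
```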

Some challenges remain with the deployment of DOCSIS 3.1, including:

1. The lower CNR of an HFC network can limit the performance of DOCSIS 3.1. For example, the 4096-QAM modulation order requires a CNR of at least 41 dB, but the typical CNR supported by current HFC plant is around 38-39 dB.

2. Limited spectrum is available when DOCSIS 3.1 coexists with DOCSIS 3.0, especially in the upstream.


Figure 2 shows a series of spectrum diagrams to illustrate how spectrum usage may evolve from

2014 to 2022. Based on recent industry discussions, only a mid-split to 85MHz is expected with the

initial deployment of DOCSIS 3.1 in 2016.

[Figure content omitted: spectrum maps for 2014, 2016, and 2022. The return band (STB return, DOCSIS 3.0/3.1 return, CBR voice return, status monitoring) extends to 42 MHz in 2014, 85 MHz in 2016, and 200 MHz in 2022. The forward band (broadcast SDTV, broadcast HDTV, UHDTV/narrowcast/unicast, SDV, VOD, DOCSIS forward, VoIP forward, IPTV) reaches 1000 MHz in 2014 and 2016 and extends beyond 1200 MHz in 2022, with the analog segment retired after 2014.]

Figure 2 Spectrum Map Evolutions from 2014 to 2022

2.2.2 Ethernet Node

Ethernet Node is a combined DOCSIS MAC and PHY solution connecting to a controller via a packet

digital optical network. The Ethernet Node is designed as a drop-in replacement for the

conventional Fiber Node and supports dedicated DOCSIS 3.0/3.1 bandwidth and legacy video

services. It originates as the Coax Media Converter Type I (CMC I) described in the C-DOCSIS

System Specification, a global variant of DOCSIS technology sponsored by the State Administration of Radio, Film, and Television (SARFT) of China and CableLabs.

Originally designed for the vertical Multiple-Dwelling-Unit (MDU) market, the Ethernet Node is

evolving to become a general purpose Fiber-to-the-Node (FTTN) solution for the migration of HFC

network. While skepticism remains about the potential operational and maintenance impacts of embedding CMTS functions in the OSP, the role of the Ethernet Node in the longer-term HFC evolution is clear: it will be the only access network element remaining in the virtualized architecture of the HFC evolution endgame.

Table 2 provides a summary of the emerging HFC migration tools described above.

Table 2 Emerging HFC Migration Tools

DOCSIS 3.1
  Pros: Higher modulation orders and improved forward error correction (FEC) provide significant bandwidth gains; granular subcarriers provide better immunity to ingress noise for higher upstream bandwidth; well-defined migration path.
  Cons: Higher CNR is required to reach optimal performance; the large channel width requires a minimum of 96MHz of upstream and 192MHz of downstream spectrum; CPE pricing will drive initial deployment.

Ethernet Node
  Pros: Highest cost-performance, driven by the global market; optimal solution for MDU and rural segments; one of the few remote solutions to realize the full potential of DOCSIS 3.1; end-to-end compatibility for the delivery of legacy QAM video and other services; drop-in replacement for the fiber node; synergy with FTTH rollout; the only HFC access network element to remain in the virtualized HFC evolution endgame.
  Cons: Potential impacts to OSP operational practices; insufficient field deployment experience.

3 A Facility View of the HFC Network

A telecommunications facility is generally defined as any part of the infrastructure of a

telecommunications network. The key components of the HFC network include the hub facility

and the node facility. A single hub facility is connected to a number of node facilities spread out

over the serving area. Telecommunications facilities are heavily regulated structures and assets.

Optimization and consolidation of existing telecommunications facilities are part of the long-term

objectives for most network operators.

The size of a hub facility depends on the number of households passed and varies greatly

across HFC networks. With exponential growth in bandwidth demand occurring in all markets, hub

facilities of all sizes are hitting a performance ceiling in terms of space, power and cooling. While

integrated CCAP can be deployed to address these resource issues in many medium and large hub

facilities, the solution is less straightforward for many smaller hub facilities. In these cases, the

logical next step is to look toward the node facility.


Figure 3 A Facility View of the HFC Network

3.1 HFC Node Facility

With multi-gigabit bidirectional access capacity connecting to over 90% of homes in some markets,

the HFC outside plant is the crown jewel of the HFC network. The HFC outside plant has seen its

share of upgrades over the years, but the plant topologies remain essentially unchanged.

The HFC outside plant consists of individual coaxial plants, each with a single fiber node connecting to a number of subscriber homes. From the fiber node down to the drop cables, the coaxial plant is a fully

powered broadband access facility with a huge potential for performance optimization. With a

minimum of 2 fibers connecting to each fiber node, it can be readily adapted to a number of fiber

access topologies.

3.1.1 Physical Elements

The HFC node facility is a coaxial plant with both active and passive elements. In addition to the

fiber node, a number of other physical elements are parts of a typical coaxial plant. These physical

elements include active components such as Trunk Amplifiers, Bridger Amplifiers, Line Extenders,

and Uninterruptable Power Supplies (UPS), as well as passive components including coaxial cable,

splitters and taps.

The physical size of the node facility is a function of HHP and serving group density. Cascaded

amplifiers are used as needed to propagate upstream and downstream RF signals at high gain while minimizing distortion. Unity gain is applied to every cable segment between

any two amplifiers and between the fiber node and the adjacent amplifiers. Unity gain is set up

and maintained with return signals arriving at each amplifier/node input port at the same level

while each amplifier/node is set to output the forward signals at the same level.

The UPS provides primary and backup power to the actives in the HFC node facility. Typically, the UPS is

installed in an outdoor enclosure, which is either pole-mounted or ground-mounted depending on

the rules of local regulation and utilities. Most UPSs can be monitored via an embedded cable

modem (CM). HFC power consumption is a significant part of OPEX for the MSOs. Improving

overall efficiency while maintaining network reliability is a continuous tradeoff exercise as HFC


migration continues.

[Figure content omitted: layout of an HFC node facility, showing fibers and a power supply feeding the fiber node, followed by a trunk amplifier, bridger amplifier, line extenders, and taps along the coaxial plant.]

Figure 4 Physical Elements of HFC Node Facility

3.1.2 Service Attributes

A finely tuned node facility with amplifiers and node set for unity gain provides the foundation for

continuing migration of the HFC network. Enhancements and innovations will continue to be

introduced to expand the service delivery capacity of the node facility. With the coming

introduction of DOCSIS 3.1, downstream and upstream bandwidth capacities available in the cable

plant will be determined by a myriad of factors driven by the DOCSIS 3.1 migration strategy. In

addition to the typical upstream/downstream spectrum split changes and spectrum expansion

toward 1GHz, DOCSIS 3.0 and 3.1 channel mix will also be part of the equation.

During this migration process, changes in spectrum assignment, as reflected by updates in channel

line-up, will be the result of a continuous trade-off exercise to meet the changing service

requirements. In addition to expanding DOCSIS bandwidth, simulcast required for QAM video also

presents a significant challenge in spectrum availability. Any change in spectrum utilization must

be coordinated with the deployment of DOCSIS 3.1 capable cable modems and gateways, as well

as other CPEs such as MPEG-4 capable STBs. The overwhelming impact that legacy CPE support has on an MSO's migration plan cannot be overstated.

A full accounting and continuous updates of availability and performance level of various

resources within the node facility will be vital to the successful planning and execution of each

migration step. These monitoring and maintenance functions of the node facility can be realized

with a suite of impairment identification capabilities as provided by Proactive Network

Maintenance (PNM). Compared to the current approach of using a centralized PNM server, a

distributed implementation of PNM where data collection and analysis are dedicated to the node

facility represents a more efficient framework with an added dimension of facilitating a new set of


localized features and innovations.

Figure 5 Service Attributes of HFC Node Facility

4 The Node Facility Model

The node facility is the principal building block of the HFC network, most commonly characterized by the number of HHP. Although the actual number of HHP per node varies significantly, from more than 1000 to less than 100 across different market segments, the vast majority of fiber nodes today are connected to 700 or fewer HHP, with 500 HHP being the most prevalent. In the fiber-deep case, where trunk fibers have been extended to the last amplifier, the fiber node is connected via passives and coaxial cable directly to the subscriber premises, making amplifiers an optional element of the node facility.

While the node facility has always been a modular extension of the hub facility since the fiber node

was first deployed, the tight coupling of the two facilities driven by the centralized hub-based

solution has kept the speed of the migration within the node facility at a measured pace. With the

introduction of emerging tools and capabilities including DOCSIS 3.1 and Ethernet Node, the node

facility is becoming the focal point for the next cycle of HFC migration. Starting with the ongoing

preparation for DOCSIS 3.1 rollout, the entire cable industry is analyzing every aspect of the node

facility to seek out the most effective migration path. The clarity of the inner workings of the HFC

node facility resulting from these analyses is truly unprecedented, and will provide a sound base

for future innovations from a newly expanded ecosystem.

Figure 6 shows a block diagram of a node facility used in this model. For this modeling exercise, an

Ethernet node with embedded DOCSIS 3.1 capabilities is used. Using XG-PON1/TWDM-PON as an example, the dual fiber connection to the hub is capable of delivering up to 10 Gbps of

downstream and 2.5 Gbps of upstream bandwidth, as well as broadcast and narrowcast QAM

video and Out-of-Band (OOB) forward and return for the support of legacy video services.


[Figure content omitted: block diagram of the Ethernet Node facility model. The hub connects via PON or P2P fiber to the Ethernet Node, which contains an optical TX/RX with WDM, DOCSIS MAC/PHY, mixer/amplifier stages, and power circuitry, and feeds the HHP of the node facility; a power supply powers the node.]

Figure 6 Ethernet Node Facility Model

4.1 The Bandwidth Model

Recent data confirm that downstream (DS) bandwidth continues to grow at 50% CAGR, principally driven by increasing consumer demand for streaming video. Whether growth will continue at this rate is subject to debate, with one opinion holding that the CAGR may settle at 35% due to higher efficiency in various aspects of streaming video. The current consensus for the upstream (US) bandwidth growth rate is 30% CAGR.

There are two principal drivers for the persistent demand in downstream bandwidth:

1. Cable operators’ continuing migration toward IP video

2. Increasing popularity of over-the-top video and other applications

A recently published 50% CAGR bandwidth growth projection for the next 7 years [4] is shown in Figure 7. As noted in the diagram, there are two key parameters:

1. Maximum speed: drives the bonding group size, which equals the size of the DOCSIS pipe between the CMTS and the connected CMs.

2. Weighted average speed: drives the total bandwidth needed. This is the sustained bandwidth required per subscriber. For a given service group, the sum of the weighted average speeds for all subscribers is adjusted by a concurrency ratio to determine the total amount of bandwidth required for the service group.


Figure 7 Downstream Bandwidth Growth Projection 50% CAGR

For the bandwidth modeling, total bandwidth needed is calculated for five sizes of service group:

700, 500, 350, 175 and 60. A take rate of 40% is used for all service groups. The weighted average

speed is roughly divided between HSI and IP video. A 10:1 oversubscription ratio is applied to HSI

bandwidth for all service groups. A 10:7 oversubscription ratio is applied to IP video bandwidth

and adjusted with a sliding scale of multicast gains based on service group size. The overall

concurrency ratios are calculated by combining the results from both services, as shown in the following formula, illustrated for 700 HHP: (10Mbps*1/10 + 8Mbps*7/10*(1-25%)) / 18Mbps = 29%.

Table 3 Concurrency Ratio per Service Group

Service Group Size (HHP)                               700     500     350     175     60
Weighted Average Bandwidth per Home                    18Mbps (10Mbps HSI + 8Mbps IP video), all groups
Average HSI Bandwidth / Oversubscription Ratio         10Mbps / 10:1, all groups
Average IP Video Bandwidth / Oversubscription Ratio    8Mbps / 10:7, all groups
IP Video Multicast Gains                               25%     20%     15%     10%     0
Concurrency Ratio                                      29%     30%     32%     34%     37%
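The concurrency ratios in Table 3 can be reproduced directly from the stated assumptions, as the following sketch shows (the per-group multicast gains are taken from the table above):

```python
# Reproduce the concurrency ratios of Table 3 from the stated assumptions.
HSI_MBPS, HSI_OVERSUB = 10, 10          # 10 Mbps HSI at 10:1 oversubscription
IPV_MBPS, IPV_CONCURRENCY = 8, 0.7      # 8 Mbps IP video at 10:7 (70% concurrent)
AVG_MBPS = HSI_MBPS + IPV_MBPS          # 18 Mbps weighted average per home

multicast_gain = {700: 0.25, 500: 0.20, 350: 0.15, 175: 0.10, 60: 0.0}

for hhp, gain in multicast_gain.items():
    concurrent = HSI_MBPS / HSI_OVERSUB + IPV_MBPS * IPV_CONCURRENCY * (1 - gain)
    print(f"{hhp} HHP: concurrency ratio = {concurrent / AVG_MBPS:.0%}")
# 700 HHP: 29%, 500 HHP: 30%, 350 HHP: 32%, 175 HHP: 34%, 60 HHP: 37%
```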

While the formula used to determine the concurrency ratio is based on a set of simplified

assumptions made for this specific set of service groups, the resulting ratios shown in Table 3 are

reflective of the increasing utilization of broadband access. The concurrency ratio, especially with regard to IP video, has a direct impact on the quality of the services delivered and is hence a critical consideration for MSOs.

The concurrency ratios shown here are not intended as a recommendation or to reflect any actual

implementations.


The upstream bonding group for a typical service group size of 700HHP includes three 6.4MHz

ATDMA channels (2X64QAM and 1X16QAM), providing 80Mbps of bandwidth capacity (30Mbps

each for the two 64QAM channels and 20Mbps for the 16QAM channel). For upstream bandwidth

modeling, the 80Mbps capacity is used as the starting bandwidth demand for 700HHP. The starting

bandwidth demands for other service groups are generated proportionally based on the size of the service group.

4.2 Downstream Bandwidth Modeling Result

The result of downstream bandwidth modeling with 50% CAGR is shown in Figure 8, which includes

bandwidth demand projections for all five service groups. Also shown in the diagram are the

bandwidth capacities of the following bonding groups representing both near-term and

longer-term options:

1. Maximum size of a DOCSIS 3.0 bonding group with 32 SC-QAM channels (Annex B 6MHz, 40Mbps data rate): 1280Mbps

2. Minimum size of a DOCSIS 3.1 bonding group with 32 SC-QAM channels (a commercial target, 8 channels higher than the minimum of 24 SC-QAM channels specified by CableLabs) and 2 OFDM channels: 5280Mbps

Some key observations are as follows:

1. For the typical 500HHP service group, downstream bandwidth demand will exceed the

DOCSIS 3.1 bonding group capacity by mid-2018.

2. For the fiber-deep 175HHP service group, the DOCSIS 3.0 bonding group will be sufficient

until mid-2017, and with a DOCSIS 3.1 upgrade, bandwidth demand can be met until the end

of 2020.

[Figure content omitted: downstream demand curves at 50% CAGR for the 700/500/350/175/60 HHP service groups, 2014-2021, plotted in 10log(Mbps) against the 32 SC-QAM and 32 SC-QAM + 2x192MHz OFDM bonding group capacities.]


Figure 8 Downstream Bandwidth Demand per Service Group @ 50% CAGR for 2015-2021

As stated earlier, some industry observers see downstream bandwidth growth rate moderating at

35% CAGR with higher efficiency in streaming video. The result of downstream bandwidth

modeling with 35% CAGR is shown in Figure 9.

[Figure content omitted: downstream demand curves at 35% CAGR for the 700/500/350/175/60 HHP service groups, 2014-2021, plotted in 10log(Mbps) against the 32 SC-QAM and 32 SC-QAM + 2x192MHz OFDM bonding group capacities.]

Figure 9 Downstream Bandwidth Demand per Service Group @ 35% CAGR for 2015-2021

With this model, downstream bandwidth demand of the typical 500HHP service group can be met until late 2019, which means a node split is still required for the last 2 years of the modeling

period. On the other hand, a migration path based on the 175HHP service group begins to emerge.

With 175HHP, downstream bandwidth can be met with the 32-channel DOCSIS 3.0 bonding group

until early 2018. With a DOCSIS 3.1 upgrade, bandwidth demand can be met well into 2022.

4.3 Upstream Bandwidth Modeling Result

The result of upstream bandwidth modeling with 30% CAGR is shown in Figure 10, which includes

bandwidth demand projections for all five service groups. Also shown in the diagram are the

bandwidth capacities of the following bonding groups representing both near-term and

longer-term options:

1. Three 6.4MHz ATDMA channels (2x64QAM @ 6.4MHz, 1x16QAM @ 6.4MHz): 80Mbps

2. Eight 6.4MHz ATDMA channels: 200Mbps

3. Eight 6.4MHz ATDMA channels and one 24MHz OFDMA channel: 400Mbps

Some key observations are as follows:


1. For the typical 500HHP service group, upstream bandwidth demand will exceed the DOCSIS

3.1 bonding group capacity after mid-2021.

2. For the fiber-deep 175HHP service group, the existing 3 SC-QAM channels will be sufficient

until mid-2019. With 8 SC-QAM channels and a plant upgrade to an 85MHz mid-split, bandwidth demand can be met until close to the end of 2022 without DOCSIS 3.1.

3. With node splits to 175HHP, the 85MHz mid-split upgrade can be delayed until 2019.

4. A plant upgrade to 200MHz high-split is not required for the modeling period.

[Figure content omitted: upstream demand curves at 30% CAGR for the 700/500/350/175/60 HHP service groups, 2014-2021, plotted in 10log(Mbps) against the 3 SC-QAM, 8 SC-QAM, and 8 SC-QAM + 24MHz OFDMA bonding group capacities.]

Figure 10 Upstream Bandwidth Demand per Service Group @ 30% CAGR for 2015-2021

4.4 Bandwidth Modeling Summary

The primary objective of the bandwidth modeling is to determine if, how and when HFC

bandwidth demand can be met by existing and emerging migration tools. With the modeling

exercise, a template is created, which can be readily adapted to specific HFC scenarios to create a

suitable migration strategy.

Table 4 contains a summary of upstream and downstream bandwidth demands projected for each

service group at the end of the 7-year modeling period in 2021. Bandwidth demands that can be

supported by the maximum bonding group size of 5280Mbps downstream and 400Mbps upstream

are also identified.

Table 4 Total Bandwidth Demand per Service Group in 2021

Service Group Size (HHP)        700          500          350          175         60
DS Bandwidth with 50% CAGR      24.97Gbps    18.45Gbps    13.78Gbps    7.32Gbps    2.73Gbps
DS Bandwidth with 35% CAGR      11.94Gbps    8.82Gbps     6.59Gbps     3.50Gbps    1.31Gbps
US Bandwidth with 30% CAGR      502Mbps      358Mbps      251Mbps      125Mbps     43Mbps
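For reference, the Table 4 projections can be reproduced from the model assumptions described in Section 4.1 (a sketch; 2014 is taken as the base year and compounded over the seven modeled years, so minor rounding differences from the table are possible):

```python
# Reproduce the 2021 demand projections of Table 4 from the model assumptions.
TAKE_RATE = 0.40                 # 40% take rate for all service groups
AVG_MBPS = 18                    # weighted average downstream bandwidth per home
YEARS = 7                        # modeling period 2015-2021, compounded from 2014
concurrency = {700: 0.29, 500: 0.30, 350: 0.32, 175: 0.34, 60: 0.37}  # Table 3
US_BASE_700 = 80                 # Mbps, three-channel ATDMA bonding group for 700 HHP

for hhp, ratio in concurrency.items():
    ds_base = hhp * TAKE_RATE * AVG_MBPS * ratio   # 2014 downstream demand, Mbps
    us_base = US_BASE_700 * hhp / 700              # upstream scaled by group size
    ds_50 = ds_base * 1.50 ** YEARS
    ds_35 = ds_base * 1.35 ** YEARS
    us_30 = us_base * 1.30 ** YEARS
    print(f"{hhp} HHP: DS@50% {ds_50/1000:.2f} Gbps, "
          f"DS@35% {ds_35/1000:.2f} Gbps, US@30% {us_30:.0f} Mbps")
# e.g. 175 HHP: DS@50% 7.32 Gbps, DS@35% 3.50 Gbps, US@30% 125 Mbps (as in Table 4)
```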

Other factors to consider are DOCSIS 3.1 upgrade and 85MHz mid-split upgrade. DOCSIS 3.1 is not

expected to start until 2016. Therefore, it is important to know, for each service group, when the

bandwidth demand will exceed the capacity of the maximum DOCSIS 3.0 bonding group, thus

requiring a DOCSIS 3.1 upgrade. Upgrade to 85MHz mid-split or 200MHz high-split is generally

considered the next migration step leading up to, or as part of, the DOCSIS 3.1 upgrade cycle, so it

is also important to know when a mid-split upgrade becomes necessary. The timeline information

is summarized in Table 5, which also highlights the timelines on or after 2016 to illustrate alignment with the start of the DOCSIS 3.1 upgrade cycle.

Table 5 Migration Time-line per Service Group

Service Group Size (HHP)                         700     500     350     175     60
DS B/W demand exceeds D3.0 BG @ 50% CAGR         2014    2015    2015    2017    2019
DS B/W demand exceeds D3.0 BG @ 35% CAGR         2014    2015    2016    2018    2021
Mid-split upgrade                                2014    2015    2016    2019    2022

Based on the results of the bandwidth modeling analysis, it is clear that a node split to 175HHP would be

necessary to meet the annual increases in bandwidth demand through 2021. Depending on the

current node size and taking into account a variety of market-specific factors, node split to 175HHP

should start between now and 2016, followed by mid-split and DOCSIS 3.1 upgrades as shown in

Table 6.

Table 6 Recommended Migration Steps toward 175HHP

Current Service Group Size (HHP)     700     500     350     175     60
Step 1: Node split to 175HHP         2014    2015    2016    N/A     N/A
Step 2: Upgrade to mid-split         2018    2021
Step 3: Upgrade to DOCSIS 3.1        2018    2018    2019    2022

Alternatively, a node split to 60HHP would defer the DOCSIS 3.1 and mid-split upgrades until 2021, as

shown in Table 7.

Table 7 Recommended Migration Steps toward 60HHP

Current Service Group Size (HHP)               700     500     350     175     60
Step 1: Node split to 60HHP                    2014    2015    2016    2018    N/A
Step 2: Upgrade to DOCSIS 3.1 and mid-split    2021    2021    2021    2021    2021

5 Ethernet Node

In the past few years, there have been many discussions about the suitability of deploying

Ethernet Nodes in the HFC network. Concerns were raised primarily due to the physical-layer nature of

legacy cable plant and the potential operational and maintenance impacts of adding electronics in

the OSP. These are valid concerns given the history of HFC network evolution but a much different

picture emerges when the role of the Ethernet Node is considered in a comprehensive and

forward-looking context.

Although a fairly recent addition to the HFC network, the Ethernet Node is a conventional

Ethernet Bridge with a built-in media converter, extensively used in media conversion applications

in both wireless and wireline access networks. Many such devices, including Wi-Fi

access points, small cells and Fiber-to-the-Distribution Point (FTTdp) ONUs, are designed for

outdoor applications, and are built upon an evolving, mature access product platform with a

proven reliability record.

Overall, access network architecture has been trending toward incremental build-out of digital

fiber connecting to purpose-built media converters dedicated to fixed-size serving areas. This

trend is accelerating as both FTTH and wireless networks, having completed the initial phases of

deployment, are moving toward a more granular and cost-effective architecture to increase

performance and coverage. It is especially apparent in the increasing deployment of metro cells in

3G and 4G networks as illustrated in Figure 11 with one forecast predicting more than 5 million

metro cells to be shipped in 2017 [5]. The benefits of deploying metro cells in the wireless network

are well-understood to include the following:

1. Cost-effective augmentation of the macro network

2. Granular and flexible deployment to add bandwidth and coverage as needed

3. Service innovations in a distributed and localized environment


[Figure content omitted: dense urban area traffic forecast (source: Huawei simulation), 2012-2018, showing connected users per macro sector and LTE capacity added via HotZone connections, spectrum, and HetNet, with the macro-to-small-cell ratio growing from 1:2 to 1:7 to 1:12.]

Figure 11 Small Cell Growth Rate in Dense Urban Areas

Much in the same way that small cells with PON backhaul augment the macro wireless network [8], the application of the Ethernet Node in the HFC network is expected to bring a similar set of benefits as a complementary solution to the centralized HFC architecture.

Besides being the centerpiece of the access network evolutionary trend, the Ethernet Node also holds a unique position in the HFC evolution path. When that path is considered in its entirety, from the near-term focus of enhancing bandwidth and spectrum utilization to the eventual virtualized architecture, the Ethernet Node stands out as the only network element that will persist through the end of the evolution cycle. Seen this way, it would be prudent for MSOs to evaluate various aspects of the Ethernet Node to determine how it can be introduced into their HFC networks.

This section presents a basic impact analysis of deploying the Ethernet Node in the HFC network

by comparing network maintenance and power consumption aspects of the Ethernet Node and

Fiber Node.

5.1 Network Maintenance Considerations

Compared to the traditional Fiber Node, the Ethernet Node incorporates electronics to support three

additional functions: PON/P2P ONU, Ethernet Bridge, and a simplified DOCSIS CMTS. Among these

functions, PON/P2P ONU and Ethernet Bridge are both mature technologies backed by a large

ecosystem of vendors. The only exception is DOCSIS CMTS, which has several Field-programmable


Gate Array (FPGA)-based solutions with limited commercial chipset offerings. However, with the

coming introduction of DOCSIS 3.1, the availability of CMTS chipsets is expected to improve, aided by

increasing adoption of Ethernet Nodes.

From a network management perspective, the Ethernet Node is centrally managed via an access

network element, which, in the near-term, aggregates both control and data planes for the

underlying Ethernet Nodes, and operates as a virtual CCAP. For the PON-based Ethernet Node, the OLT is the access network element, which also provides a highly scalable fiber access concentration

platform supporting evolution toward next-generation PON technologies.

In a PON-based system, the Ethernet Node is simply a variant of media converter developed on top of a mature FTTN Multiple-Any-media-Unit (MxU) platform, from which the Ethernet Node inherits a complete suite of field-proven maintenance features including system recovery, software upgrade and remote diagnostics. The addition of DOCSIS MAC and PHY functions to this platform may appear to be a complicated proposition considering the abundance of traditional CMTS features supported by integrated solutions. However, many of those legacy CMTS features

designed to improve scalability and performance of a centralized solution are simply not needed

for a distributed Ethernet Node designed for a small service group with up to a couple of hundred

subscribers. The simplified node-based deployment scenario makes it possible to develop Ethernet

Node as an optimized CMTS solution from the ground up. Consequently, an Ethernet Node

developed on top of a carrier-grade platform is expected to meet or exceed the reliability of

traditional fiber nodes.

Table 8 compares the network maintenance aspects of the Ethernet Node and the Fiber Node.

Table 8 Comparison of Network Maintenance Aspects of Fiber Node and Ethernet Node

Ethernet Node
  Pros: High optical power dynamic range; SNR not impacted by digital optics; more efficient RF monitoring and analysis for automated fault isolation and RF adjustments.
  Cons: Potential impact to OSP operational practices; reliability may decrease because of the more complex components involved.

Fiber Node
  Pros: Well-established maintenance record; mature technology with high reliability; no software download or configuration actions driven by a control interface.
  Cons: The AM optics link is sensitive to SNR; no management functions unless a special CM is embedded.


5.2 Power Consumption Analysis

The core hardware block diagrams of analog Fiber Node and digital Ethernet Node are shown in

Figure 12. The analog Fiber Node includes O/E and E/O converters, amplifier components, and an

optional embedded CM for management; the digital Ethernet Node includes a digital optical

module which can be a P2P optical transceiver or a PON ONU, a DOCSIS PHY/MAC implemented in

FPGA or Application-specific Integrated Circuit (ASIC), and an Analog Front End (AFE) including ADC,

DAC and RF circuits. Power consumption of an actual Fiber Node or Ethernet Node will be higher

due to additional components required to support product features such as multiple RF ports.

[Figure content omitted: core hardware block diagrams. Analog Fiber Node: analog fiber feeding an analog O/E receiver, downstream amplifier, return E/O transmitter, and optional managed CM. Digital Ethernet Node: Ethernet fiber feeding a digital optical module, PHY/MAC FPGA or ASIC, ADC/DAC, and RF circuits.]

Figure 12 Core Hardware Block Diagrams of Analog Fiber Node and Digital Ethernet Node

As shown in Table 9, the power consumption of the digital Ethernet Node is about 20W higher than that of the analog Fiber Node. The key reason is that the current generation of node-specific DOCSIS MAC/PHY chipsets consumes a relatively large amount of power. The DOCSIS MAC/PHY in existing Ethernet Node solutions is a digital circuit built from FPGA or ASIC chipsets. On average, the power consumption of FPGA devices decreases 15% every year, while each successive version of commercial chipsets is expected to achieve an even higher percentage of improvement in power performance.

Table 9 Power Consumption Comparison of Analog Fiber Node and Digital Ethernet Node

Analog Fiber Node:
  Analog O/E Receiver 6W; Analog E/O Transmitter 3W; Downstream Amplifier 11W; Managed CM (option) 5W; Power supply efficiency 85%.
  Total power: 23.5W without CM, 29.4W with CM.

Digital Ethernet Node:
  Digital Optical Module 1W (P2P) or 7W (PON); DOCSIS MAC/PHY (32x10) 22W; ADC and DAC 2W; RF Amplifier 11W; Power supply efficiency 85%.
  Total power: 42.3W (P2P), 49.4W (PON).
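The totals in Table 9 follow from dividing the summed component loads by the 85% power supply efficiency, as the short check below shows (component figures are those listed above):

```python
# Verify the Table 9 totals: input power = sum of component loads / supply efficiency.
EFFICIENCY = 0.85

fiber_node = {"O/E receiver": 6, "E/O transmitter": 3, "DS amplifier": 11}
ethernet_node = {"DOCSIS MAC/PHY": 22, "ADC/DAC": 2, "RF amplifier": 11}

def total_w(components_w, extra_w=0):
    return (sum(components_w.values()) + extra_w) / EFFICIENCY

print(f"Fiber Node, no CM:  {total_w(fiber_node):.1f} W")         # 23.5 W
print(f"Fiber Node, w/ CM:  {total_w(fiber_node, 5):.1f} W")      # 29.4 W
print(f"Ethernet Node, P2P: {total_w(ethernet_node, 1):.1f} W")   # 42.4 W (42.3 W in Table 9 after rounding)
print(f"Ethernet Node, PON: {total_w(ethernet_node, 7):.1f} W")   # 49.4 W
```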


The additional 20W of power required by each Ethernet Node is well within the overhead power

capacity of the node facility. As shown in the bandwidth modeling result, the deployment of

Ethernet Nodes is expected to be carried out together with a node split to 175HHP, which may

result in the removal of a number of amplifiers. Considering that a typical amplifier consumes about 50W of power, the additional power needed to support the new Ethernet Nodes should be more than offset by the power saved during the node split.

5.2.1 Overall Power Consumption Comparison

While the introduction of the Ethernet Node has only a small impact on power consumption within the node facility, the overall power consumption benefit of a distributed solution to both the node facility and the hub facility is significant. Similar to the power savings realized by the introduction of I-CCAP, the amount of power saved increases as a hub is gradually converted from the existing M-CMTS to an OLT-based distributed Ethernet Node solution.

Table 10 illustrates the overall power savings of an OLT-based distributed Ethernet Node solution compared to an M-CMTS solution. Using a hub with 30000 HHP as an example, power consumption is calculated for 700/500/350/175/60 HHP per node, each served with a 16x4 DOCSIS 3.0 bonding group. As expected, overall power savings increase dramatically with fiber-deep service group sizes of 175 HHP or fewer.

Table 10 Power Consumption Comparison: M-CMTS vs. Distributed Ethernet Node

700HHP per node (43 nodes)
  M-CMTS solution: Hub chassis (1 CMTS core + 1 UEQAM + 43 TX/RX) = 2000W + 3000W + 430W = 5430W; Nodes: ~80W x 43 = 3440W
  Ethernet Node solution: Hub chassis (1 OLT + 43 TX/RX) = 900W + 430W = 1330W; Nodes: additional 20W over fiber node x 43 = 860W
  Total power saved: 3.24kW

500HHP per node (60 nodes)
  M-CMTS solution: Hub chassis (1 CMTS core + 1 UEQAM + 60 TX/RX) = 2000W + 3000W + 600W = 5600W; Nodes: ~80W x 60 = 4800W
  Ethernet Node solution: Hub chassis (1 OLT + 60 TX/RX) = 900W + 600W = 1500W; Nodes: additional 20W x 60 = 1200W
  Total power saved: 2.9kW

350HHP per node (85 nodes)
  M-CMTS solution: Hub chassis (1 CMTS core + 1 UEQAM + 85 TX/RX) = 2000W + 3000W + 850W = 5850W; Nodes: ~80W x 85 = 6800W
  Ethernet Node solution: Hub chassis (1 OLT + 85 TX/RX) = 900W + 850W = 1750W; Nodes: additional 20W x 85 = 1700W
  Total power saved: 2.4kW

175HHP per node (170 nodes)
  M-CMTS solution: Hub chassis (2 CMTS core + 2 UEQAM + 170 TX/RX) = 11.7kW; Nodes: ~80W x 170 = 13.6kW
  Ethernet Node solution: Hub chassis (2 OLT + 170 TX/RX) = 3.5kW; Nodes: additional 20W x 170 = 3.4kW
  Total power saved: 4.8kW

60HHP per node (500 nodes)
  M-CMTS solution: Hub chassis (4 CMTS core + 8 UEQAM + 500 TX/RX) = 37kW; Nodes: ~80W x 500 = 40kW
  Ethernet Node solution: Hub chassis (6 OLT + 500 TX/RX) = 10.4kW; Nodes: additional 20W x 500 = 10kW
  Total power saved: 16.6kW
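
To make the arithmetic behind Table 10 explicit, the following sketch (Python) recomputes one hub-wide total and the resulting savings; every per-element figure is taken from the table itself (with 10W per optical TX/RX implied by the 430W entry for 43 TX/RX), so the sketch is only a restatement of the table's math.

```python
# Recompute the hub-wide power totals behind Table 10.
FIBER_NODE_W = 80          # approximate power of a conventional fiber node (Table 10)
ETHERNET_NODE_EXTRA_W = 20 # additional power of an Ethernet Node over a fiber node (Table 10)

def mcmts_total_w(hub_chassis_w: float, nodes: int) -> float:
    """M-CMTS solution: hub chassis plus conventional fiber nodes."""
    return hub_chassis_w + nodes * FIBER_NODE_W

def ethernet_node_total_w(hub_chassis_w: float, nodes: int) -> float:
    """Distributed solution: OLT-based hub plus Ethernet Nodes (fiber node + 20W each)."""
    return hub_chassis_w + nodes * (FIBER_NODE_W + ETHERNET_NODE_EXTRA_W)

# 700HHP row: 43 nodes, M-CMTS hub chassis 5430W, Ethernet Node hub chassis 1330W.
saving_w = mcmts_total_w(5430, 43) - ethernet_node_total_w(1330, 43)
print(saving_w)  # -> 3240, i.e. the 3.24kW saving listed in Table 10
```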

6 Overview of Baseband Optical Technologies

Compared to the analog optical links used to transport modulated RF signals in a conventional HFC network, baseband optical technologies offer a different value proposition that is increasingly relevant for HFC migration. One of the principal drivers for adopting digital optics is to extend the reach of the native IP content delivery pipe in the HFC network. Other benefits include noise immunity, plug-and-play installation, and simplified maintenance and diagnostic processes. Short of an FTTH deployment, connecting distributed Ethernet Nodes with digital optical links is an obvious next step.

Driven by steady global FTTx deployments during the past decade, the application of baseband optics in the access network has seen tremendous growth. With each new FTTx project, incremental advancements were made in various deployment techniques to lower construction costs, which in turn enabled more projects and drove down the cost of baseband optics and of overall FTTx solutions.

Depending on the market, up to 40% of FTTx deployments target MDUs, which require significantly higher uplink bandwidth than the SFU serving a single FTTH subscriber. The recent trend toward Gigabit access is driving demand for much higher uplink capacity, especially for MDU solutions. Since baseband optics is utilized in almost all FTTx solutions, these market factors are driving rapid development and commercialization of high-capacity, next-generation baseband optical technologies. A summary of current and future baseband optical technologies is provided in Table 11.

Table 11 A Summary of Baseband Optical Technologies

Technology | Capacity | Cost | Deployment & Maintenance | Logical Reach | Suitable for Ethernet Node
2.5G GPON | Shared 2.5G DS / 1.25G US | Low | Colorless ONU; co-existence on the same Optical Distribution Network (ODN); high scalability; easy deployment & maintenance | 60km | Limited (for <60HHP node)
10G GPON | Shared 10G DS / 2.5G US | Medium | - | 60km | Yes
TWDM-PON | Shared 40G DS / 10G US | Medium to High | - | 60km | Yes
1G EPON | Shared 1G / 1G | Low | - | Not specified | No
10G EPON | Shared 10G / 1G or 10G / 10G | Medium | - | Not specified | Yes
10G P2P | Dedicated 10G symmetric | Medium | Requires individual trunk fibers | 120km | Yes (dedicated 10G symmetric bandwidth)
10G DWDM Ring | Up to 40/80 wavelengths | High | Requires inventory to manage colored SFPs | 120km | -


6.1 Comparison of Technical Parameters of GPON and EPON

PON is the dominant baseband optical technology, accounting for 93% of FTTx deployments [6]. GPON and EPON are the two mainstream PON technologies. In 2013, worldwide EPON revenue was about one-third that of GPON [6].

At 2.4Gbps DS and 1.2Gbps US, GPON is widely deployed for FTTH in most markets as it offers better cost-performance than 1G/1G EPON, as shown in Table 13. Both XG-PON1 and 10G EPON solutions are now available, mostly targeted at residential MDU and various business service applications, including wireless backhaul.

Selecting a particular PON technology essentially commits the operator to a specific technology roadmap. As shown in Figure 13, the GPON technology roadmap extends beyond XG-PON1 to TWDM-PON, which will see initial deployment beginning in 2016 [6]. For EPON, the current technology roadmap ends with 10G EPON. Discussion of next-generation EPON has only just started with the formation of the IEEE 802.3 Industry Connections NG-EPON Ad Hoc in January 2014 [7].

[Figure: technology roadmap timeline - EPON: IEEE 802.3ah 1G EPON (2004), IEEE 802.3av 10G EPON (2009), IEEE NG-EPON (2017?); GPON: ITU G.984 GPON 2.4G DS / 1.2G US (2008), ITU G.987 XG-PON1 10G DS / 2.4G US (2010), ITU G.989 NG-PON2 TWDM-PON 40G DS / 10G US (2013)]

Figure 13 GPON and EPON Technology Roadmaps

Both GPON and EPON are designed to support the coexistence of all standard-defined PON signals on the same ODN. This is accomplished by assigning different optical wavelengths to all upstream and downstream signals. In the case of GPON, all three generations of PON signals, both upstream and downstream, can coexist on the same ODN, as shown in Figure 14.


[Figure: GPON OLT (2.5G/1.25G), XG-PON1 OLT (10G/2.5G, 1577nm DS / 1270nm US), and TWDM-PON OLT (40G/10G, tunable) at the hub are combined through a WDM MUX onto a common splitter and ODN, serving GPON, XG-PON1, and TWDM-PON ONUs]

Figure 14 GPON, XG-PON1, and TWDM-PON Coexistence

However with EPON, the wavelength bands of 1G and 10G EPON upstream signals overlap as

shown in Table 12. In order to support 1G and 10G EPON coexistence, the upstream EPON channels

are separated in the time domain, commonly referred to as dual-rate TDMA, which also requires

the use of a dual-rate receiver in the 10G EPON OLT line card.

Table 12 Downstream and Upstream Wavelength Bands of GPON and EPON

Band | GPON | XG-PON1 | TWDM-PON | EPON | 10G EPON
DS Wavelength Band | 1490 nm | 1575 - 1580 nm | 1596 - 1603 nm | 1480 - 1500 nm | 1575 - 1580 nm
US Wavelength Band | 1310 nm | 1260 - 1280 nm | 1524 - 1544 nm | 1260 - 1360 nm | 1260 - 1280 nm
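
As a quick way to see why the EPON upstream channels require dual-rate TDMA while the GPON-family generations coexist purely in wavelength, the following sketch (Python) checks the upstream bands from Table 12 for overlap; the +/-20 nm window assumed around the 1310 nm GPON upstream wavelength is an illustrative assumption, not a figure from this document.

```python
# Upstream wavelength bands from Table 12 (nm). GPON is listed as a single 1310 nm
# wavelength; an illustrative +/-20 nm window is assumed here for the overlap check.
US_BANDS_NM = {
    "GPON": (1290, 1330),
    "XG-PON1": (1260, 1280),
    "TWDM-PON": (1524, 1544),
    "EPON": (1260, 1360),
    "10G EPON": (1260, 1280),
}

def overlaps(a, b):
    """True if the two (low, high) wavelength ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

print(overlaps(US_BANDS_NM["EPON"], US_BANDS_NM["10G EPON"]))    # True  -> dual-rate TDMA required
print(overlaps(US_BANDS_NM["GPON"], US_BANDS_NM["XG-PON1"]))     # False -> coexistence in wavelength
print(overlaps(US_BANDS_NM["XG-PON1"], US_BANDS_NM["TWDM-PON"])) # False -> coexistence in wavelength
```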

A summary of the technical parameters of current-generation GPON and EPON is shown in Table 13.

Table 13 Technical Parameters of GPON and EPON

Feature | GPON | EPON | XG-PON1 | 10G EPON
Standard | ITU G.984 | IEEE 802.3 | ITU G.987 | IEEE 802.3av
Payload Protocol | Ethernet | Ethernet | Ethernet | Ethernet
Payload Encapsulation | Generic Encapsulation Method | Ethernet framing + tag | Generic Encapsulation Method | Ethernet framing + tag
Bandwidth (DS / US) | 2.4 / 1.2 Gbps | 1 / 1 Gbps | 10 / 2.4 Gbps | 10 / 1 & 10 / 10 Gbps
Number of ONTs | 32 / 64 | 16 / 32 | 32 / 64 / 128 / 256 | 16 / 32 / 64 / 128 / 256
Per-User Effective DS/US B/W (32-way split) | 71.8 / 35.9 Mbps | 29 / 26 Mbps | 265 / 70 Mbps | 265 / (265 or 26) Mbps
Logical Reach (km) | 60 | Not specified | 60 / 80 | Not specified
Differential Reach (km) | 20 / 40 | 20 | 20 / 60 | 20
Wavelengths (DS / US) | 1490 / 1310 nm | 1490 / 1310 nm | 1577 / 1270 nm | 1577 / 1270 or 1310 nm
RF Overlay (DS) | 1550 nm | 1550 nm | 1550 nm | 1550 nm
Line Coding | NRZ with scrambling | 8b/10b | NRZ with scrambling | 8b/10b (asymmetric), 64b/66b (symmetric)
Optics (power budget classes) | B: 25 dB, B+: 28 dB, C: 30 dB, C+: 32 dB | PX10: 20 dB, PX20: 24 dB, PX20+: 28 dB | N1: 29 dB, N2: 31 dB, E1: 33 dB, E2: 35 dB | PRX10/PR10: 20 dB, PRX20/PR20: 24 dB, PRX30/PR30: 29 dB
Laser On / Off | 25.7 ns | 512 ns | 25.7 ns | 512 ns
Framing | GPON TC (125 us) | 802.3 based | GPON TC (125 us) | 802.3 based
FEC | RS(255, 239) | RS(255, 239) | DS: RS(248, 216), US: RS(248, 232) | RS(255, 223)
Multicast | Yes | Yes | Yes | Yes
PON Security | AES-128 | AES-128 | AES-128 | AES-128
QoS | Ethernet & T-CONT | Ethernet | Ethernet & T-CONT | Ethernet
OAM | Ethernet, GEM, PLOAM, OMCI | Ethernet | Ethernet, GEM, PLOAM, OMCI | Ethernet
Protection | Type B & C | Not standardized | Type B & C | Not standardized
Coexistence | Yes | Yes (dual-rate TDMA) | Yes | Yes (dual-rate TDMA)
DOCSIS Provisioning | 2014 | Yes | 2014 | Yes
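
The per-user figures in Table 13 are essentially the usable shared capacity divided by the split ratio. The following sketch (Python) reproduces the GPON downstream value; the 2.488 Gbps line rate is the standard GPON rate behind the table's 2.4 Gbps entry, and the ~92% usable-throughput factor is an assumption chosen to match the table rather than a number stated in this document.

```python
def per_user_bw_mbps(line_rate_gbps: float, usable_fraction: float, split: int) -> float:
    """Effective per-subscriber bandwidth on a shared PON, in Mbps."""
    return line_rate_gbps * 1000 * usable_fraction / split

# GPON downstream: 2.488 Gbps line rate, ~92% usable (assumed), 32-way split.
print(round(per_user_bw_mbps(2.488, 0.923, 32), 1))  # -> 71.8 Mbps, matching Table 13
```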

6.2 TWDM-PON

As a next-generation PON technology, TWDM-PON provides the anchor for a comprehensive GPON technology roadmap. Compared to alternative solutions, TWDM-PON is less risky, less disruptive, and offers the best price-performance. Within the span of a few short years, through the collaboration of key industry partners, a number of key technical challenges were solved, as illustrated in Figure 15, further demonstrating the value of a mature GPON ecosystem.


[Figure: TWDM-PON module integration - OLT module evolved from four XFP modules plus an optical amplifier and Mux/Demux (v1) to the world's first 4-in-1 integrated OLT module (v2); ONU module evolved from a tunable filter plus tunable laser (v1) to the world's first 10G SFP+ ONU module for TWDM-PON with tunable Rx/Tx integrated into the BOSA (v2). Technical challenges resolved: small-size tunable BOSA (SFP+); low-cost tunable transmitter and receiver; wavelength calibration and alignment; limited space to accommodate Tx, Rx, Mux/Demux, and optical amplifier; burst-mode optical amplification; ASE noise impact on ONU registration; crosstalk among the four channels]

Figure 15 Key Technical Challenges Resolved in TWDM-PON

The mixed residential and business application of the PON network, as illustrated in Figure 16, is an important part of a network planning strategy to maximize ROI. Despite the increase in upstream traffic, service evolution driven by consumer behavior continues to demand higher downstream bandwidth. Asymmetric bandwidth is therefore the market requirement supported by the entire ecosystem, which translates to a higher cost for symmetric PON solutions. Symmetric bandwidth is an important part of the business service offering, but with typical service tiers ranging from 50Mbps to 200Mbps, the relatively small percentage of business subscribers can easily be covered by multi-gigabit asymmetric PON solutions.

Figure 16 Coexisting TWDM-PON and GPON Deployment Scenarios
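
As a rough sanity check of that claim, the following sketch (Python) totals the upstream commitment of a hypothetical mix of symmetric business subscribers on a single XG-PON1 port; only the 2.5G upstream capacity (Table 11) and the 50-200Mbps tier range come from the text, while the subscriber counts and oversubscription ratio are illustrative assumptions.

```python
# Can symmetric business tiers ride on an asymmetric PON's upstream?
XGPON1_US_GBPS = 2.5                       # shared upstream capacity (Table 11)

business_subs = {200: 5, 100: 10, 50: 20}  # hypothetical: tier in Mbps -> subscriber count
oversubscription = 4                       # assumed statistical multiplexing factor

committed_mbps = sum(tier * count for tier, count in business_subs.items())
required_gbps = committed_mbps / oversubscription / 1000

print(committed_mbps, required_gbps)  # 3000 Mbps committed -> 0.75 Gbps needed, well under 2.5 Gbps
```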


A single TWDM-PON OLT port provides 40Gbps DS and 10Gbps US bandwidth over a single ODN. In a mature FTTx deployment, each ODN is expected to connect directly to both residential and business subscribers, as well as a number of cell backhaul and FTTN locations. For the vast majority of FTTN applications serving fewer than a couple of hundred subscribers per node, this level of uplink capacity is sufficient to support bandwidth demand growth well into the next decade. With commercial launch anticipated in 2016, TWDM-PON positions the GPON network for future growth and solidifies GPON as the technology of choice for many operators.
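
To put "well into the next decade" into rough numbers, the following sketch (Python) grows a per-node demand at an assumed CAGR and compares the aggregate against the shared 40Gbps TWDM-PON downstream; only the 40G/10G capacity comes from the text, while the starting demand, growth rate, and number of Ethernet Nodes sharing one ODN are illustrative assumptions.

```python
# Project per-node downstream demand against a shared TWDM-PON downstream.
TWDM_PON_DS_GBPS = 40   # shared downstream capacity of one TWDM-PON port (from the text)
nodes_per_odn = 8       # assumed number of Ethernet Nodes sharing one ODN
demand_gbps = 0.3       # hypothetical per-node downstream demand today
cagr = 0.30             # assumed annual growth rate

for year in range(1, 13):
    demand_gbps *= 1 + cagr
    if demand_gbps * nodes_per_odn > TWDM_PON_DS_GBPS:
        print(f"Shared 40G downstream exhausted in year {year}")  # with these assumptions: year 11
        break
else:
    print("Capacity sufficient beyond the 12-year horizon")
```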

7 Summary

As demonstrated by the modeling results, HFC bandwidth demand for the next 7 years can be met with a combination of node splits, mid-split upgrades, and DOCSIS 3.1 upgrades. Most would agree that these migration steps can be carried out with either centralized or distributed solutions, depending on prevailing conditions. Less obvious is the macro trend of an accelerating transition toward a distributed architecture in the access network.

HFC coaxial plants, each consisting of hundreds or more modular node facilities, are ready to be optimized. However, the industry continues to focus on improving the efficiency of hub facilities in order to maintain the flexibility of a physical-layer outside plant. This least disruptive approach appears perfectly rational, but a closer examination of the competitive landscape presents a different view.

Within many node facilities, digital fibers are being installed by competing wireline and wireless operators offering services to the same pool of potential subscribers. The exponential growth in bandwidth demand is being addressed with innovations in both DOCSIS and FTTx solutions. Economies of scale continue to drive better performance, reliability, and pricing for FTTx Ethernet Node solutions, most of which are tailored today for service delivery over other access media.

With the introduction of OFDM, DOCSIS 3.1 represents the first major step toward the convergence of cable access and other telecommunications networks. Adoption of the Ethernet Node is another step down this convergence path toward better economies of scale. There is little debate about fiber being the future-proof access medium. Newly validated business cases are accelerating FTTx network build-outs and continue to tip the balance in favor of the larger scale. Now is the opportune time for MSOs to plan for the coming convergence of the HFC node facility and the packet-based digital optical network. The value of the Ethernet Node as one of the emerging HFC migration tools is underscored by the strategic implications it brings to the HFC network.


8 References

[1] Howald, Robert L., ARRIS. "Breathing New Lifespan into HFC: Tools, Techniques, and Optimizations," Cable Show Spring Technical Forum.

[2] http://advanced-television.com/2012/04/19/research-mpeg-4-drives-operator-investment/

[3] http://www.cnet.com/news/broadcom-chip-ushers-in-h-265-and-ultrahd-video/

[4] "Virtualizing CCAP: Developing a Distributed Access Architecture," a Light Reading webinar.

[5] http://www.prnewswire.com/news-releases/mobile-experts-latest-small-cell-market-forecast-predicts-carrier-grade-small-cells-will-outnumber-consumer-femtocells-in-2016-200211811.html

[6] "PON, FTTH, and DSL Aggregation Equipment and Subscribers Market Share, Size, and Forecasts: 4Q13 / CY13 Edition," Infonetics Research.

[7] http://www.ieee802.org/3/ad_hoc/ngepon/email/msg00012.html

[8] C. Ranaweera, M.G.C. Resende, K.C. Reichmann, P.P. Iannone, P.S. Henry, B-J. Kim, P.D. Magill, K.N. Oikonomou, R.K. Sinha, and S.L. Woodward. "Design and Optimization of Fiber-Optic Small-Cell Backhaul Based on an Existing Fiber-to-the-Node Residential Access Network."

9 Abbreviations and Acronyms

ADC Analog-to-Digital Converter

AES Advanced Encryption Standard

AFE Analog Front-end

AM Amplitude Modulated

ASE Amplified Spontaneous Emission

ASIC Application-specific Integrated Circuit

ATDMA Advanced Time Division Multiple Access

AVC Advanced Video Coding

BOSA Bi-directional Optical Sub-assembly

CAGR Compounded Annual Growth Rate

CBR Constant Bit-rate

CCAP Converged Cable Access Platform

C-DOCSIS China DOCSIS

CM Cable Modem

CMC Coax Media Converter

CNR Carrier-to-Noise Ratio

CPE Customer Premises Equipment

DAC Digital-to-Analog Converter


DOCSIS Data over Cable Service Interface Specifications

DSL Digital Subscriber Line

DTA Digital Terminal Adaptor

DVB-C Digital Video Broadcasting - Cable

DVR Digital Video Recorder

E/O Electrical-to-Optical

EOC Ethernet over Coax

EPON Ethernet Passive Optical Network

FEC Forward Error Correction

FPGA Field-programmable Gate Array

FTTdp Fiber-to-the-Distribution Point

FTTH Fiber-to-the-Home

FTTLA Fiber-to-the-Last-Amplifier

FTTN Fiber-to-the-Node

FTTx Fiber-to-Anything

GEM Gigabit-capable Passive Optical Network Encapsulation Method

GPON Gigabit-capable Passive Optical Networks

HD High Definition

HDTV High Definition Television

HEVC High Efficiency Video Coding

HFC Hybrid Fiber Coaxial

HHP Household Passed

HSI High-speed Internet

I-CCAP Integrated CCAP

IP Internet Protocol

IPTV Internet Protocol Television

LDPC Low-Density Parity-Check Code

MAC Media Access Control

M-CMTS Modular Cable Modem Termination System

MDU Multiple Dwelling Unit

MxU Multiple-Any-media-Unit

MPEG Motion Picture Experts Group

MSO Multiple-system Operator

NG-EPON Next-generation EPON

NG-PON2 40-Gigabit-capable Passive Optical Network

NRZ Non-return-to-zero

NTSC National Television Standards Committee

ODN Optical Distribution Network

O/E Optical to Electrical

OFDM Orthogonal Frequency-Division Multiplexing


OMCI Optical network termination Management and Control Interface

ONU Optical Network Unit

OOB Out-of-Band

OPEX Operating Expenses

OSP Outside Plant

P2P Point-to-Point

PAL Phase Alternate Line

PHY Physical Layer

PLOAM Physical Layer Operations, Administration and Maintenance

PNM Proactive Network Maintenance

PON Passive Optical Network

QAM Quadrature Amplitude Modulation

RF Radio Frequency

ROI Return on Investment

RS Reed-Solomon

SARFT State Administration of Radio Film and Television

SC-QAM Single Carrier QAM

SDTV Standard Definition Television

SDV Switched Digital Video

SFP Small Form-factor Pluggable

SFU Single Family Unit

SNR Signal-to-Noise Ratio

STB Set-top Box

T-CONT Transmission Container

TDMA Time Division Multiple Access

TF Tunable Filter

TL Tunable Laser

TWDM-PON Time and Wavelength Division Multiplex Passive Optical Networks

UEQAM Universal Edge QAM

UHDTV Ultra High Definition Television

UPS Uninterruptible Power Supply

VOD Video on Demand

VoIP Voice over Internet Protocol

XFP 10 Gigabit Small Form Factor Pluggable

XG-PON1 10-Gigabit-capable Passive Optical Network

---End---

