
Solution brief

Enhancing DPI with Virtualized Packet Brokering

Deep Packet Inspection (DPI) is a powerful tool when married with the right data. Ideally, DPI analyzers would see every packet, for each flow of interest, at all key locations, as it happens. The “right data” has these attributes:

Right Data Attribute / Current Requirement

Complete
Sampled or incomplete data inhibits the accuracy of DPI applied to loss-sensitive or transactional applications (VoIP or stock trading, for example), where precisely identifying missing packets is critical to measuring user experience, application reliability, or network efficiency.
Current requirement: DPI appliances installed at the tap point of interest to avoid loss.

Focused
DPI is more efficient and scalable when data is reduced to the subset required for analysis. Sifting through full line-rate data for a specific flow of interest is computationally expensive.
Current requirement: DPI analyzers with dedicated hardware for optimal data-plane processing, or packet brokers co-located with each appliance.

Localized
DPI appliances are often installed at centralized sites where the broadest access to traffic is available. However, analyzing services at a single location offers incomplete insight in many applications, as quality of service or experience (QoS/QoE) often degrades with distance. The ability to analyze traffic at session endpoints, and at key locations in between, means the difference between simply detecting a problem and rapidly isolating and resolving it.
Current requirement: Analyzers installed at each point of interest to reconstruct sessions and correlate results from multiple locations.

Time-Stamped
Analyzing real-time, transactional, and streaming media applications requires precise packet timing information to resolve inter-packet delay variation (jitter) and latency.
Current requirement: Packet brokers, or hardware-based time-stamping directly in the DPI analyzer.

Solution Brief • Enhancing DPI with Virtualized Packet Brokering • 2Q 2016

Installing DPI appliances and taps or packet brokers at all key locations is often impractical, cost-prohibitive, and operationally complex—despite the high value of the insight afforded.

Making large-scale DPI cost-efficient is paradoxical: although DPI can benefit greatly from the elastic compute found in data centers, virtualizing only makes it more difficult to access the ‘right data’.

Distributed Packet Capture & Virtual Packet Brokering

Instead of co-locating analyzers at capture points to provide integral access to traffic, an innovative, elegant solution inverts this principle to bring the ‘right data’ to the analyzers, wherever they may be located.

Distributed packet capture means operators and enterprise IT staff can gain ubiquitous access to data anywhere in the network, with the same scalability and performance levels traditionally seen in core network devices.

When distributed capture is married with virtualized packet brokering, tailored flows can be seamlessly delivered from any point of interest to any number of physical or virtual analyzers without impacting data integrity.

Recent solutions overcome shortcomings that plague traditional attempts to capture and ‘backhaul’ traffic using technologies such as RSPAN/ERSPAN, or remote PCAP. These methods simply involve mirroring a set of VLANs through a loss-sensitive UDP tunnel; there is no precise timestamping, slicing, or backhaul bandwidth optimization—features necessary for modern DPI applications.

Effective Distributed Capture & Brokering

Some of the most desirable attributes of distributed packet capture technologies are:

• Granular traffic filtering and classification

• Smart consolidation of packets matching multiple filter criteria, to ensure that multiple overlapping flows are ‘uplinked’ together

• Highly efficient transfer bandwidth utilization by bundling groups of captured packets together when transmitting: a technique that can significantly reduce overhead

• Support for jumbo frames, regardless of the network maximum transmission unit (MTU)

• Support for stateful capture, allowing the capture process to terminate automatically when target packets have been identified and stored/forwarded

• Adaptive capturing that allows additional header information to be captured under specific circumstances

• Secure and lossless forwarding of captured data to DPI location(s)

• Remotely programmable capture profiles through open interfaces, for on-the-fly ‘data on demand’ in response to requests from analyzers, SDN controllers, intrusion detection systems, and other upstream clients.

• Low-cost, small form-factor hardware capture modules that can be installed in or out of line, supporting zero-touch deployment

Packet capturing has long been the mainstay of core network engineering support and optimization teams, used as a troubleshooting and analytical tool across a wide range of market sectors. Such capability is generally restricted to core network applications for several important reasons:

Significant compute power is necessary to run packet capture engines, which increases the footprint of the equipment and drives up power consumption in already heavily utilized data centers. This means centrally located data centers are often the only locations that can host packet capture capability.

Scalable packet capture capability is expensive to integrate into typical router/switch products, and entry-level packet brokers and intelligent taps are not feature-rich due to limited processing capacity. Larger platforms found in core network locations are generally more likely to support all the features required to deliver the ‘right data’ to DPI.

Captured data needs to be readily accessible to centralized analyzers. Reliable transfer from the capture engine to DPI tools means they are co-located where capture is available. Core network locations are ‘closer’ to where the DPI tools are likely to be located.

Traditional Packet Capture


The Capture Process

In order for distributed packet capture solutions to achieve the desirable attributes, each area of functionality must be carefully designed and optimized to make best use of the available compute resources in small form-factor modules distributed around the network.

Several key functions are necessary, including:

Filtering and classification – to search and locate the traffic that needs to be captured

Packet slicing – to identify the part of the traffic flow that is of interest (headers only, all of the payload, partial payload, other fields)

Bundling captures prior to uplink transmission – for efficient packing of the uplink packets and better utilization of scarce transfer bandwidth

Reconstruction – all of the captures from all of the capture points in the network need to be centrally ‘brokered’ before being forwarded to their ultimate destination: the DPI tools

Distribution – transmission of the reconstructed captures to one or more destinations for onward analysis and storage
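The packet-slicing step above can be illustrated with a short sketch. Everything here is hypothetical: the header sizes, the `slice_packet` name, and the truncate-from-the-start policy are assumptions for illustration, not Accedian's implementation.

```python
# Illustrative packet slicing: keep only the first slice_len bytes of
# each captured frame (enough for Ethernet + IPv4 + TCP headers).
# Sizes and names are assumptions, not a vendor API.

ETH_HDR = 14    # Ethernet II header, bytes
IPV4_HDR = 20   # minimum IPv4 header
TCP_HDR = 20    # minimum TCP header

def slice_packet(frame: bytes, slice_len: int = ETH_HDR + IPV4_HDR + TCP_HDR) -> bytes:
    """Return the header-only slice of a captured frame.

    Frames already shorter than slice_len pass through unchanged,
    so no padding is ever added on the uplink.
    """
    return frame[:slice_len]

# A 576-byte frame is reduced to a 54-byte header slice.
headers = slice_packet(bytes(576))
```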

Filtering and Classification

In some applications of distributed packet capture, it is desirable to capture all traffic that hits the interface. In such cases, filtering and classification is a simple process. Example: finance network monitoring, where real trading data needs to be captured, losslessly transferred to a central repository, and stored for a defined period of time for regulatory and compliance purposes.

In almost all other applications, however, some level of complex filtering and classification is needed. The fastest and most efficient way of implementing this filtering is in hardware. The type of filtering and classification will vary from one network type to another, but generally speaking the requirements fit into Layer 2 and Layer 3 categories.

Even more important, however, is the ability to create complex filtering and classification mechanisms to achieve the desired granularity. Such mechanisms need to support multiple criteria in a single filter definition, or even combinations of criteria from the Layer 2 and Layer 3 categories. Furthermore, the ability to filter and classify on specific protocols will greatly simplify the whole process and make provisioning of the overall solution far simpler and less user-intensive. Feature-rich classification methods ensure the ‘right data’ is captured, regardless of its specific L2-L7 attributes.
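A minimal sketch of such a multi-criteria filter definition, combining L2 and L3 criteria in a single filter where each unset field acts as a wildcard. The `FlowFilter` class and its field names are illustrative assumptions, not an actual product API:

```python
# Hypothetical multi-criteria flow filter: L2 (VLAN) and L3 (IP, protocol)
# conditions combined in one definition. A None field is a wildcard;
# every set field must match for the packet to be captured.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowFilter:
    vlan_id: Optional[int] = None    # Layer 2 criterion
    src_ip: Optional[str] = None     # Layer 3 criteria
    dst_ip: Optional[str] = None
    protocol: Optional[str] = None   # e.g. "udp", "tcp"

    def matches(self, pkt: dict) -> bool:
        for field_name in ("vlan_id", "src_ip", "dst_ip", "protocol"):
            want = getattr(self, field_name)
            if want is not None and pkt.get(field_name) != want:
                return False
        return True

# Capture only UDP traffic on VLAN 100 (e.g. a VoIP flow of interest).
voip = FlowFilter(vlan_id=100, protocol="udp")
```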

Bundling and Transmission

The ability to pack multiple captures into a single uplink packet will achieve significant improvements in transmission efficiency, and ensure that any required overhead added to the uplink packets is minimized.

Extensive flow capture filter options increase the granularity of capture, and reduce unnecessary transmission overhead


Significant efficiency gains are possible; for example:

• Multiple capture slices sent as one packet

• Average internet traffic packet size is 576 bytes

• When desired info resides in headers, truncating packets (e.g. to 50 bytes) allows several captured slices to be bundled and transmitted in one packet

• Truncating and bundling means efficient use of bandwidth

Bandwidth savings possible when truncating and bundling a 576 byte packet stream (100pps)

Using this method, uplink bandwidth required can be reduced by as much as 90 percent. This minimizes, but doesn’t completely eliminate, the trade-off to be made between uplink efficiency and latency introduced in the forwarding process.
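The roughly 90 percent figure can be sanity-checked with back-of-envelope arithmetic, assuming 576-byte average packets at 100 pps, truncated to 50-byte slices and bundled into 1,500-byte uplink frames; the 54-byte per-frame overhead is an illustrative assumption:

```python
# Back-of-envelope check of the uplink savings from truncation + bundling.
# All constants are assumptions taken from the surrounding text or chosen
# for illustration (the 54-byte overhead in particular).

AVG_PKT = 576     # bytes, average internet packet size
SLICE = 50        # bytes kept per captured packet
MTU = 1500        # uplink frame size
OVERHEAD = 54     # per-uplink-frame header overhead (illustrative)
PPS = 100         # captured packets per second

slices_per_frame = (MTU - OVERHEAD) // SLICE          # 28 slices fit per frame
raw_bw = AVG_PKT * PPS                                # backhauling full packets
frames_per_sec = PPS / slices_per_frame
bundled_bw = frames_per_sec * (OVERHEAD + slices_per_frame * SLICE)
saving = 1 - bundled_bw / raw_bw

print(f"uplink bandwidth saving: {saving:.0%}")      # on the order of 90%
```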

Bundling captures and ensuring they are sent without loss requires some degree of buffering at the point of capture. This poses unique engineering challenges when integrating sophisticated packet capture into small form-factor devices (generally SFPs). Rapid advances in highly integrated chipsets mean that memory capacities of multiple gigabytes are available, but power restrictions mean that a delicate balance needs to be struck between capacity and energy consumption.

Smart SFP Module as Distributed Packet Capture Point

As more memory means more active gates and more power consumed, there is an optimal point where the most efficient uplink packing of captured packets is achieved. Engineering expertise based on 12 years of FPGA design has allowed Accedian developers to deliver a uniquely capable product in the demanding SFP footprint.

Assured, Lossless Delivery

One of the most important considerations—and often a prohibiting factor—when evaluating distributed packet capture solutions is how to guarantee that captures made at a mobile cell site or business CPE location actually reach the DPI tools where analysis occurs. Generally, DPI tools are centrally located, hosted in data centers, with remote client connectivity to workstations wherever the workforce happens to be. Without this guaranteed packet delivery, the whole solution falls apart and becomes unusable.

In most networking environments, it is mandatory to use a TCP-based mechanism for transferring remotely captured packets over the network and back to the centrally located DPI tools. This implies implementing a TCP stack on the small form-factor capture devices, which has cost and power consumption implications. However, with certain technologies, FPGAs for example, this becomes an achievable goal. Vendors looking to deliver distributed packet capture solutions will need to leverage such technology.

The use of TCP also allows distributed capture points to be located far from the controller and DPI analysis functions.
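A minimal sketch of lossless capture transfer over TCP: a 4-byte length prefix delimits each bundle on the byte stream, while TCP itself supplies retransmission and in-order delivery. The framing and function names are assumptions for illustration, not the actual wire protocol:

```python
# Illustrative lossless transfer of capture bundles over TCP.
# Framing (4-byte big-endian length prefix) is a common convention,
# assumed here for the sketch; it is not a documented vendor protocol.
import socket
import struct

def send_bundle(sock: socket.socket, bundle: bytes) -> None:
    # sendall() blocks until the whole bundle is handed to TCP, which
    # guarantees ordered, lossless delivery to the controller.
    sock.sendall(struct.pack("!I", len(bundle)) + bundle)

def recv_bundle(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP is a byte stream, so recv() may return partial data; loop
    # until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-bundle")
        buf += chunk
    return buf
```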

Efficient, Versatile Packet Brokering

Once captured packets arrive, a viable distributed packet capture solution needs a centralized controller function, responsible for:

• Directing remote capture points in terms of provisioning and management functions

• Brokering all the captures received from a network of capture devices

• Forwarding captures to one or multiple destinations

In this context, the destinations could be permanent storage arrays, DPI engines, or analytical platforms used for network planning, optimization and troubleshooting.


Because the centralized controller is a compute intensive function, it makes sense to leverage virtualization solutions for hosting this function. This also allows ‘direct connection’ to virtualized analyzers located in the same data center / NFV infrastructure (NFVI).

A centralized controller is responsible for:

• Filter and capture control in real time

• Configuring truncation sizing and granularity

• Implementing a searchable capture database

• Configuring thresholds for automated notification when capture conditions are met or when captures arrive at the controller

• Scheduling timed captures

• Exporting captured data, potentially from thousands or tens of thousands of remote capture points, to third-party systems via PCAP, ERSPAN, or other protocols (e.g. those based on streaming)

• Exposing an open northbound API for integration with existing SDN controllers and orchestrators
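As a sketch of what an on-the-fly ‘data on demand’ request through such an API might look like, the following builds a JSON capture profile an upstream client could submit to the controller. All field names are hypothetical; the brief does not specify the API schema:

```python
# Hypothetical 'data on demand' capture profile. The JSON field names
# (filter, slice_len, duration_s, destinations) are illustrative
# assumptions, not a documented controller API.
import json

def make_capture_profile(filter_expr: str, slice_len: int,
                         duration_s: int, destinations: list) -> str:
    """Serialize a capture request an analyzer or SDN controller could
    push to the centralized controller over an open interface."""
    return json.dumps({
        "filter": filter_expr,          # e.g. a BPF-style expression
        "slice_len": slice_len,         # bytes kept per captured packet
        "duration_s": duration_s,       # timed capture window
        "destinations": destinations,   # DPI engines / storage targets
    })

# Request a 5-minute, header-only capture of VoIP traffic on VLAN 100,
# delivered to a (hypothetical) DPI engine.
profile = make_capture_profile("vlan 100 and udp", 50, 300, ["dpi-1.example.net"])
```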

Conclusion

When paired with the right data, DPI is a powerful tool. That data should be complete, focused, localized, and time-stamped. However, traditional packet capture is generally restricted to core network applications because of the compute power needed, the expense of scaling capture, and the necessity of reliable data transfer from capture engine to DPI tools.

There’s a better way.

Distributed packet capture and brokering—which brings the right data to analyzers wherever they are located—provides access to data anywhere in the network, with the same scalability and performance levels seen in traditional core network devices.

This is possible by combining distributed capture, timestamping, filtering, and classification; adaptive bundling and transmission; efficient, versatile packet brokering; and assured, lossless delivery (using FPGA technology and a TCP-based mechanism) from source to DPI analyzers. These actions are performed by distributed, low-cost, small form-factor hardware modules and an NFV-based, centralized controller.

There are many advantages to distributed, virtualized packet capture and brokering, including these related directly to DPI performance and utility:

• All types of QoE analyzers are dramatically more efficient when flows are received pre-filtered and sliced, instead of at full link rate.

• Surgically selecting specific traffic at the edges allows unprecedented opportunities for using DPI equipment to correlate data for valuable insight into end-to-end quality degradation, and minimizes uplink bandwidth needed to deliver the captures.

• Open API capture control allows DPI solutions to fully automate selection, capture, and distribution of flows in real-time.

In short, distributed packet capture and brokering is now not only possible and advantageous, but practical and affordable.

© 2016 Accedian Networks Inc. All rights reserved. Accedian Networks, the Accedian Networks logo, SkyLIGHT, AntMODULE, Vision EMS, Vision Suite, VisionMETRIX, Vision Collect, Vision Flow, Vision SP, V-NID, Plug & Go, R-FLO, Network State+, Traffic-Meter, FlowMETER & airMODULE are trademarks or registered trademarks of Accedian Networks Inc. All other company and product names may be trademarks of their respective companies. Accedian Networks may, from time to time, make changes to the products or specifications contained herein without notice. Some certifications may be pending final approval, please contact Accedian Networks for current certifications.

About Accedian’s SkyLIGHT FlowBROKER Solution

The first virtualized, distributed packet broker solution, FlowBROKER offers cost-efficient, microsecond precise, lossless flow capture and relay to centralized and virtualized analyzers, analytics platforms, security and policy enforcement systems. Now operators can optimize QoE & network efficiency with the complete picture—and complete confidence.

FlowBROKER, Accedian’s distributed packet capture solution, comprises Nano and ant Performance Modules (smart SFP and GbE units) coupled with the VCX Controller virtual machine. This solution brings capture capability to all parts of customer networks, in a small form-factor, low-cost, highly scalable package.

