Spirent Journal of Benchmark Test Methodologies | © Spirent Communications 2011
Introduction
Today’s Devices Under Test (DUTs) are complex, multi-protocol network elements with an emphasis
on Quality of Service (QoS) and Quality of Experience (QoE) that scale to terabits of bandwidth across the
switch fabric. The Spirent Catalogue of Test Methodologies is an element of the Spirent test
ecosystem that helps answer the most critical Performance, Availability, Security and Scale (PASS)
test cases. The Spirent test ecosystem and the Spirent Catalogue of Test Methodologies are intended to help
development engineers and product verification engineers rapidly build and execute complex test
scenarios.
How to use this Journal
This journal provides test engineers with a battery of test cases for the Spirent Test Ecosystem. The journal is
divided into sections by technology. Each test case has a Test Case ID that is unique across the
ecosystem.
Tester Requirements
To determine the true capabilities and limitations of a DUT, the tests in this journal require a test tool that
can measure router performance under realistic Internet conditions. It must be able to simultaneously
generate wire-speed traffic, emulate the requisite protocols, and make real-time comparative
performance measurements. High port density for cost-effective performance and stress testing is
important to fully load switching fabrics and determine device and network scalability limits.
In addition to these features, some tests require more advanced capabilities, such as:
- Integrated traffic, routing, and MPLS protocols (e.g., BGP, OSPF, IS-IS, RSVP-TE, LDP/CR-LDP) to
advertise route topologies for large simulated networks with LSP tunnels while simultaneously
sending traffic over those tunnels. Further, the tester should emulate the interrelationships
between protocols through a topology.
- Emulation of service protocols (e.g., IGMPv3, PIM-SM, MP-iBGP).
- Correct single-pass testing with measurement of 41+ metrics per pass of a packet.
- Tunneling protocol emulation (L2TP) and protocol stacking.
- True stateful layer 2-7 traffic.
- Ability to over-subscribe traffic dynamically and observe the effects.
Finally, the tester should provide conformance test suites for ensuring protocol conformance and
interoperability, and automated applications for rapidly executing the test cases in this journal.
Further Resources
Additional resources are available on our website at http://www.spirent.com
Table of Contents
Testing Industry Standards
BNCH_001 RFC 3918 Multicast Join/Leave Latency Test
BNCH_002 CloudMark Virtualized Performance Benchmark
BNCH_003 EtherSAM (ITU-T Y.1564) EBS and CBS Burst Test with TurboQoS
BNCH_004 EtherSAM (ITU-T Y.1564) Service Configuration Ramp Test with TurboQoS
BNCH_005 EtherSAM (ITU-T Y.1564) Service Performance Test with TurboQoS
Appendix A – Telecommunications Definitions
Testing Industry Standards
To provide a standard way of measuring and evaluating a Device Under Test (DUT), the industry has created a
series of benchmarks. These benchmarks establish a uniform procedure for the generation of traffic to
and from the DUT with a normalized procedure of analysis and reporting. The goal of the benchmark is to
generate metrics in a reproducible and unbiased fashion for comparability.
Benchmarking standards can come from any organization for potential industry-wide adoption. Two key
organizations, the IEEE and the IETF, help set standards in the industry by coordinating recommendations.
Through the RFC (Request for Comments) and WG (Working Group) system, sub-groups like the BMWG
(Benchmarking Methodology Working Group) help coordinate and refine recommendations. Key
recommendations such as RFC 2544, RFC 2889, and RFC 3918 have come from this process.
In order to execute benchmarks, test and measurement equipment like Spirent TestCenter, Spirent
Avalanche, and Spirent Landslide help the user systematically generate, analyze, and report based on the
industry-derived standards. Spirent TestCenter is especially architected to rapidly test and measure
industry standards thanks to its microkernel-based, high-port-density design, which allows up to 32
automated processes to execute in parallel per chassis.
This document describes the methodologies associated with testing key industry standards. These
generic frameworks help users understand and execute key standards and reduce the time spent
testing benchmarks.
BNCH_001 RFC 3918 Multicast Join/Leave Latency Test
Abstract
This test case determines the time it takes the DUT to start/stop forwarding multicast frames
once it receives a successful IGMP group membership report/leave group message.
Description
This test is part of the RFC 3918 tests, in this case to determine the time it takes a DUT to
start/stop forwarding multicast frames from the time a successful IGMP group membership
report/leave group message is sent.
In this test, Spirent TestCenter ports act as both multicast clients and sources with the DUT in
between. The DUT shouldn’t forward multicast traffic until it receives a request from the client. It
should process multicast Join/Leave requests from the client with the minimum possible latency.
This is critical for presenting the user with a good quality of experience, especially with video
applications.
Target Users
All NEMs and service providers
Target Device Under Test (DUT)
Core Equipment
Reference
RFC 3918
Relevance
This test case showcases the DUT’s capability to handle Join/Leave requests from the multicast
clients.
Version
1.0
Test Category
Testing Benchmarking Standards
PASS
[x] Performance [x] Availability [ ] Security [ ] Scale
Required Tester Capabilities
The tester must support:
- RFC 3918
- Multicast protocol support (IGMP/MLD)
- Results reporting in a template format with key metrics
Topology Diagram
Multicast Clients <-> DUT <-> Multicast Server
Test Procedure
1. Launch the RFC 3918 Wizard in the Spirent TestCenter GUI.
2. Select the Multicast Join/Leave Test.
3. Select the ports that will be used in the test.
a. Configure multicast hosts and the group automatically with the wizard or manually.
4. Select the endpoint mapping and multicast source and client ports.
5. Configure the following parameters:
a. Multicast Client Version (IGMP Version).
b. Join Group Delay.
c. Leave Group Delay.
d. Multicast Message Tx Rate.
e. Multicast Group Base IP Address, the address step, and the increment applied at each
step.
f. Layer 4 Header – None, TCP, UDP.
i. If TCP/UDP is selected, give a port number range.
g. TOS or Class of Service.
h. VLAN P-bit, if any.
i. TTL.
j. Latency Type.
i. Selection from LILO, FIFO, FILO or LIFO.
k. The Multicast Group Distribution Mode – Even or Weighted, between client ports, if
more than one.
6. Configure the test options.
a. Number of Trials to be run.
b. Duration in seconds.
c. Test Start Delay – for the DUT to be able to ramp up.
d. Frame Size.
i. Option to have Fixed, Random, Step, Custom or iMIX.
e. ARP/Learning parameters.
f. Results Collection Delay – after the test has finished, the analyzer will wait this amount
of time before it calculates the results.
7. Finish the wizard. It automatically creates a sequence of steps in the Command Sequencer.
8. Run the Command Sequencer.
9. The Results Reporter tool launches when the first iteration is complete and displays the
results in a pre-defined template with all the necessary information.
10. Create a PDF, HTML or Excel report from the template if desired.
11. Run the test for IPv6/MLD if desired.
12. End of test case.
Control Variables & Relevance

Variable | Relevance | Default Value
Number of IGMP Groups | The more IGMP groups, the more processing is required on the DUT | 1
IGMP Version | Normally version 2 or 3 | 2
MLD Version | Normally version 2 or 3 | 2
Number of Multicast Hosts | The number of multicast clients per port; usually multicast clients and groups are in a 1:1 ratio | 1
IGMP/MLD Group Addresses | The starting group address; should be a Class D address. Some Class D addresses are private and should not be used. | 225.0.0.1
Multicast Group Address Step | Which octet to increment when there is more than one group | /8
TOS | Class of Service for the multicast packets | 0
IP TTL | Time to live for the multicast packets | 10
Multicast Group Distribution Mode | How the groups are distributed amongst the ports, if more than one | Even
Latency Type | The way latency is calculated | FIFO
Key Measured Metrics

Metric | Relevance | Metric Unit
Join Latency | Time from sending a join message for a group to the first multicast packet arriving on the client port | Microseconds
Leave Latency | Time for the DUT to stop forwarding multicast packets for a group after a leave has been sent | Microseconds
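As a sketch of how these two metrics are derived from transmit/receive timestamps (the function names are illustrative, not the Spirent TestCenter API):

```python
def join_latency_us(join_sent_s: float, first_rx_s: float) -> float:
    """Join latency: time from sending the IGMP membership report to
    receiving the first multicast frame for that group, in microseconds."""
    return (first_rx_s - join_sent_s) * 1e6

def leave_latency_us(leave_sent_s: float, last_rx_s: float) -> float:
    """Leave latency: time from sending the leave message to the last
    multicast frame observed for that group, in microseconds."""
    return (last_rx_s - leave_sent_s) * 1e6
```

As the Analysis section notes, leave latency is typically slightly higher than join latency; a large gap between the two points at the DUT's processing engine or buffers.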
Desired Result
The DUT should be able to process the multicast Join/Leaves as fast as possible.
Analysis
The DUT should be able to process the multicast Join/Leaves as fast as possible for multiple Joins
and Leaves happening together.
Typically, a Join latency will be lower than a Leave latency but there shouldn’t be too much of a
gap.
A high Join/Leave latency indicates a significant issue with the DUT processing engine as well as
the buffers.
BNCH_002 CloudMark Virtualized Performance Benchmark
Abstract
This benchmark tests the elements and systems of a cloud and ranks performance. Cloud
infrastructure is a complex, overlapping, and interrelated set of protocols and systems that work
together as a whole. The results provide an independent understanding of the
performance of the cloud service independent of implementation.
Description
The cloud is composed of switch fabrics, virtual switch fabrics, physical network elements, virtual
networking elements, physical servers, virtual servers and client endpoints. A benchmark is
required to measure the performance of the cloud infrastructure in a comparable, independent
fashion. This benchmark test measures the performance of the cloud infrastructure.
Cloud Infrastructure performance can be measured using the following test cases.
Cloud Infrastructure Reliability (CiR) is the rate at which the cloud infrastructure fails to provide an
environment in which cloud protocols can operate without infrastructure-related errors. The
generalized goal of 99.999% uptime means CiR <= 0.001%.
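The relationship between uptime and the CiR target can be sketched numerically; this is an illustration of the arithmetic, not part of the benchmark definition:

```python
def cir_ratio(failed_sessions: int, total_sessions: int) -> float:
    """CiR: fraction of SimUser sessions that hit an infrastructure-related error."""
    return failed_sessions / total_sessions

# Five-nines availability corresponds to a CiR target of 0.001%,
# i.e. roughly one failed session per 100,000.
CIR_TARGET = 0.001 / 100  # 0.001% expressed as a fraction
```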
Cloud Infrastructure Quality of Experience (CiQoE) is the ratio of the Quality of Experience of the
protocols flowing over the cloud infrastructure to that of a client connected to a server through a back-
to-back cable. Expressing QoE as a set normalized against the back-to-back case factors out the local
VM operating system, server implementation, and so on.
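The normalization can be sketched as follows, assuming the ratio is oriented so that 1.0 means the cloud adds no impact over the back-to-back baseline (the orientation shown for a latency-style metric is an assumption consistent with the "CiQoE ratio greater than 0.999" target later in this test case):

```python
def ciqoe_ratio(baseline_page_load_s: float, measured_page_load_s: float) -> float:
    """CiQoE for a latency-style metric: back-to-back baseline over the value
    measured through the cloud, so 1.0 is the ideal and smaller is worse."""
    return baseline_page_load_s / measured_page_load_s
```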
The Cloud Infrastructure Quality of Experience variance (CiQoE Variance) is the variance over
time of the user experience. As a general rule, a variance measurement should be from a sample
of 12 hours or longer. Further, this measurement determines the reliability of the cloud to act in
a deterministic fashion.
Cloud Infrastructure Goodput (CiGoodput) measures the ability of the cloud to deliver a
minimum bitrate across TCP.
The following traffic distribution, by cloud class and by IP QoS DiffServ marking, is used as background traffic:

Enterprise Campus Apps
- DiffServ EF (Real-Time): VoIP 15% (SIP+RTP+G.729A), Unicast Web Conference 2-Way (MPEG2-TS, VBR), SIP 5%
- DiffServ 0x31 (Critical): Routing 3% (OSPF Routing Updates 2%, BGP Updates 1%), Database 17% (Oracle SQLNet Updates), Corporate Web 2%, IMAP4 5%
- DiffServ 0x20 (General): Multicast Video 13% (480i, MPEG-2, IGMPv2, 5 Multicast Channels), Telnet/SSH 2%, CIFS 10% (1:1:3 Small/Medium/Large ratio)
- DiffServ 0x00 (Best Effort): Internet Web 5% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100K JPEG), BitTorrent 11%

Higher Education
- DiffServ EF (Real-Time): Network Administration 2% (SSH)
- DiffServ 0x31 (Critical): SQL 7% (SQLNet SQL Table Updates), HTTPS University Admin 3% (64 Byte index.html, 5x 1K JPEG Images), Video Conference 5% (MPEG2-TS, VBR, 480i), VoIP 5% (G.729A CODEC)
- DiffServ 0x20 (General): FTP 7% (Large Files), HTTPS Student Services, HTTP 3%, POP3/SMTP 9%, CIFS 8% (1:1:3 Small/Medium/Large Objects, bidirectional), Multicast Video 5% (480i)
- DiffServ 0x00 (Best Effort): IM 12% (AIM), BitTorrent 24%, HTTP 3% (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100K JPEG), HTTPS 1% (64 Byte index.html, 5x 1K JPEG Images), Mail 5%, FTP 1% (Large Files), Telnet/SSH 3%

Service Providers
- DiffServ EF (Real-Time): Telnet/SSH 1%
- DiffServ 0x31 (Critical): BGP Route Updates 1%
- DiffServ 0x20 (General): N/A
- DiffServ 0x00 (Best Effort): 50% P2P (BitTorrent, 5% Peer to Tracker, 95% Peer-to-Peer), 30% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100K JPEG), 5% DNS, Video (MPEG2-TS 5%), SIP (G.729A 3%), Gaming (WoW 5%), 2% RAW TCP

10G Max Bandwidth: No Payload, RAW TCP
1G Max Bandwidth: No Payload, RAW TCP

Small/Medium Business Apps: POP3/SMTP 15% (5:2:1 Small/Medium/Large ratio), HTTPS 20% (64 Byte index.html, 5x 1K JPEG Images), CIFS 30% (1:1:3 Small/Medium/Large Objects, bidirectional), BitTorrent 10%, Internet Web 25% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100K JPEG)

WAN Accelerator
- DiffServ EF (Real-Time): Network Control 5% (Windows Domain Controller Updates), Network Logins
- DiffServ 0x31 (Critical): CIFS 40% (1:1:3 Small/Medium/Large Files), Exchange 35% (5:2:1 Small/Medium/Large ratio)
- DiffServ 0x20 (General): HTTPS 10% (64 Byte index.html, 5x 1K JPEG Images)
- DiffServ 0x00 (Best Effort): BitTorrent 10%

Internet AppMix 2011: 50% P2P (BitTorrent, 5% Peer to Tracker, 95% Peer-to-Peer), 30% HTTP (1024 Byte index.html, 30x 500 Byte JPEG, 5x 1K JPEG, 1x 100K JPEG), 5% DNS, Video (MPEG2-TS 5%), SIP (G.729A 3%), Gaming (WoW 5%), 2% RAW TCP

Table 1 – Background traffic by Class
Target Users
Network equipment providers, service providers, product verification
Target Device Under Test (DUT)
The DUT is a mix of physical and virtual elements, forming the cloud infrastructure.
Relevance
Measuring cloud infrastructure allows the user to measure the performance of the cloud in real
world environments in an independent fashion.
Version
1.0
Test Category
BNCH
PASS
[x] Performance [x] Availability [x] Security [x] Scale
Required Tester Capabilities
The tester must have the ability to measure non-simple, real world application layer traffic,
including live baselining off of a real internet server. The tester must support L2-7 and physical
and virtual ports.
Topology Diagram
General Preparation
1. Tester ports.
a. Tester ports shall be connected to three locations in the topology.
i. Client cloud access emulation.
1. Physical test ports emulate the client cloud.
2. The client cloud is differentiated by user profiles. In general, user
profiles are evenly spread across all physical test interfaces.
3. For the purpose of baselining the virtual switch fabric, the virtual
switch ports are divided into client and server endpoints.
4. Physical user profiles should be tested with the following attributes:
a. Business Class User.
i. 0.1% packet Loss
ii. 30 mSec Delay
iii. /16 Subnet Mask
b. Fixed Internet User Class.
i. 3% packet Loss
ii. 150 mSec Delay
iii. /16 Subnet mask
c. Smartphone / Tablet Class.
i. 1% Packet Loss
ii. 200 mSec Delay
iii. /16 Subnet mask
ii. Servers.
1. Servers are emulated on both the physical and the virtual switch
fabrics.
2. All servers serve all protocols.
b. Background Traffic.
i. Between the client and the server, the background traffic patterns in Table 1 are
continuously emulated, at 50% of line rate of the link speed.
c. Service flows.
i. Service flows represent the measured traffic in test bed.
ii. Start table 1 traffic.
iii. User actions:
1. The client opens a page from the server with 8-level pipelining
request.
2. The server responds with a 1 KB html page, one image that is 50 KB, 20
images that are 1 KB, 5 Java applets of 50 KB each, and an embedded
300 kbps adaptive streaming video.
3. The client spends 20 seconds on the page.
4. The client POSTs form data to the server with 5 fields.
5. The server responds with a continuing 1 KB html page and five 5 KB
JPEG objects.
6. The user closes the HTTP session. The QoE metric is measured as
either a fail or a pass (All objects loaded without error) and a page
load time (Time it takes to transfer all objects on a page from the
server to the client).
7. The user opens the FTP server and logs in with a unique username and
password.
8. The user transfers 1 KB, 1 MB and 10 MB files.
9. The user closes the FTP session. The QoE metric for this step is a pass (no
errors) or fail (some errors) and the total time to transfer the 3 files
with FTP.
10. The user streams a 4 Mbps video for 60 seconds. The QoE metric for
this step is a pass (no errors) or fail (some errors) and the MOS-AV
score.
11. The user places a VoIP call using G.729 for 30 seconds. The QoE
metric for this step is a pass (no errors) or fail (some errors) and the
MOS-LQ score.
12. User session ends.
Test Procedure – Cloud Infrastructure Reliability Test (CiR)
1. Set up and bring online all physical and virtual test interfaces.
2. Begin servers on physical and virtual endpoints.
3. Set the loading profile to a 45-degree ramp up to a value in excess of the DUT's capacity.
4. Start Table 1 Traffic on client and server.
5. Client traffic should be evenly split between physical server and virtual server endpoints.
6. Start test traffic on the client.
7. Stop once either a failed transaction or a failed connection occurs.
8. Set a new loading profile to ramp up to the measured failure point minus one
connection or transaction. In the case of multiple failures, use the SimUser count for the
lowest value minus 1. In the sustaining portion of the load profile, run the duration in
excess of 12 hours.
9. Start traffic.
10. Stop traffic if and when a failed connection or transaction is detected.
11. Calculate the CiR as the ratio of one failure divided by the cumulative number of
SimUsers.
12. In the case of no failure, keep doubling the time until a failure is reached or until the CiR
ratio becomes less than 0.001%.
13. The CiR is reported as “X % reliability at Y concurrent user sessions.”
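Steps 11 and 12 reduce to a short loop. A sketch follows, in which `run_trial` is a hypothetical callback (not a Spirent API) that runs one trial of the given duration and returns the cumulative SimUser session count and whether any failure occurred:

```python
def measure_cir(run_trial, duration_hours: float = 12.0,
                target_ratio: float = 0.001 / 100):
    """Double the trial duration until a failure occurs or the implied
    CiR ratio (one failure over cumulative SimUsers) drops below target."""
    while True:
        sessions, failed = run_trial(duration_hours)
        ratio = 1 / sessions  # step 11: one failure over cumulative SimUsers
        if failed or ratio < target_ratio:
            return ratio, duration_hours
        duration_hours *= 2   # step 12: no failure, so double the time
```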
Control Variables & Relevance
Variable | Relevance | Default Value
Number of open users | Peak level of reliability | Measured
Cumulative successful SimUser sessions before failure | Used to build the reliability ratio | Measured
Cumulative users at failure minus one | Upper level of measurement | Measured
Test Duration | Time of Steady State Phase | 60 minutes
Key Measured Metrics
Metric | Relevance | Metric Unit
CiR | Ratio of the first failure to the number of successfully opened SimUser sessions | percent
Test Procedure – Cloud Infrastructure Quality of Experience (CiQoE)
1. Calculate the virtual server QoE baseline.
a. Turn off all background traffic.
b. Using the service flow described above, set up a single virtual endpoint as a client
and a single virtual endpoint as a server. The pathway should traverse the
virtual switch fabric.
c. The virtual client should run one user session to the virtual server.
d. Measure the QoE metrics as described above. These become the baseline for the
virtual servers.
e. Reset all virtual endpoints as virtual servers.
2. Calculate the physical server QoE baseline.
a. Turn off all background traffic.
b. Using the service flow described above, set up a single physical endpoint as a
client and a single physical endpoint as a server. The pathway should traverse
the virtual switch fabric.
c. The physical client should run one user session to the physical server.
d. Measure the QoE metrics as described above. These become the baseline for the
physical servers.
3. Use the loading profile from the CiR test (error minus one). Set the load profile to send 50%
of the traffic to the virtual servers and 50% to the physical servers. Ramp up to the peak
value and sustain for the desired duration of the test (minimum of 60 minutes).
4. Start traffic.
5. If any QoE metric fails, determine the number of concurrent SimUsers at failure and
adjust the ramp down to that level. Go to Step 4 until no QoE failures are detected.
6. Measure the maximum values measured (longest page load time, slowest FTP transfer,
smallest MOS-AV and MOS-LQ scores).
7. Divide the measured QoE number by its baseline equivalent. This is the percent impact of
the infrastructure on traffic.
8. CiQoE is this calculated percent impact, by protocol, at the peak concurrent SimUser count.
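Step 7 can be sketched per protocol; the dictionary keys and values below are illustrative, not prescribed by the benchmark:

```python
def ciqoe_impact(measured: dict, baseline: dict) -> dict:
    """Percent impact of the infrastructure per protocol: the worst-case
    measured metric divided by its back-to-back baseline, times 100."""
    return {proto: 100.0 * measured[proto] / baseline[proto] for proto in measured}
```

For a latency-style metric such as page load time, a value of 150% would mean the cloud made pages load half again slower than the back-to-back baseline.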
Control Variables & Relevance
Variable | Relevance | Default Value
Peak Concurrent SimUsers with No QoE Errors | Upper limit of users | Measured
Baseline QoE Values | Perfect-case value | Measured
Measured Infrastructure QoE Values | Cloud-impacted QoE metrics | Measured
Test Duration | Time of Steady State Phase | 60 minutes
Key Measured Metrics
Metric | Relevance | Metric Unit
CiQoE | Quality of Experience impact | percent
Test Procedure – Cloud Infrastructure Quality of Experience Variance (CiQoEv)
1. Calculate the virtual server QoE baseline.
a. Turn off all background traffic.
b. Using the service flow described above, set up a single virtual endpoint as a
client and a single virtual endpoint as a server. The pathway should traverse
the virtual switch fabric.
c. The virtual client should run one user session to the virtual server.
d. Measure the QoE metrics as described above. These become the baseline for
the virtual servers.
e. Reset all virtual endpoints as virtual servers.
2. Calculate the physical server QoE baseline.
a. Turn off all background traffic.
b. Using the service flow described above, set up a single physical endpoint as a
client and a single physical endpoint as a server. The pathway should traverse
the virtual switch fabric.
c. The physical client should run one user session to the physical server.
d. Measure the QoE metrics as described above. These become the baseline for
the physical servers.
3. Use the loading profile calculated in the CiR test (error minus one). Set the load profile
to send 50% of the traffic to the virtual servers and 50% to the physical servers. Ramp
up to the peak value and sustain for the desired duration of the test (minimum of 60
minutes).
4. Start Traffic.
5. If any QoE metric fails, determine the number of concurrent SimUsers at failure and
adjust the ramp down to that level. Go to Step 4 until no QoE failures are detected.
6. Measure the maximum values measured (longest page load time, slowest FTP transfer,
smallest MOS-AV and MOS-LQ scores) every 4 seconds for the duration of the test.
7. By protocol, divide the measured QoE by the Baseline for each 4 second interval. This is
the instantaneous cloud infrastructure impact percent.
8. With the set of ratios calculated, determine the standard deviation and variance.
9. The CiQoEv value is presented as a variance together with its measured standard deviation.
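Steps 7 and 8 can be sketched with the standard library; the sample values are illustrative:

```python
import statistics

def ciqoev(measured_samples: list, baseline: float):
    """Per-interval impact ratios (each 4-second sample over its baseline),
    then their population variance and standard deviation."""
    ratios = [m / baseline for m in measured_samples]
    return statistics.pvariance(ratios), statistics.pstdev(ratios)
```

A perfectly deterministic cloud yields a variance of zero; any spread in the per-interval ratios shows up directly in these two numbers.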
Control Variables & Relevance
Variable | Relevance | Default Value
Peak Concurrent SimUsers with No QoE Errors | Upper limit of users | Measured
Baseline QoE Values | Perfect-case value | Measured
Measured Infrastructure QoE Values | Cloud-impacted QoE metrics | Measured
Test Duration | Time of Steady State Phase | 60 minutes
Key Measured Metrics
Metric | Relevance
CiQoEv Variance | Variance of change
CiQoEv Std. Dev. | Standard deviation of change
Test Procedure – Cloud Infrastructure Goodput (CiGoodput)
1. Start traffic in Table 1.
2. Set up client and server traffic in a full mesh. All clients should talk evenly to
both virtual and physical servers.
3. Use the load profile calculated in the CiR test case.
4. Generate traffic for the desired duration.
5. Measure the minimum goodput achieved after the ramping phase by protocol.
6. Report the minimum, average, and maximum goodput by protocol.
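Step 6's aggregation can be sketched as follows; the protocol names and sample values are illustrative:

```python
def goodput_summary(samples_bps: dict) -> dict:
    """Minimum, average and maximum goodput per protocol, in bits/s,
    computed over the post-ramp samples."""
    return {proto: (min(v), sum(v) / len(v), max(v))
            for proto, v in samples_bps.items()}
```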
Control Variables & Relevance
Variable | Relevance | Default Value
Peak Concurrent SimUsers with No QoE Errors | Upper limit of users | Measured
Test Duration | Time of Steady State Phase | 60 minutes
Key Measured Metrics
Metric | Relevance | Metric Unit
Minimum / Average / Maximum Goodput | Achievable goodput by protocol | bandwidth
Desired Result
The CiR ratio should be less than 0.001%.
The CiQoE ratio should be greater than 0.999.
The CiQoEv Deviation and variance should be less than 0.001.
The CiGoodput should be as high as possible.
Results are only comparable if the number of concurrent SimUsers is the same.
Analysis
When presenting values, present the calculated ratio along with the peak number of concurrent users.
In addition, document the number and scope of failed transactions, connections, and users.
Document all calculations performed as appendix information to the report.
BNCH_003 EtherSAM (ITU-T Y.1564) EBS and CBS Burst Test
with TurboQoS
Abstract
EtherSAM (ITU-T Y.1564) is an industry standard for independently measuring Ethernet services
QoS across a Device Under Test (DUT). This test measures the impact of EBS and CBS EtherSAM
bursting on “Pathway QoS worthiness” for the Ethernet services across a Device Under Test
(DUT).
Description
EtherSAM (ITU-T Y.1564) is a successor to RFC 2544, focusing on Ethernet service level
agreements (SLAs). Key terms for EtherSAM include:
Committed burst size (CBS): Number of allocated bytes available for bursts of ingress service
frames transmitted at temporary rates above the CIR while meeting the SLA guarantees
provided at the CIR.
Committed information rate (CIR): Average rate in bits/s of service frames up to which the
network delivers service frames and meets the performance objectives defined by the class of
service attribute.
Excess burst size (EBS): Number of allocated bytes available for bursts of ingress service frames
sent at temporary rates above the CIR + EIR while remaining EIR conformant.
Excess information rate (EIR): Average rate in bits/s of service frames up to which the network
may deliver service frames but without any performance objectives.
ITU-T Y.156sam defines test streams with service attributes linked to the Metro Ethernet Forum
(MEF) 10.2 definitions.
Services are traffic streams with specific attributes identified by different classifiers, such as
802.1q VLAN, 802.1ad and class of service (CoS) profiles. These services are defined at the UNI
level, with different frame and bandwidth profiles, such as the service’s maximum transmission
unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR).
ITU Y.156sam defines three key test rates based on the MEF service attributes for Ethernet
virtual circuit (EVC) and user-to-network interface (UNI) bandwidth profiles.
CIR defines the maximum transmission rate for a service where the service is guaranteed certain
performance objectives. These objectives are typically defined and enforced via SLAs.
EIR defines the maximum transmission rate above the committed information rate, which is
considered excess traffic. This excess traffic is forwarded as capacity allows and is not subject to
meeting guaranteed performance objectives (best effort forwarding).
Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the
DUT or network under test (NUT) does not forward more traffic than specified by the CIR or EIR
of the service.
These rates can be associated with color markings:
Green traffic is equivalent to CIR.
Yellow traffic is equivalent to EIR.
Red traffic represents discarded traffic (overshoot of CIR or overshoot of EIR).
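The mapping from offered rate to color can be sketched as follows. This simplified check classifies by average rate only; real MEF 10.2 policing also consumes CBS/EBS token buckets, which the burst test below exercises:

```python
def traffic_color(offered_bps: float, cir_bps: float, eir_bps: float) -> str:
    """Green up to CIR, yellow between CIR and CIR + EIR, red above."""
    if offered_bps <= cir_bps:
        return "green"
    if offered_bps <= cir_bps + eir_bps:
        return "yellow"
    return "red"
```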
ITU-T Y.156sam is built around two key subtests, the service configuration test and the service
performance test, which are performed sequentially.
Forwarding devices, such as switches, routers, bridges and network interface units, are the basis
of any network as they interconnect segments. If a service is not correctly configured on any one
of these devices within the end-to-end path, network performance can be greatly affected,
leading to potential service outages and network-wide issues such as congestion and link failures.
The service configuration test measures the ability of the DUT or NUT to forward traffic properly in three
different states:
In the CIR phase, where performance metrics for the service are measured and compared to
the SLA performance objectives
In the EIR phase, where performance is not guaranteed and the service's transfer rate is
measured to ensure that CIR is the minimum bandwidth
In the discard phase, where the service is generated at the overshoot rate and the expected
forwarded rate is not greater than the committed information rate or excess rate (when
configured)
As network devices come under load, they must prioritize one traffic flow over another to meet
the KPIs set for each traffic class. With only one traffic class, no prioritization is performed by the
network devices since there is only one set of KPIs. As the number of traffic flows increases,
prioritization is necessary and performance failures may occur.
The service performance test measures the ability of the DUT or NUT to forward multiple
services while maintaining SLA conformance for each service. Services are generated at the CIR,
where performance is guaranteed, and pass/fail assessment is performed on the KPI values for
each service according to its SLA.
Service performance assessment must also be maintained for a medium- to long-term period, as
performance degradation will likely occur as the network is under stress for longer periods of
time. The service performance test is designed to soak the network under a full committed load
for all services, and to measure performance over medium and long test time.
Y.156sam focuses on the following KPIs for service quality:
Bandwidth: Bit rate of available or consumed data communication resources expressed in
bits/second or multiples (kilobits/s, megabits/s).
Frame transfer delay (FTD): Also known as latency, this is a measurement of the delay between
the transmission and the reception of a frame. Typically this is a round-trip measurement,
meaning that the calculation measures both the near-end to far-end and far-end to near-end
direction simultaneously.
Packet jitter: A measurement of the variation in the time delay between packet deliveries. As
packets travel through a network to their destination, they are often queued and sent in bursts
to the next hop. Prioritization may occur at random moments, also resulting in packets being
sent at random rates. Packets are consequently received at irregular intervals. The direct result
of jitter is stress on the receiving buffers of the end nodes, where buffers can be overused or
underused when there are large swings of jitter.
Frame loss: Typically expressed as a ratio, this is the number of packets lost over the total
number of packets sent. Frame loss can result from a number of issues, such as network
congestion or errors during transmissions.
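The loss and jitter definitions above come down to simple arithmetic over per-frame records. A minimal sketch with our own (hypothetical) helper names; jitter is computed here as the spread of inter-arrival gaps, one of several common definitions:

```python
def frame_loss_ratio(frames_sent, frames_received):
    # Loss expressed as lost frames over total frames sent.
    return (frames_sent - frames_received) / frames_sent

def interarrival_jitter(arrival_times):
    # Variation in delay between packet deliveries, here measured as the
    # spread between the largest and smallest inter-arrival gap.
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return max(gaps) - min(gaps)
```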
Target Users
NEMS, service providers
Target Device Under Test (DUT)
Any router or switch using Ethernet services
Reference
ITU-T Y.1564, MEF 10.2
Relevance
This test measures Ethernet SLAs in the network and the worthiness of a path to maintain full
SLA traffic.
Version
1.0.
Test Category
BNCH
PASS
[ ] Performance [x] Availability [ ] Security [ ] Scale
Required Tester Capabilities
To properly measure EtherSAM KPIs, the tester must have the following attributes:
FPGA. The precision and scalability of FPGAs permit deep scale while measuring precise timing.
Ultra-Low Latency. The tester must be able to test and measure down to 2.5 ns.
Support for 10 Mbps to 100 Gbps Ethernet, including virtual ports.
Ability to select RFC 4814 MAC addressing to ensure longest CAM lookups.
True Jitter. Variable TX rates that simulate services such as video, while isolating the DUT's
impact on EVCs.
True sequencing. Measure and differentiate loss from late, duplicate, reordered, and
out-of-order packets.
Simultaneous measurement of KPIs. Measure everything in one packet pass.
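"One packet pass" means every KPI counter is updated as each packet arrives, rather than re-scanning a capture once per metric. A simplified sketch of that idea — the record layout and the integer-microsecond latency unit are our assumptions, not Spirent's:

```python
def single_pass_kpis(packets):
    """Update every KPI counter in a single pass over received packets.

    Each packet record is (sequence_number, latency_us); both the record
    layout and the integer-microsecond latency unit are assumptions.
    """
    stats = {"rx": 0, "max_latency": 0, "sum_latency": 0, "out_of_order": 0}
    last_seq = -1
    for seq, latency in packets:
        stats["rx"] += 1                                   # frame count
        stats["max_latency"] = max(stats["max_latency"], latency)
        stats["sum_latency"] += latency
        if seq < last_seq:                                 # arrived behind a later frame
            stats["out_of_order"] += 1
        last_seq = max(last_seq, seq)
    stats["avg_latency"] = stats["sum_latency"] / stats["rx"] if stats["rx"] else 0
    return stats
```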
Topology Diagram
Test Procedure
1. Select and reserve the left and right side ports, including virtual machine endpoints.
2. Set all ports to the Latency+Jitter Mode, clear all counters, and turn off latency
compensation on all ports.
3. Define multiple EVC Ethernet Services. For each service specify:
a. Service Name and color.
b. Service bandwidth units (Frames/Second, % Load, Kbps, or Mbps).
c. Whether the service is VLAN Tagged.
i. If VLAN Tagged, specify the VLAN ID and PRI level. (Default PRI should be zero.)
ii. Indicate whether to use RFC-4814 MAC addressing.
d. Indicate whether the service is IPv4 Only, IPv6 Only, or Both.
i. Specify starting IP address, subnet mask, and default gateway.
ii. Specify the IPv4 DiffServ Codepoint and/or IPv6 Traffic Class.
e. Specify the Layer 4 header:
i. UDP
ii. Stateless TCP
iii. Stateful TCP
iv. Specify source and destination ports
f. Indicate the service to be tested
i. Loopback (Generated and Terminated on the same port)
ii. Bi-Directional (Generated and Terminated on different Ports)
g. Indicate, via a checkbox, which Service KPIs to measure; those marked as required are
always measured:
i. CIR (Required)
ii. EIR (Required)
iii. Overshoot (Required)
iv. Maximum True Packet Loss Count (required)
v. Latency and Jitter units
1. uSec (Default)
2. nSec
vi. RFC-4689 Absolute Average Jitter
vii. RFC-4689 Max Jitter (Required)
viii. Maximum Latency (Required)
ix. Average latency
x. Maximum Out of Order Packet Count
xi. Maximum Delayed Packet Count
xii. Maximum Late Packet Count
xiii. Define an AND or an OR (Default is AND) to combine KPIs
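Step 3.g.xiii combines the selected KPI checks into a single verdict. A minimal sketch of that AND/OR combination — a helper of our own, not tool syntax:

```python
def combine_kpis(results, operator="AND"):
    """Combine per-KPI booleans (True = within its SLA limit) into one verdict."""
    if operator == "AND":
        return all(results.values())   # every selected KPI must pass
    if operator == "OR":
        return any(results.values())   # any passing KPI is enough
    raise ValueError("operator must be 'AND' or 'OR'")
```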
4. Map EVC Services to Test Ports.
5. Define Test case Constants.
a. Specify frame sizes to test across.
b. Specify the Host Count Iteration.
c. Specify a per-Iteration unit (Frame or time) and value.
d. Specify a starting bandwidth.
e. Specify the number of steps to reach the CIR.
6. Run the CBS Burst Test.
a. For each EVC Service across all test ports (Disable the non-current EVC service
StreamBlocks).
i. For Each Frame Size.
1. For Each Host Count per EVC Service.
a. Clear all Counters and Dynamic Views.
b. Pause for “Pre-Burst” duration.
c. Set Rate of StreamBlock to BURST, set the transmission mode
to burst for burst duration.
d. Start Traffic.
e. PAUSE for POST-BURST Duration.
f. Set Rate to CIR.
g. BURST Traffic.
h. Measure KPIs and Record.
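The nested iteration in step 6 can be expressed as a loop skeleton; the callback stands in for the tester's actual automation API, which is not specified here:

```python
def run_cbs_burst_test(services, frame_sizes, host_counts, measure):
    """Iterate services, frame sizes, and host counts in the order of step 6.

    `measure(service, frame_size, hosts)` is a placeholder for the clear/
    pause/burst/measure sequence and returns the KPI readings for that
    iteration.
    """
    results = []
    for service in services:            # one EVC service active at a time
        for frame_size in frame_sizes:
            for hosts in host_counts:
                kpis = measure(service, frame_size, hosts)
                results.append((service, frame_size, hosts, kpis))
    return results
```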
7. Run the EBS (CIR=0) Burst Test.
a. For each EVC Service across all test ports (Disable the non-current EVC service
StreamBlocks).
i. For Each Frame Size.
1. For Each Host Count per EVC Service.
a. Clear all Counters and Dynamic Views.
b. Pause for “Pre-Burst” duration.
c. Set Rate of StreamBlock to BURST, set the transmission mode
to burst for burst duration.
d. Start Traffic.
e. PAUSE for POST-BURST Duration.
f. Set Rate to EIR.
g. BURST Traffic.
h. Measure KPIs and Record.
8. Run the EBS (CIR>0) Burst Test.
a. For each EVC Service across all test ports (Disable the non-current EVC service
StreamBlocks).
i. For Each Frame Size.
1. For Each Host Count per EVC Service.
a. Clear all Counters and Dynamic Views.
b. Pause for “Pre-Burst” duration.
c. Set Rate of StreamBlock to BURST, set the transmission mode
to burst for burst duration.
d. Start Traffic.
e. Set Rate to CIR.
f. Start Traffic.
g. Set Rate to EIR.
h. Start Traffic.
i. Measure KPIs and Record.
Control Variables & Relevance
Variable Relevance Default Value
Left and Right Side Ports
Pick the physical or virtual port to be used
NA
Per EVC Service Name and Color
Name Up to 64 characters and a color picker to pick the Service Color
Name “EVC Service 1” and color is “Blue”
Per EVC Bandwidth Ask the Rate Unit, CIR, and EIR, and RFC-4814 MAC addressing
Rate Unit in Mbps; default CIR of 1 Mbps, with an EIR of 100 kbps.
Per EVC Service VLAN Tag
If the service is VLAN tagged Yes, Starting at VLAN 100, PRI 0, RFC-4814 (Yes)
Per EVC Service IP information
IP Type, Starting IP Address, Mask, and Gateway
IPv4, 10.x.y.2/24 = Starting IPv4 address, Default gateway is 10.x.y.1
Per EVC Service QoS Codepoint
QoS Level BE (0x00)
Per EVC Service Layer 4 Header
Layer 4 Header Type and Source/Destination Ports
UDP (Source Port 53, Dest Port 1024)
Per EVC Traffic orientation
Type of Generation and termination
Bi-Directional
Per EVC KPIs (Key Performance Indicators)
Sequence of KPIs determines PASS/Fail for the Iteration
Defaults: CIR (1 Mbps), EIR (100 kbps), Overshoot (100 kbps), Maximum Packet Loss Count (0 packets), Absolute Average Jitter (50 nSec), Maximum Average Jitter (200 nSec), Maximum Latency (1 uSec), Average Latency (700 nSec), Out of order / Delayed / Late Maximum Packet Count (0 packets), ‘AND’ between each KPI
Frame Size Iterations Ask the user what frame sizes to test across, with the option of IMIX
If UDP only: 64->128->256->512->768->1024->1280->1518->9022. If some TCP: 72->128->256->512->768->1024->1280->1518->9022. The user should be able to pick an IMIX pattern or define one, but if TCP is used, 72 should be validated.
Hosts per EVC Service Iteration
Iterates a list of host counts Default (List, Count=254). Validate Minimum of 1 host per service
Pre-Burst time Time to pause before the burst
Time 30 Seconds
Burst Rate and Unit Burst Rate Value Default 1.2 Mbps
Burst Duration Time To Burst 120 Seconds
Post-Burst Duration Time to Wait After Burst 30 Seconds
Key Measured Metrics
Metric Relevance Metric Unit
KPIs KPI Adherence Up to CIR Pass if the combined KPI expression is TRUE
Desired Result
The DUT passes each EVC Ethernet service KPI up to its respective CIR, allows traffic with no
guarantee of KPI adherence between CIR and CIR+EIR, and rate limits the EVC Ethernet Service at
CIR+EIR.
Analysis
For each frame size used, a table displays the frame size and Hosts per Service on the Y-Axis,
with three columns (CBS Burst, EBS (CIR=0), and EBS (CIR>0)) showing “PASS” or “FAIL” and the
measured KPI values.
BNCH_004 EtherSAM (ITU-T Y.1564) Service Configuration Ramp
Test with TurboQoS
Abstract
EtherSAM (ITU-T Y.1564) is an industry standard for independently measuring Ethernet services
QoS across a Device Under Test (DUT). EtherSAM requires simultaneous inspection of CIR, Frame
Loss, Jitter, and Latency. EtherSAM ensures “Pathway QoS worthiness” for the Ethernet services
across a DUT.
Description
EtherSAM (ITU-T Y.1564) was designed as a successor to RFC-2544. Focusing on Ethernet service
SLAs, EtherSAM requires the measurement of many QoS KPIs (Key Performance Indicators) per
service per pass. The key objectives of EtherSAM are:
1) To serve as a network service level agreement (SLA) validation tool, ensuring that a
service meets its guaranteed performance settings in a controlled test time
2) To ensure that all services carried by the network meet their SLA objectives at their
maximum committed rate, proving that under maximum load network devices and
paths can support all the traffic as designed
ITU-T Y.1564 defines test streams with service attributes linked to the Metro Ethernet Forum
(MEF) 10.2 definitions.
Services are traffic streams with specific attributes identified by different classifiers, such as
802.1q VLAN, 802.1ad and class of service (CoS) profiles. These services are defined at the UNI
level, with different frame and bandwidth profiles, such as the service’s maximum transmission
unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR).
ITU-T Y.1564 defines three key test rates based on the MEF service attributes for Ethernet
virtual circuit (EVC) and user-to-network interface (UNI) bandwidth profiles.
CIR defines the maximum transmission rate for a service where the service is guaranteed certain
performance objectives. These objectives are typically defined and enforced via SLAs.
EIR defines the maximum transmission rate above the committed information rate, which is
considered excess traffic. This excess traffic is forwarded as capacity allows and is not subject to
meeting guaranteed performance objectives (best effort forwarding).
Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the
DUT or network under test (NUT) does not forward more traffic than specified by the CIR or EIR
of the service.
These rates can be associated with color markings:
Green traffic is equivalent to CIR
Yellow traffic is equivalent to EIR
Red traffic represents discarded traffic (overshoot – CIR or overshoot – EIR)
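The green/yellow/red scheme corresponds to a two-rate meter. This sketch classifies an offered rate against CIR and EIR; real MEF 10.2 bandwidth profiles meter per frame with CBS/EBS token buckets, so a rate-only view is a deliberate simplification:

```python
def color_for_rate(offered_rate, cir, eir):
    """Classify an offered service rate against CIR/EIR.

    Simplification: MEF 10.2 bandwidth profiles meter per frame with
    CBS/EBS token buckets; this ignores burst accounting entirely.
    """
    if offered_rate <= cir:
        return "green"     # within CIR: performance guaranteed
    if offered_rate <= cir + eir:
        return "yellow"    # excess traffic: best-effort forwarding
    return "red"           # above CIR+EIR: expected to be discarded
```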
ITU-T Y.1564 is built around two key subtests, the service configuration test and the service
performance test, which are performed sequentially.
Forwarding devices, such as switches, routers, bridges and network interface units, are the basis
of any network as they interconnect segments. If a service is not correctly configured on any one
of these devices within the end-to-end path, network performance can be greatly affected,
leading to potential service outages and network-wide issues such as congestion and link failures.
The service configuration test measures the ability of the DUT or NUT to properly forward in three
different states.
In the CIR phase, where performance metrics for the service are measured and compared to
the SLA performance objectives
In the EIR phase, where performance is not guaranteed and the service's transfer rate is
measured to ensure that the CIR is the minimum bandwidth
In the discard phase, where the service is generated at the overshoot rate and the expected
forwarded rate is not greater than the committed information rate or excess rate (when
configured)
As network devices come under load, they must prioritize one traffic flow over another to meet
the KPIs set for each traffic class. With only one traffic class, no prioritization is performed by the
network devices since there is only one set of KPIs. As the number of traffic flows increases,
prioritization is necessary and performance failures may occur.
The service performance test measures the ability of the DUT or NUT to forward multiple
services while maintaining SLA conformance for each service. Services are generated at the CIR,
where performance is guaranteed, and pass/fail assessment is performed on the KPI values for
each service according to its SLA.
Service performance assessment must also be maintained for a medium- to long-term period, as
performance degradation will likely occur as the network is under stress for longer periods of
time. The service performance test is designed to soak the network under a full committed load
for all services, and to measure performance over medium and long test times.
Y.1564 focuses on the following KPIs for service quality:
Bandwidth: Bit rate of available or consumed data communication resources expressed in
bits/second or multiples (kilobits/s, megabits/s).
Frame transfer delay (FTD): Also known as latency, this is a measurement of the delay between
the transmission and the reception of a frame. Typically this is a round-trip measurement,
meaning that the calculation measures both the near-end to far-end and far-end to near-end
directions simultaneously.
Packet jitter: A measurement of the variation in the time delay between packet deliveries. As
packets travel through a network to their destination, they are often queued and sent in bursts
to the next hop. Prioritization may occur at random moments, also resulting in packets being
sent at random rates. Packets are consequently received at irregular intervals. The direct result
of jitter is stress on the receiving buffers of the end nodes, where buffers can be overused or
underused when there are large swings of jitter.
Frame loss: Typically expressed as a ratio, this is the number of packets lost over the total
number of packets sent. Frame loss can result from a number of issues, such as network
congestion or errors during transmissions.
Target Users
NEMS, service providers
Target Device Under Test (DUT)
Any router or switch using Ethernet services
Reference
ITU-T Y.1564, MEF 10.2
Relevance
This test measures Ethernet SLAs in the network and the worthiness of a path to maintain full
SLA traffic.
Version
1.0.
Test Category
BNCH
PASS
[ ] Performance [x] Availability [ ] Security [ ] Scale
Required Tester Capabilities
To properly measure EtherSAM KPIs, the tester must have the following attributes:
FPGA. The precision and scalability of FPGAs permit deep scale while measuring precise timing.
Ultra-Low Latency. The tester must be able to test and measure down to 2.5 ns.
Support for 10 Mbps to 100 Gbps Ethernet, including virtual ports.
Ability to select RFC 4814 MAC addressing to ensure longest CAM lookups.
True Jitter. Variable TX rates that simulate services such as video, while isolating the DUT's
impact on EVCs.
True sequencing. Measure and differentiate loss from late, duplicate, reordered, and
out-of-order packets.
Simultaneous measurement of KPIs. Measure everything in one packet pass.
Topology Diagram
Test Procedure
1. Select and reserve the left and right side ports, including virtual machine endpoints.
2. Set all ports to the Latency+Jitter Mode, clear all counters, and turn off latency
compensation on all ports.
3. Define multiple EVC Ethernet Services. For each service specify:
a. Service Name and color.
b. Service bandwidth units (Frames/Second, % Load, Kbps, or Mbps).
c. Whether the service is VLAN Tagged.
i. If VLAN Tagged, specify the VLAN ID and PRI level. (Default PRI should be zero.)
ii. Indicate whether to use RFC-4814 MAC addressing.
d. Indicate whether the service is IPv4 Only, IPv6 Only, or Both.
i. Specify starting IP address, subnet mask, and default gateway.
ii. Specify the IPv4 DiffServ Codepoint and/or IPv6 Traffic Class.
e. Specify the Layer 4 header:
i. UDP
ii. Stateless TCP
iii. Stateful TCP
iv. Specify source and destination ports
f. Indicate the service to be tested
i. Loopback (Generated and Terminated on the same port)
ii. Bi-Directional (Generated and Terminated on different Ports)
g. Indicate which Service KPIs to measure:
i. CIR (Required)
ii. EIR (Required)
iii. Overshoot (Required)
iv. Maximum True Packet Loss Count (required)
v. Latency and Jitter units
1. uSec (Default)
2. nSec
vi. RFC-4689 Absolute Average Jitter
vii. RFC-4689 Max Jitter (Required)
viii. Maximum Latency (Required)
ix. Average latency
x. Maximum Out of Order Packet Count
xi. Maximum Delayed Packet Count
xii. Maximum Late Packet Count
xiii. Define an AND or an OR (Default is AND) to combine KPIs
4. Map EVC Services to Test Ports.
5. Define Test case Constants.
a. Specify frame sizes to test across.
b. Specify the Host Count Iteration.
c. Specify a per-Iteration unit (Frame or time) and value.
d. Specify a starting bandwidth.
e. Specify the number of steps to reach the CIR.
6. Run the CBS Burst Test.
a. For each EVC Service across all test ports (Disable the non-current EVC service
StreamBlocks).
i. For Each Frame Size.
1. For Each Host Count per EVC Service.
a. For Each Ramping Step from Starting bandwidth to CIR.
i. Phase 1 - Clear All Counters and Dynamic Views.
ii. Calculate the current bandwidth per port.
iii. Set the host and frame size.
iv. Set the burst to time or packets with the correct
rate.
v. Transmit Traffic.
vi. Determine failed KPI streams, record source and
destination.
vii. Record all KPI counts and verify that TX Bandwidth per flow
equals RX Bandwidth per flow.
viii. Pass = All Streams passed KPI rules, else Fail. Record
pass or fail.
ix. Phase 2 - Clear Counters and Dynamic Views.
x. Add the EIR rate to the CIR.
xi. Transmit for 1 Burst.
xii. Pass if CIR <= Rx Rate <= CIR+EIR; fail if Rx Rate < CIR.
xiii. KPIs are not guaranteed at CIR+EIR; capture the dynamic
views and counters and record them.
xiv. Phase 3 - Clear Counters and Dynamic Views.
xv. Add the Overshoot Rate to CIR+EIR.
xvi. Transmit for 1 burst.
xvii. Pass means RX Rate = CIR+EIR (rate limiting); Fail means
RX Rate > CIR+EIR.
xviii. Record Pass/Fail and Counters.
Control Variables & Relevance
Variable Relevance Default Value
Left & Right Side Ports Pick the physical or virtual port to be used
NA
Per EVC Service Name and Color
Name Up to 64 characters and a color picker to pick the Service Color
Name “EVC Service 1” and color is “Blue”
Per EVC Bandwidth Ask the Rate Unit, CIR, and EIR Rate Unit in Mbps; default CIR of 1 Mbps, with an EIR of 100 kbps.
Per EVC Service VLAN Tag
If the service is VLAN tagged, and whether to use RFC-4814 MAC addressing.
Yes, Starting at VLAN 100, PRI 0; RFC-4814 MAC addressing (Yes)
Per EVC Service IP information
IP Type, Starting IP Address, Mask, and Gateway
IPv4, 10.x.y.2/24 = Starting IPv4 address, Default gateway is 10.x.y.1
Per EVC Service QoS Codepoint
QoS Level BE (0x00)
Per EVC Service Layer 4 Header
Layer 4 Header Type and Source/Destination Ports
UDP (Source Port 53, Dest Port 1024)
Per EVC Traffic orientation
Type of Generation and termination Bi-Directional
Per EVC KPIs (Key Performance Indicators)
Sequence of KPIs determines PASS/Fail for the Iteration
Defaults: CIR (1 Mbps), EIR (100 kbps), Overshoot (100 kbps), Maximum Packet Loss Count (0 packets), Absolute Average Jitter (50 nSec), Maximum Average Jitter (200 nSec), Maximum Latency (1 uSec), Average Latency (700 nSec), Out of order / Delayed / Late Maximum Packet Count (0 packets), ‘AND’ between each KPI
Frame Size Iterations Ask the user what frame sizes to test across, with the option of IMIX
If UDP only: 64->128->256->512->768->1024->1280->1518->9022. If some TCP: 72->128->256->512->768->1024->1280->1518->9022. The user should be able to pick an IMIX pattern or define one, but if TCP is used, 72 should be validated.
Hosts per EVC Service Iteration
Iterates a list of host counts Default (List, Count=254). Validate Minimum of 1 host per service
Per Iteration Duration Ask the user for the per-iteration duration unit (Frames or time). This is the period during which the KPIs are inspected for compliance
Units (Time in Seconds) Duration (120 Seconds)
Starting Bandwidth The beginning bandwidth of the test Default 1 kbps
Number of Steps to Reach CIR
CIR / Number of Steps defines the bandwidth increase from iteration to iteration
Default is 5
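The ramp in steps 5.e and 6 raises the offered load from the starting bandwidth to the CIR in equal increments. A sketch of the step calculation; linear spacing is our assumption:

```python
def ramp_steps(start_bw, cir, num_steps):
    """Offered bandwidth per iteration, rising from start_bw to exactly CIR.

    Assumes equal (linear) increments, with the final step landing on CIR.
    """
    if num_steps < 1:
        raise ValueError("need at least one step")
    increment = (cir - start_bw) / num_steps
    return [start_bw + increment * i for i in range(1, num_steps + 1)]
```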
Key Measured Metrics
Metric Relevance Metric Unit
Phase 1 KPI Adherence Up to CIR Pass if the combined KPI expression is TRUE
Phase 2 Traffic Forwarding between CIR and CIR+EIR Pass if Traffic is forwarded between CIR and CIR+EIR
Phase 3 Rate Limited at CIR+EIR Rate limited to CIR+EIR
Desired Result
The DUT passes each EVC Ethernet service KPI up to its respective CIR, allows traffic with no
guarantee of KPI adherence between CIR and CIR+EIR, and rate limits the EVC Ethernet Service at
CIR+EIR.
Analysis
A bar chart represents service levels on the X-Axis and bandwidth on the Y-Axis. A green bar
spans from 0 up to the last passing bandwidth, capped at the CIR.
A grid presents service levels on the X-Axis and, on the Y-Axis, the KPI measurements at the
last passing bandwidth for each EVC service level.
A table generated for each tested frame size (or iMIX pattern) displays rows representing EVC
Ethernet Service Levels. Columns represent hosts per service per port pair. Individual cells
indicate “PASS” or “FAIL” and the recorded KPI information.
BNCH_005 EtherSAM (ITU-T Y.1564) Service Performance Test
with TurboQoS
Abstract
EtherSAM (ITU-T Y.1564) is an industry standard for independently measuring Ethernet services
QoS across a Device Under Test (DUT). This test case measures the ability of the DUT to handle
multiple services and KPIs simultaneously for a duration of time. EtherSAM ensures SLA
“Pathway QoS worthiness” for the Ethernet services across a DUT.
Description
EtherSAM (ITU-T Y.1564) was designed as a successor to RFC-2544. Focusing on Ethernet service
SLAs, EtherSAM requires the measurement of many QoS KPIs (Key Performance Indicators) per
service per pass. The key objectives of EtherSAM are:
1) To serve as a network service level agreement (SLA) validation tool, ensuring that a
service meets its guaranteed performance settings in a controlled test time
2) To ensure that all services carried by the network meet their SLA objectives at their
maximum committed rate, proving that under maximum load network devices and
paths can support all the traffic as designed
ITU-T Y.1564 defines test streams with service attributes linked to the Metro Ethernet Forum
(MEF) 10.2 definitions.
Services are traffic streams with specific attributes identified by different classifiers, such as
802.1q VLAN, 802.1ad and class of service (CoS) profiles. These services are defined at the UNI
level, with different frame and bandwidth profiles, such as the service’s maximum transmission
unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR).
ITU-T Y.1564 defines three key test rates based on the MEF service attributes for Ethernet
virtual circuit (EVC) and user-to-network interface (UNI) bandwidth profiles.
CIR defines the maximum transmission rate for a service where the service is guaranteed certain
performance objectives. These objectives are typically defined and enforced via SLAs.
EIR defines the maximum transmission rate above the committed information rate, which is
considered excess traffic. This excess traffic is forwarded as capacity allows and is not subject to
meeting guaranteed performance objectives (best effort forwarding).
Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the
DUT or network under test (NUT) does not forward more traffic than specified by the CIR or EIR
of the service.
These rates can be associated with color markings:
Green traffic is equivalent to CIR
Yellow traffic is equivalent to EIR
Red traffic represents discarded traffic (overshoot – CIR or overshoot – EIR)
ITU-T Y.1564 is built around two key subtests, the service configuration test and the service
performance test, which are performed sequentially.
Forwarding devices, such as switches, routers, bridges and network interface units, are the basis
of any network as they interconnect segments. If a service is not correctly configured on any one
of these devices within the end-to-end path, network performance can be greatly affected,
leading to potential service outages and network-wide issues such as congestion and link failures.
The service configuration test measures the ability of the DUT or NUT to properly forward in three
different states.
In the CIR phase, where performance metrics for the service are measured and compared to
the SLA performance objectives
In the EIR phase, where performance is not guaranteed and the service's transfer rate is
measured to ensure that the CIR is the minimum bandwidth
In the discard phase, where the service is generated at the overshoot rate and the expected
forwarded rate is not greater than the committed information rate or excess rate (when
configured)
As network devices come under load, they must prioritize one traffic flow over another to meet
the KPIs set for each traffic class. With only one traffic class, no prioritization is performed by the
network devices since there is only one set of KPIs. As the number of traffic flows increases,
prioritization is necessary and performance failures may occur.
The service performance test measures the ability of the DUT or NUT to forward multiple
services while maintaining SLA conformance for each service. Services are generated at the CIR,
where performance is guaranteed, and pass/fail assessment is performed on the KPI values for
each service according to its SLA.
Service performance assessment must also be maintained for a medium- to long-term period, as
performance degradation will likely occur as the network is under stress for longer periods of
time. The service performance test is designed to soak the network under a full committed load
for all services, and to measure performance over medium and long test times.
Y.1564 focuses on the following KPIs for service quality:
Bandwidth: Bit rate of available or consumed data communication resources expressed in
bits/second or multiples (kilobits/s, megabits/s).
Frame transfer delay (FTD): Also known as latency, this is a measurement of the delay between
the transmission and the reception of a frame. Typically this is a round-trip measurement,
meaning that the calculation measures both the near-end to far-end and far-end to near-end
directions simultaneously.
Packet jitter: A measurement of the variation in the time delay between packet deliveries. As
packets travel through a network to their destination, they are often queued and sent in bursts
to the next hop. Prioritization may occur at random moments, also resulting in packets being
sent at random rates. Packets are consequently received at irregular intervals. The direct result
of jitter is stress on the receiving buffers of the end nodes, where buffers can be overused or
underused when there are large swings of jitter.
Frame loss: Typically expressed as a ratio, this is the number of packets lost over the total
number of packets sent. Frame loss can result from a number of issues, such as network
congestion or errors during transmissions.
Target Users
NEMS, service providers
Target Device Under Test (DUT)
Any router or switch using Ethernet services
Reference
ITU-T Y.1564, MEF 10.2
Relevance
This test measures Ethernet SLAs in the network and the worthiness of a path to maintain full
SLA traffic.
Version
1.0.
Test Category
BNCH
PASS
[ ] Performance [x] Availability [ ] Security [ ] Scale
Required Tester Capabilities
To properly measure EtherSAM KPIs, the tester must have the following attributes:
FPGA. The precision and scalability of FPGAs permit deep scale while measuring precise timing.
Ultra-Low Latency. The tester must be able to test and measure down to 2.5 ns.
Support for 10 Mbps to 100 Gbps Ethernet, including virtual ports.
Ability to select RFC 4814 MAC addressing to ensure longest CAM lookups.
True Jitter. Variable TX rates that simulate services such as video, while isolating the DUT's
impact on EVCs.
True sequencing. Measure and differentiate loss from late, duplicate, reordered, and
out-of-order packets.
Simultaneous measurement of KPIs. Measure everything in one packet pass.
Topology Diagram
Test Procedure
1. Select and reserve the left and right side ports, including virtual machine endpoints.
2. Set all ports to the Latency+Jitter Mode, clear all counters, and turn off latency
compensation on all ports.
3. Define multiple EVC Ethernet Services. For each service specify:
a. Service Name and color.
b. Service bandwidth units (Frames/Second, % Load, Kbps, or Mbps).
c. Whether the service is VLAN Tagged.
i. If VLAN Tagged, specify the VLAN ID and PRI level. (Default PRI should be zero.)
ii. Indicate whether to use RFC-4814 MAC addressing.
d. Indicate whether the service is IPv4 Only, IPv6 Only, or Both.
i. Specify starting IP address, subnet mask, and default gateway.
ii. Specify the IPv4 DiffServ Codepoint and/or IPv6 Traffic Class.
e. Specify the Layer 4 header:
i. UDP
ii. Stateless TCP
iii. Stateful TCP
iv. Specify source and destination ports
f. Indicate the service to be tested
i. Loopback (Generated and Terminated on the same port)
ii. Bi-Directional (Generated and Terminated on different Ports)
g. Indicate which Service KPIs to measure:
i. CIR (Required)
ii. EIR (Required)
iii. Overshoot (Required)
iv. Maximum True Packet Loss Count (required)
v. Latency and Jitter units
1. uSec (Default)
2. nSec
vi. RFC-4689 Absolute Average Jitter
vii. RFC-4689 Max Jitter (Required)
viii. Maximum Latency (Required)
ix. Average latency
x. Maximum Out of Order Packet Count
xi. Maximum Delayed Packet Count
xii. Maximum Late Packet Count
xiii. Define an AND or an OR (Default is AND) to combine KPIs
4. Map EVC Services to Test Ports.
5. Define Test case Constants.
a. Specify frame sizes to test across.
b. Specify the Host Count Iteration.
c. Specify a per-Iteration unit (Frame or time) and value.
6. Run the Service performance test.
a. For Each Frame Size.
i. For Each Host count.
1. Clear Counters and Dynamic Views.
2. Set up all mapped Services and set their rates to their respective CIRs.
3. Run Traffic.
4. Record Each EVC Ethernet service results, noting which services failed.
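Unlike the configuration test, step 6 here runs every service concurrently at its CIR. A loop skeleton with a placeholder callback; the names are ours, not the tester's API:

```python
def run_service_performance_test(services, frame_sizes, host_counts, run_soak):
    """Iterate frame sizes and host counts per step 6, all services at CIR.

    services maps service name -> CIR; `run_soak` stands in for the soak
    run and returns per-service KPI pass/fail verdicts.
    """
    results = {}
    for frame_size in frame_sizes:
        for hosts in host_counts:
            rates = dict(services)        # every service offered at its CIR
            verdicts = run_soak(rates, frame_size, hosts)
            results[(frame_size, hosts)] = verdicts
    return results
```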
Control Variables & Relevance
Variable Relevance Default Value
Left & Right Side Ports Pick the physical or virtual port to be used
NA
Per EVC Service Name and Color
Name Up to 64 characters and a color picker to pick the Service Color
Name “EVC Service 1” and color is “Blue”
Per EVC Bandwidth Ask the Rate Unit, CIR, and EIR Rate Unit in Mbps; default CIR of 1 Mbps, with an EIR of 100 kbps.
Per EVC Service VLAN Tag
If the service is VLAN tagged and if the test should use RFC-4814 MAC addressing.
Yes, Starting at VLAN 100, PRI 0, RFC-4814 MAC addressing (Yes)
Per EVC Service IP information
IP Type, Starting IP Address, Mask, and Gateway
Ipv4, 10.x.y.2 /24 = Starting Ipv4 address, Default gateway is 10.x.y.1
Per EVC Service QoS Codepoint
QoS Level BE (0x00h)
Per EVC Service Layer 4 Header
Layer 4 Header Type and Source/Destination Ports
UDP (Source Port 53, Dest Port 1024)
Per EVC Traffic orientation
Type of Generation and termination Bi-Directional
Spirent Journal of Benchmark Test Methodologies | © Spirent Communications 2011
41
Variable Relevance Default Value
Per EVC KPI’s (Key Performance Indicators)
Sequence of KPIs determines PASS/Fail for the Iteration
Defaults: CIR (1 Mbps) EIR (100 Kbps) Overshot (100 kbps) Maximum Packet Loss Count (o packets) Absolute Average Jitter (50 nSec) Maximum Average Jitter (200 nSec) Maximum Latency (1 uSec) Average Latency (700 nSec) Out of order / Delayed / Late Maximum Packet Count (0 Packets) ‘AND’ Between each operator
Frame Size Iterations Ask the user what frame sizes to test across, with the option of IMIX
If UDP only, 64->128->256->512->768->1024->1280->1518->9022, If Some TCP, then 72>128->256->512->768->1024->1280->1518->9022, User should be able to pick an IMIX Pattern or define it, but if TCP is use, 72 should be validated.
Hosts per EVC Service Iteration
Iterates a list of host counts Default (List, Count=254). Validate Minimum of 1 host per service
Total Test Duration Time to Run Each Iteration Default (24 hours)
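The per-service defaults in the table above can be collected into a single configuration structure. The sketch below is purely illustrative; the key names are assumptions for the example, not Spirent configuration keys, and the starting address is left as the 10.x.y.2 placeholder from the table.

```python
# Hypothetical representation of the default control variables from the
# table above. Key names are illustrative, not Spirent configuration keys.
EVC_SERVICE_DEFAULTS = {
    "name": "EVC Service 1",
    "color": "Blue",
    "cir_mbps": 1.0,            # Committed Information Rate
    "eir_kbps": 100,            # Excess Information Rate
    "vlan": {"tagged": True, "start_id": 100, "priority": 0},
    "rfc4814_mac": True,        # RFC-4814 pseudorandom MAC addressing
    "ip": {"version": 4, "start": "10.x.y.2", "prefix": 24,
           "gateway": "10.x.y.1"},   # x.y vary per service
    "qos_codepoint": 0x00,      # Best Effort
    "layer4": {"protocol": "UDP", "src_port": 53, "dst_port": 1024},
    "orientation": "bidirectional",
    "hosts_per_service": 254,
    "duration_hours": 24,
}
```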
Key Measured Metrics

| Metric | Relevance | Metric Unit |
| Service Performance Test | KPI Adherence at CIR | Pass or Fail |
Desired Result
The DUT passes each EVC Ethernet service KPI at its respective CIR for the duration of the test.
Analysis
For each frame size and host count, a line chart plots time on the X-axis, with a bar per EVC service indicating the periods during which the combined KPI expression failed, and a table displays the recorded KPI data.
Appendix A – Telecommunications Definitions
APPLICATION LOGIC. The computational aspects of an application, including a list of instructions that tells a
software application how to operate.
APPLICATION SERVICE PROVIDER (ASP). An ASP deploys hosts and manages access to a packaged application by
multiple parties from a centrally managed facility. The applications are delivered over networks on a
subscription basis. This delivery model speeds implementation, minimizes the expenses and risks incurred
across the application life cycle, and overcomes the chronic shortage of qualified technical personnel
available in-house.
APPLICATION MAINTENANCE OUTSOURCING PROVIDER. Manages a proprietary or packaged application from
either the customer's or the provider's site.
ASP INFRASTRUCTURE PROVIDER (AIP). A hosting provider that offers a full set of infrastructure services for
hosting online applications.
ATM. Asynchronous Transfer Mode. An information transfer standard for routing high-speed, high-
bandwidth traffic such as real-time voice and video, as well as general data bits.
AVAILABILITY. The portion of time that a system can be used for productive work, expressed as a
percentage.
BACKBONE. A centralized high-speed network that interconnects smaller, independent networks.
BANDWIDTH. The number of bits of information that can move through a communications medium in a
given amount of time; the capacity of a telecommunications circuit/network to carry voice, data, and
video information. Typically measured in Kbps and Mbps. Bandwidth from public networks is typically
available to business and residential end-users in increments from 56 Kbps to 45 Mbps.
BIT ERROR RATE. The ratio of corrupted bits to the total number of bits transmitted between two
computers over a given length of time.
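As a worked illustration of the definition above (the function and figures are examples, not from the source document), the bit error rate is simply errored bits divided by total bits transmitted:

```python
# Illustrative bit-error-rate calculation: the ratio of corrupted bits
# to total bits transmitted over the measurement interval.
def bit_error_rate(errored_bits, total_bits):
    return errored_bits / total_bits

# Example: 3 errored bits during 10 seconds on a 1.5 Mbps (T-1 class) link,
# i.e. 15,000,000 bits transmitted, gives a BER of 2e-7.
ber = bit_error_rate(3, 1_500_000 * 10)
```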
BURST INFORMATION RATE (BIR). The rate of information in bits per second that the customer may need over
and above the CIR. A burst is typically a short duration transmission that can relieve momentary
congestion in the LAN or provide additional throughput for interactive data applications.
BUSINESS ASP. Provides prepackaged application services in volume to the general business market,
typically targeting small to medium size enterprises.
BUSINESS-CRITICAL APPLICATION. The vital software needed to run a business, whether custom-written or
commercially packaged, such as accounting/finance, ERP, manufacturing, human resources and sales
databases.
BUSINESS SERVICE PROVIDER. Provides online services aided by brick-and-mortar resources, such as payroll
processing and employee benefits administration, printing, distribution or maintenance services. The
category includes business process outsourcing (BPO) companies.
COMMERCE NETWORK PROVIDER. Commerce networks were traditionally proprietary value-added networks
(VANs) used for electronic data interchange (EDI) between companies. Today the category includes the
new generation of electronic purchasing and trading networks.
COMPETITIVE ACCESS PROVIDER (CAP). A telecommunications company that provides an alternative to a LEC
for local transport and special access telecommunications services.
CAPACITY. The ability for a network to provide sufficient transmitting capabilities among its available
transmission media, and respond to customer demand for communications transport, especially at peak
usage times.
CLIENT/DEVICE. Hardware that retrieves information from a server.
CLUSTERING. A group of independent systems working together as a single system. Clustering technology
allows groups of servers to access a single disk array containing applications and data.
COMPUTING UTILITY PROVIDER (CUP). A provider that delivers computing resources, such as storage, database
or systems management, on a pay-as-you-go basis.
CSU/DSU. Channel Service Unit/Data Service Unit. A device used to terminate a telephone company
connection and prepare data for a router interface.
DATA MART. A subset of a data warehouse, intended for use by a single department or function.
DATA WAREHOUSE. A database containing copious amounts of information, organized to aid decision-
making in an organization. Data warehouses receive batch updates and are configured for fast online
queries to produce succinct summaries of data.
DEDICATED LINE. A point-to-point, hardwired connection between two service locations.
DEMARCATION LINE. The point at which the local operating company's responsibility for the local loop ends.
Beyond the demarcation point (also known as the network interface), the customer is responsible for
installing and maintaining all equipment and wiring.
DISCARD ELIGIBILITY (DE) BIT. Relevant in situations of high congestion, it indicates that the frame should be
discarded in preference to frames without the DE bit set. The DE bit may be set by the network or by the
user; and once set cannot be reset by the network.
DS-1 OR T-1. A data communication circuit capable of transmitting data at 1.5 Mbps. Currently in
widespread use by medium and large businesses for video, voice, and data applications.
DS-3 OR T-3. A data communications circuit capable of transmitting data at 45 Mbps. The equivalent data
capacity of 28 T-1s. Currently used only by businesses/institutions and carriers for high-end applications.
ELECTRONIC DATA INTERCHANGE (EDI). The electronic communication of business transactions (orders,
confirmations, invoices etc.) of organizations with differing platforms. Third parties provide EDI services
that enable the connection of organizations with incompatible equipment.
ENTERPRISE ASP. An ASP that delivers a select range of high-end business applications, supported by a
significant degree of custom configuration and service.
ENTERPRISE RELATIONSHIP MANAGEMENT (ERM). Solutions that enable the enterprise to share comprehensive,
up-to-date customer, product, competitor and market information to achieve long-term customer
satisfaction, increased revenues, and higher profitability.
ENTERPRISE RESOURCE PLANNING (ERP). An information system or process integrating all manufacturing and
related applications for an entire enterprise. ERP systems permit organizations to manage resources
across the enterprise and completely integrate manufacturing systems.
ETHERNET. A local area network used to connect computers, printers, workstations, and other devices
within the same building. Ethernet operates over twisted wire and coaxial cable.
EXTENDED SUPERFRAME FORMAT. A T1 format that provides a method for easily retrieving diagnostic
information.
FAT CLIENT. A computer that includes an operating system, RAM, ROM, a powerful processor and a wide
range of installed applications that can execute either on the desktop or on the server to which it is
connected. Fat clients can operate in a server-based computing environment or in a stand-alone fashion.
FAULT TOLERANCE. A design method that incorporates redundant system elements to ensure continued
systems operation in the event of the failure of any individual element.
FDDI. Fiber Distributed Data Interface. A standard for transmitting data on optical-fiber cables at a rate of
about 100 Mbps.
FRAME. The basic logical unit in which bit-oriented data is transmitted. The frame consists of the data bits
surrounded by a flag at each end that indicates the beginning and end of the frame. A primary rate can be
thought of as an endless sequence of frames.
FRAME RELAY. A high-speed packet switching protocol popular in networks, including WANs, LANs, and
LAN-to-LAN connections across long distances.
GBPS. Gigabits per second, a measurement of data transmission speed expressed in billions of bits per
second.
HOSTED OUTSOURCING. Complete outsourcing of a company's information technology applications and
associated hardware systems to an ASP.
HOSTING PROVIDER. Provider who operates data center facilities for general-purpose server hosting and
collocation.
INFRASTRUCTURE ISV. An independent software vendor that develops infrastructure software to support
the hosting and online delivery of applications.
INTEGRATED SERVICES DIGITAL NETWORK (ISDN). An information transfer standard for transmitting digital voice
and data over telephone lines at speeds up to 128 Kbps.
INTEGRATION. Assembling equipment, systems, or subsystems into a network with a specific function or
task. Integration combines equipment and systems toward a common objective, with easy monitoring
and/or command execution. Executing an integration takes three disciplines: 1) hardware, 2)
software, and 3) connectivity – transmission media (data link layer) and interfacing components. All
three aspects of integration have to be understood to make two or more pieces of equipment or
subsystems support the common objective.
INTER-EXCHANGE CARRIER (IXC). A telecommunications company that provides telecommunication services
between local exchanges on an interstate or intrastate basis.
INTERNET SERVICE PROVIDER (ISP). A company that provides access to the Internet for users and businesses.
INDEPENDENT SOFTWARE VENDOR (ISV). A company that is not a part of a computer systems manufacturer
that develops software applications.
INTERNETWORKING. Sharing data and resources from one network to another.
IT SERVICE PROVIDER. Traditional IT services businesses, including IT outsourcers, systems integrators, IT
consultancies and value added resellers.
KILOBITS PER SECOND (KBPS). A data transmission rate of 1,000 bits per second.
LEASED LINE. A telecommunications line dedicated to a particular customer along predetermined routes.
LOCAL ACCESS TRANSPORT AREA (LATA). One of approximately 164 geographical areas within which local
operating companies connect all local calls and route all long-distance calls to the customer's inter-
exchange carrier.
LOCAL EXCHANGE CARRIER (LEC). A telecommunications company that provides telecommunication services
in a defined geographic area.
LOCAL LOOP. The wires that connect an individual subscriber's telephone or data connection to the
telephone company central office or other local terminating point.
LOCAL/REGIONAL ASP. A company that delivers a range of application services, and often the complete
computing needs, of smaller businesses in their local geographic area.
MEGABITS PER SECOND (MBPS). A data transmission rate of 1,000,000 bits (1,000 kilobits) per second.
METAFRAME. The world's first server-based computing software for Microsoft Windows NT 4.0 Server,
Terminal Server Edition multi-user software (co-developed by Citrix).
MODEM. A device for converting digital signals to analog and vice versa, for data transmission over an
analog telephone line.
MULTIPLEXING. The combining of multiple data channels onto a single transmission medium. Sharing a
circuit - normally dedicated to a single user - between multiple users.
MULTI-USER. The ability for multiple concurrent users to log on and run applications on a single server.
NET-BASED ISV. An ISV whose main business is developing software for Internet-based application services.
This includes vendors who deliver their own applications online, either directly to users or via other
service providers.
NETWORK ACCESS POINT (NAP). A location where ISPs exchange traffic.
NETWORK COMPUTER (NC). A thin-client hardware device that executes applications locally by downloading
them from the network. NCs adhere to a specification jointly developed by Sun, IBM, Oracle, Apple and
Netscape. They typically run Java applets within a Java browser, or Java applications within the Java
Virtual Machine.
NETWORK COMPUTING ARCHITECTURE. A computing architecture in which components are dynamically
downloaded from the network onto the client device for execution by the client. The Java programming
language is at the core of network computing.
ONLINE ANALYTICAL PROCESSING (OLAP). Software that enables decision support via rapid queries to large
databases that store corporate data in multidimensional hierarchies and views.
OPERATIONAL RESOURCE PROVIDER. Operational resources are external business services that an ASP might
use as part of its own infrastructure, such as helpdesk, technical support, financing, or billing and payment
collection.
OUTSOURCING. The transfer of components or large segments of an organization's internal IT infrastructure,
staff, processes or applications to an external resource such as an ASP.
PACKAGED SOFTWARE APPLICATION. A computer program developed for sale to consumers or businesses,
generally designed to appeal to more than a single customer. While some tailoring of the program may be
possible, it is not intended to be custom-designed for each user or organization.
PACKET. A bundle of data organized for transmission, containing control information (destination, length,
origin, etc.), the data itself, and error detection and correction bits.
PACKET SWITCHING. A network in which messages are transmitted as packets over any available route rather
than as sequential messages over circuit-switched or dedicated facilities.
PEERING. The commercial practice under which nationwide ISPs exchange traffic without the payment of
settlement charges.
PERFORMANCE. A major factor in determining the overall productivity of a system, performance is primarily
tied to availability, throughput and response time.
PERMANENT VIRTUAL CIRCUIT (PVC). A PVC connects the customer's port connections, nodes, locations, and
branches. All customer ports can be connected, resembling a mesh, but PVCs usually run between the
host and branch locations.
POINT OF PRESENCE (POP). A telecommunications facility through which the company provides local
connectivity to its customers.
PORTAL. A company whose primary business is operating a Web destination site, hosting content and
applications for access via the Web.
REMOTE ACCESS. Connection of a remote computing device via communications lines such as ordinary
phone lines or wide area networks to access distant network applications and information.
REMOTE PRESENTATION SERVICES PROTOCOL. A set of rules and procedures for exchanging data between
computers on a network, enabling the user interface, keystrokes, and mouse movements to be
transferred between a server and client.
RESELLER/VAR. An intermediary between software and hardware producers and end users. Resellers
frequently add value (thus Value-Added Reseller) by performing consulting, system integration and
product enhancement.
ROUTER. A communications device between networks that determines the best path for optimal
performance. Routers are used in complex networks of networks such as enterprise-wide networks and
the Internet.
SCALABILITY. The ability to expand the number of users or increase the capabilities of a computing solution
without making major changes to the systems or application software.
SERVER. The computer on a local area network that often acts as a data and application repository and that
controls an application's access to workstations, printers and other parts of the network.
SERVER-BASED COMPUTING. A server-based approach to delivering business-critical applications to end-user
devices, whereby an application's logic executes on the server and only the user interface is transmitted
across a network to the client. Benefits include single-point management, universal application access,
bandwidth-independent performance, and improved security for business applications.
SINGLE-POINT CONTROL. One of the benefits of the ASP model, single-point control helps reduce the total
cost of application ownership by enabling widely used applications and data to be deployed, managed
and supported at one location. Single-point control enables application installations, updates and
additions to be made once, on the server, which are then instantly available to users anywhere.
SPECIALIST ASP. Provides applications that serve a specific professional or business activity, such as
customer relationship management, human resources or Web site services.
SYSTEMS MANUFACTURER. Manufacturer of servers, networking and client devices.
TELECOMS PROVIDER. Traditional and new-age telecommunications network providers (telcos).
THIN CLIENT. A low-cost computing device that accesses applications and/or data from a central server
over a network. Categories of thin clients include Windows-Based Terminals (WBT, which comprise the
largest segment), X-Terminals, and Network Computers (NC).
TOTAL COST OF OWNERSHIP (TCO). Model that helps IT professionals understand and manage the budgeted
(direct) and unbudgeted (indirect) costs incurred for acquiring, maintaining and using an application or a
computing system. TCO normally includes training, upgrades, and administration as well as the purchase
price. Lowering TCO through single-point control is a key benefit of server-based computing.
TOTAL SECURITY ARCHITECTURE (TSA). A comprehensive, end-to-end architecture that protects the network.
TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP). A suite of network protocols that allow
computers with different architectures and operating system software to communicate over the Internet.
USER INTERFACE. The part of an application that the end user sees on the screen and works with to operate
the application, such as menus, forms and buttons.
VERTICAL MARKET ASP. Provides solutions tailored to the needs of a specific industry, such as the healthcare
industry.
VIRTUAL PRIVATE NETWORK (VPN). A secure, encrypted private connection across a cloud network, such as
the Internet.
WEB HOSTING. Placing a consumer's or organization's web page or web site on a server that can be
accessed via the Internet.
WIDE AREA NETWORK. Local area networks linked together across a large geographic area.
WINDOWS-BASED TERMINAL (WBT). Thin clients with the lowest cost of ownership, as there are no local
applications running on the device. Standards are based on Microsoft's WBT specification developed in
conjunction with Wyse Technology, NCD, and other thin client companies.