Oracle Communications Convergent Charging and Policy Solution Benchmark on Oracle SuperCluster M7
Oracle White Paper | December 2016
1 | ORACLE COMMUNICATIONS CONVERGENT CHARGING AND POLICY SOLUTION BENCHMARK ON ORACLE SUPERCLUSTER M7
Introduction
The rise in smartphone and tablet usage—coupled with increasing data speeds and network
technology evolution, both for mobile and fixed connections—is contributing to an exponential increase
in the consumption of data services, placing greater demands on communications service providers
(CSPs) to effectively monetize the “digital experience.” The Oracle Communications Convergent
Charging and Policy Solution enables CSPs to combine business and network policies to rapidly
launch innovative offers and empower customers to personalize and control their usage experience,
accelerating service monetization at a predictable cost of ownership.
The Oracle Communications Performance Engineering Group conducted a comprehensive test to
demonstrate the extreme performance, robust carrier-grade capabilities, and outstanding
price/performance value of the solution’s charging capabilities on Oracle SuperCluster M7 (see Figure
1). Realistic test scenarios included large customer profiles and long data session management
alongside full invoicing and bill generation.
Figure 1. Oracle Communications Convergent Charging and Policy Solution
Oracle SuperCluster M7 is the world’s fastest engineered system, delivering incredible performance
under a wide range of workloads ranging from traditional enterprise resource planning (ERP), to
customer relationship management (CRM) and data warehouses, to e-commerce, mobile applications,
and real-time analytics. Equally importantly, it is extremely cost-effective because of its low purchase
price and the ease with which it can be deployed, scaled, managed, and maintained.
The test scenarios modeled 7 million subscribers generating continual data sessions with concurrent online and offline charging traffic. Oracle achieved an average end-to-end online charging latency of 7 milliseconds at a combined throughput of 14,000 online and offline operations per second on a partial Oracle SuperCluster M7.
Performance Test Description
System Configuration
This performance test focused on the end-to-end performance of the solution architecture applied to online and
offline charging models. Figure 2 illustrates the logical call flows and data transfers between each architectural
component.
Figure 2. Logical architecture for testing online charging and offline charging capabilities
Data Composition
On Oracle Communications Billing and Revenue Management (Oracle Communications BRM), 7 million subscriber accounts were provisioned using a combination of account types to test various business scenarios. For billing and invoicing tests, the accounts were provisioned with a mixture of 500, 1,000, and 1,500 usage events per account. Data was created in a single schema holding all 7 million subscribers, with varied large customer profiles (13 KB, 20 KB, and 34 KB in size) spanning many services.
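The provisioning mix described above can be sketched as follows. The event counts and profile sizes come from the text; the uniform distribution and the helper name are assumptions for illustration, since the paper does not state the exact proportions.

```python
import random

# Usage-event counts and profile sizes are taken from the text above;
# the uniform mix is an assumption for illustration.
EVENTS_PER_ACCOUNT = [500, 1000, 1500]   # usage events per account
PROFILE_SIZES_KB = [13, 20, 34]          # customer profile sizes

def provision_accounts(n, seed=42):
    """Return (events_per_account, profile_kb) tuples for n accounts."""
    rng = random.Random(seed)
    return [(rng.choice(EVENTS_PER_ACCOUNT), rng.choice(PROFILE_SIZES_KB))
            for _ in range(n)]

accounts = provision_accounts(1000)
print(len(accounts))  # 1000
```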
Hardware Platform
Oracle Communications Convergent Charging and Policy Solution was hosted on Oracle SuperCluster M7 for
extreme performance, massive scalability, and maximum availability. Oracle SuperCluster M7 is a ready-to-deploy
secure cloud infrastructure for both databases and applications. It is an engineered system that combines compute,
networking, and storage hardware with virtualization, operating system, and management software into a single
system that is extremely easy to deploy, secure, manage and maintain. Oracle SuperCluster M7 features the
industry’s most advanced security, incorporating a number of unique runtime security technologies, documented and
tested system-wide security controls and best practices, and integrated automated compliance verification tools.
Figure 3. Oracle SuperCluster M7
Oracle SuperCluster M7 is built on the fastest and most advanced server with the world’s fastest microprocessor,
the fastest database storage, a fast networking and operating system combination, and unique capabilities for
securing application data, accelerating databases, and running Java applications.
» The SPARC M7 high-performance microprocessor is the world’s fastest microprocessor for general-purpose
computing and integrates additional performance enhancements for cryptographic acceleration and Oracle
Database 12c directly into the processor design.
» SPARC M7 In-Line Decompression allows Oracle Database 12c to store databases many times larger than
the physical memory in the system entirely in memory in a highly compressed format using dedicated functions in
the microprocessor itself and frees valuable general-purpose compute cores for SQL processing.
» SPARC M7 In-Memory Query Acceleration for Oracle Database In-Memory in Oracle Database 12c drives
simultaneous real-time analytics and transaction processing performance up to 9x better than x86 or IBM Power
systems.
» Oracle Exadata Storage Server, coengineered with Oracle Database, delivers the optimal balance of scalability,
transaction processing, and batch performance for all Oracle Database workloads.
» Oracle’s InfiniBand fabric is the low-latency, high-throughput I/O fabric that ties all of the Oracle SuperCluster
system components together, making it possible to horizontally scale the Oracle SuperCluster system.
Server Virtualization
Oracle VM Server for SPARC is a free virtualization technology that is integrated with Oracle SuperCluster M7. In
addition, Oracle Solaris 11 enables no-compromise virtualization, allowing enterprise workloads to be run within a
virtual environment at no performance cost, as if they were run in a bare-metal environment. Oracle VM Server for
SPARC logical domains (LDoms) and physical domains (PDoms) on Oracle’s high-end systems, such as the Oracle
SuperCluster M7 system used in these tests, provide a feature-rich environment to suit every workload while
providing extreme administrative efficiency. In addition, Oracle VM Server for SPARC is recognized as a license
boundary by most enterprise software vendors, leading to significant cost savings.
The deployed architecture shown in Figure 4 was configured as follows:
» Four PDoms per Oracle SuperCluster M7 with a total of 11 LDoms
» The following for each PDom:
» 1024 GB of RAM
» Four one-socket 4.133 GHz Oracle SPARC M7 processors, each with 32 cores and 8 strands per core, for a
total of 1,024 virtual CPUs
» The following I/O domains were configured:
» Chassis 0
PDOM0: one I/O domain with 22 cores and 336 GB RAM, one I/O domain with 12 cores and 144 GB RAM
PDOM1: one I/O domain with 22 cores and 336 GB RAM, one I/O domain with 22 cores and 368 GB RAM
» Chassis 1
PDOM0: one I/O domain with 22 cores and 336 GB RAM, one I/O domain with 22 cores and 368 GB RAM
PDOM1: one I/O domain with 22 cores and 336 GB RAM, one I/O domain with 22 cores and 368 GB RAM
A three-node Oracle Real Application Clusters (Oracle RAC) database was configured using d1 from
Chassis 0-PDOM0, Chassis 0-PDOM1, and Chassis 1-PDOM0 and used three Oracle Exadata Storage
Servers (also called storage cells). The other three available storage cells were assigned to another project
and isolated, demonstrating the secure multitenancy environment of Oracle SuperCluster M7.
» Three database LDoms with 20 cores and 256 GB of RAM were configured with the following:
» Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 with Oracle Automatic Storage Management and
Oracle RAC
» Oracle Solaris 11.3
» Eight LDoms were created to host applications in Oracle Solaris Zones:
» The global zone ran Oracle Solaris 11.3 and was allocated two cores and 16 GB RAM
» Seven application zones were Oracle Solaris 10 branded zones, each allocated 20 cores
» One application zone was an Oracle Solaris 10 branded zone and was allocated 10 cores
» Four application LDoms with 22 cores and 336 GB of RAM were configured with the following:
» Three Oracle Communications BRM and Oracle Communications Elastic Charging Engine (Oracle
Communications ECE) 11.3.0.0.0 server nodes
» Coresident on each LDom with Oracle Communications ECE: Oracle NoSQL Database 3.5.2 in a
fault-tolerant 3x3 high-availability configuration
» Coresident on one LDom: Oracle WebLogic Server 10.3.6 and Oracle Communications Pricing Design
Center (Oracle Communications PDC) 11.1.0.7
» Coresident on one LDom: The Customer Updater component of Oracle Communications ECE
» One LDom running the Diameter Gateway (DGW) component of Oracle Communications ECE
» Three application LDoms with 22 cores and a total of 368 GB of RAM were configured with the following:
» One LDom running Oracle Communications BRM 7.5.0.15.0 coresident with Real-time Transport Protocol
(RTP) and the Oracle Communications BRM External Manager (EM) Gateway
» Two LDoms configured with Oracle Communications BRM were provisioned as spares
» One application LDom with 12 cores and 144 GB of RAM ran Oracle Communications Offline Mediation
Controller 6.0.0.3.3
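The per-PDom virtual CPU count in the configuration above follows directly from the processor topology; a quick arithmetic check, with all inputs taken from the configuration listing:

```python
# Each PDom has 4 one-socket SPARC M7 processors; each processor has
# 32 cores, and each core runs 8 hardware strands (threads).
sockets_per_pdom = 4
cores_per_socket = 32
strands_per_core = 8

vcpus_per_pdom = sockets_per_pdom * cores_per_socket * strands_per_core
print(vcpus_per_pdom)  # 1024, matching the 1,024 virtual CPUs stated above
```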
Figure 4. Oracle SuperCluster M7 deployment architecture
Software Inventory
TABLE 1. SOFTWARE

Runtime Products
» Groovy 2.3.9
» Java Development Kit 1.7.0_80 (32- and 64-bit); 1.8.0_65 (64-bit)

Middleware Products
» Oracle WebLogic Server 10.3.6
» Oracle Application Development Framework runtime 11.1.1.6

Database Products
» Oracle Coherence 12.2.1.0.2
» Oracle NoSQL Database 3.5.2
» Oracle Database JDBC driver 7
» Oracle Database 12c Enterprise Edition Release 12.1.0.2.0, 64-bit production
» Oracle Database 11g 11.2.0.3.0 client, 64-bit
» Oracle Database 11.2.0.1.0 client, 32-bit

RMS Products
» Oracle Communications Pricing Design Center 11.1.0.7
» Oracle Communications Offline Mediation Controller 6.0.0.3.3
» Oracle Communications BRM Elastic Charging Engine 11.3.0.0.0
» Oracle Communications Billing and Revenue Management 7.5.0.15.0

OS Products
» Oracle Solaris 11.3 SPARC 64-bit (database)
» Oracle Solaris 10 1/13 s10 SPARC 64-bit (application)
Performance Results
Industry-Leading Charging Performance and Scalability
Figure 5 shows the total provisioned compute capacity (as measured by the number of CPU cores) for a deployment
scenario of 7 million subscribers. An approximation of the maximum average percentage utilization for charging for
online and offline traffic as well as billing or invoicing is shown as a proportion of the total core count.
Figure 5. Compute capacity and utilization for 7 million subscribers
Realistic Traffic Workload
Online and offline data traffic was generated for 7 million subscribers. Oracle Communications Offline Mediation
Controller generated offline traffic at a rate of 2,500 operations per second and the open source Seagull traffic
generator generated 11,500 operations per second of online traffic. Traffic was generated at an overall rate of 7.2
operations per subscriber per hour. Longer data sessions were maintained for all subscribers with mid-session call
detail record (CDR) generation every 30 minutes. Over 50 million operations were executed in an hour.
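The workload figures above are mutually consistent; a quick arithmetic check, with all inputs taken from the text:

```python
# All inputs come from the workload description above.
subscribers = 7_000_000
online_ops_per_sec = 11_500    # Seagull online traffic
offline_ops_per_sec = 2_500    # Offline Mediation Controller traffic

total_ops_per_sec = online_ops_per_sec + offline_ops_per_sec
ops_per_hour = total_ops_per_sec * 3600
ops_per_subscriber_per_hour = ops_per_hour / subscribers

print(total_ops_per_sec)            # 14000
print(ops_per_hour)                 # 50400000, i.e. over 50 million per hour
print(ops_per_subscriber_per_hour)  # 7.2
```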
Rated Event Loader (REL)
Rated events were extracted from Oracle NoSQL Database by the Rated Event Formatter (REF) process. These
were then loaded into the Oracle Communications BRM database by the REL process using parallel threads. On the
Oracle Communications BRM side, REL performance was measured at throughput rates of up to 11,000 CDRs
per second on a single schema. In the test, the REL was started after a large backlog of CDRs had accumulated.
TABLE 2. REL THROUGHPUT

REL Test | Number of Files | Total Number of CDRs | Average CDRs/File | Throughput (CDRs/Second) | Details
Test 1 | 2,266 | 13,686,796 | 6,040 | 11,028 | On an empty EVENT_T partition
Test 2 | 5,763 | 24,477,845 | 7,319 | 4,861 | On a loaded EVENT_T partition with 40 million rows
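A figure implied but not stated by Table 2 is how long each test took to drain its CDR backlog at the measured throughput; the durations below are computed here from the table's totals:

```python
# Derived from Table 2: total CDRs divided by sustained throughput.
def drain_minutes(total_cdrs, cdrs_per_second):
    """Minutes needed to load total_cdrs at a sustained rate."""
    return total_cdrs / cdrs_per_second / 60

test1_minutes = drain_minutes(13_686_796, 11_028)  # empty EVENT_T partition
test2_minutes = drain_minutes(24_477_845, 4_861)   # loaded EVENT_T partition

print(round(test1_minutes, 1))  # ~20.7 minutes
print(round(test2_minutes, 1))  # ~83.9 minutes
```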
Exceptional Performance with Low Response Times
In a series of traffic tests, the reported response times demonstrate that Oracle Communications
Convergent Charging and Policy Solution was able to deliver and sustain the required performance levels for a very
large subscriber deployment. The Oracle Communications BRM and Oracle Communications ECE charging
response times are reported separately from the end-to-end response times, which represent the complete
processing flow for online and offline traffic. A sampling of the traffic activity during the three-hour test run showed
that a total of 134,926,594 operations had been completed.
In Figure 6, for online charging at a rate of 11,500 operations per second, the Seagull response times represent the
end-to-end latency on the Seagull traffic generator. These response times averaged 6 ms. The DGW response
times represent the latency measured on the DGW (which provides the service that routes traffic from Seagull to
Oracle Communications ECE). Meanwhile, the Oracle Communications ECE client response times were also
measured on the DGW and were defined as the round-trip time from the DGW to Oracle Communications
ECE, excluding the DGW processing time. The Oracle Communications ECE server response times were measured
on Oracle Communications ECE as it performed the charging operation.
Figure 6. Percentiles of response times for charging of online traffic
In Figure 7, for online and offline charging running concurrently at a total workload of 14,000
operations per second, each data series has the same definition as in Figure 6. The reported average
end-to-end latency for Seagull traffic was 7 ms.
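The averages and percentiles reported in these figures can be reduced from raw per-request latency measurements as sketched below. The synthetic sample and helper names are assumptions for illustration; this is not the benchmark's actual measurement data or tooling.

```python
import random
import statistics

def summarize(latencies_ms):
    """Reduce a list of per-request latencies to average and percentiles."""
    ordered = sorted(latencies_ms)
    def pct(p):
        return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]
    return {
        "avg": statistics.fmean(ordered),
        "p50": pct(50),
        "p95": pct(95),
        "p99": pct(99),
    }

# Synthetic right-skewed sample with a ~7 ms mean, for illustration only.
rng = random.Random(1)
sample = [rng.gammavariate(4, 1.75) for _ in range(10_000)]
summary = summarize(sample)
print({k: round(v, 1) for k, v in summary.items()})
```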
Figure 7. Percentiles of response times for charging online and offline traffic
In Figure 8, for offline charging at a rate of 2,500 operations per second, the client response times were the latencies
observed on Oracle Communications Offline Mediation Controller. The server response times were observed on
Oracle Communications ECE. The average latency observed was 4 ms on the client and 3 ms on the server.
Figure 8. Percentiles of response times for charging of offline traffic
High Volume Billing and Invoicing Throughput
Billing and invoicing tests were conducted separately, each utilizing 40 processing threads. The
number of events per subscriber was varied to profile the throughput and CPU utilization for different scenarios. The
Oracle Communications BRM database was fully populated (for example, there were about 7 billion rows in the
EVENT_BAL_IMPACTS_T table).
To help make the size of the database more manageable for processing, the Oracle Partitioning option for Oracle
Database was used extensively. Partition pruning optimizations to the SQL execution plan were employed to
improve performance. Reducing the amount of data that needed to be scanned helped ensure that the Exadata
Smart Flash Cache serviced the majority of the disk reads. The total size of this cache increases with each
incremental Oracle Exadata Storage Server. The large cache provided by three Oracle Exadata Storage Servers
that are integral components of Oracle SuperCluster M7 was used.
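The effect of partition pruning described above can be sketched conceptually: when a query predicate names the partition key, only the matching partitions are scanned rather than the whole table. The partition layout and helper names below are assumptions for illustration; they are not the benchmark's actual schema.

```python
# Hypothetical table partitioned by event month; each partition holds rows.
partitions = {
    "2016_10": list(range(0, 100)),
    "2016_11": list(range(100, 200)),
    "2016_12": list(range(200, 300)),
}

def scan_all(pred):
    """Full scan: every partition is read."""
    return [r for rows in partitions.values() for r in rows if pred(r)]

def scan_pruned(month, pred):
    """Pruned scan: only the partition for the requested month is read."""
    return [r for r in partitions[month] if pred(r)]

is_even = lambda r: r % 2 == 0
full = scan_all(lambda r: 100 <= r < 200 and is_even(r))
pruned = scan_pruned("2016_11", is_even)
assert full == pruned  # same result, one third of the data scanned
```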
In one test scenario with a billing cycle size of 100,000 accounts, each with 500 events per account, Oracle
Communications BRM achieved, on a per-schema basis, 120 bills per second with complex deferred taxation and
161 detailed invoices per second, as shown in Figures 9 and 10. Billing and invoicing are very
resource-intensive on storage because they must process vast amounts of data. A very high throughput for billing
and invoicing was achieved by utilizing the Oracle Exadata Storage Servers that are integral components of Oracle
SuperCluster M7. With this increase in storage and I/O operations per second (IOPS) capacity, these processes
generated higher throughput. I/O, rather than CPU or processing contention, was the limitation in the tests.
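At the per-schema rates above, the time to process the whole billing cycle follows directly; the durations below are computed here from the stated figures:

```python
# All inputs come from the billing scenario described above.
accounts_per_cycle = 100_000
bills_per_second = 120     # per schema, with complex deferred taxation
invoices_per_second = 161  # per schema, detailed invoices

billing_minutes = accounts_per_cycle / bills_per_second / 60
invoicing_minutes = accounts_per_cycle / invoices_per_second / 60

print(round(billing_minutes, 1))    # ~13.9 minutes per billing cycle
print(round(invoicing_minutes, 1))  # ~10.4 minutes per invoicing run
```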
Figure 9. Billing throughput
Figure 10. Invoicing throughput
Conclusion
This comprehensive set of performance tests demonstrates that Oracle Communications
Convergent Charging and Policy Solution can deliver unprecedented performance and scalability
for service providers looking to monetize, control, and manage revenue for their communications
services, and it is very well complemented by the Oracle SuperCluster M7 system’s incredible
performance and ability to consolidate database and applications.
Oracle Corporation, World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065, USA
Worldwide Inquiries
Phone: +1.650.506.7000
Fax: +1.650.506.7200
Copyright © 2016, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. Oracle Communications Convergent Charging and Policy Solution Benchmark on Oracle SuperCluster M7 December 2016
C O N N E C T W I T H U S
blogs.oracle.com/oracle
facebook.com/oracle
twitter.com/oracle
oracle.com