
Jagger Industrial-Strength Performance Testing

Date post: 25-Feb-2016
Upload: gavril
Page 1: Jagger Industrial-Strength Performance Testing

Jagger

Industrial-Strength Performance Testing

Page 2: Jagger Industrial-Strength Performance Testing

Preface

Page 3: Jagger Industrial-Strength Performance Testing

Performance Testing Tool → System Under Test → Report

What’s wrong with this schema?

Page 4: Jagger Industrial-Strength Performance Testing

Performance Testing Tool → System Under Test → Report

What’s wrong with this schema?

Nothing. But it is not easy to make it work correctly.

Page 5: Jagger Industrial-Strength Performance Testing

Typical Story :: 1
Delivery Manager: The new version goes live soon. We need to check that performance meets the SLA.

2 days passed

Performance Tester: Performance degraded in the new release compared with the previous one. Please investigate.
Technical Lead: Let's review the changes...

5 hours passed

Technical Lead: Joe, one of your commits looks suspicious.
Developer: I need to check. I'll run it locally under a profiler.

Developer: I found non-optimal code and fixed it. The profiler shows a 150% increase in performance. Please retest.

3 days passed

2 days passed

Performance Tester: Performance increased by 10%, but it's still worse than in the previous release.
Delivery Manager: We can't wait any more. Our CPU utilization in production is only 50%, let's go with what we have.

Performance testing should be continuous. If performance degrades, alerts should be raised. If something is wrong with performance, it should be easy to identify the cause and reproduce the problem.

Page 6: Jagger Industrial-Strength Performance Testing

Typical Story :: 2
Operations: We deployed the new release into production and it passed all smoke tests. Responses became incorrect when we switched the load balancer to it.
QA Lead: We tested all this functionality.
Performance Test Lead: We tested that the system has acceptable throughput under workload; there were no failed transactions and no memory leaks.

It is not enough to test that the system works under workload. Performance tests should also check that it works correctly.

Technical Lead: We investigated this and it turned out that one singleton unit is not thread-safe.

2 days passed

Page 7: Jagger Industrial-Strength Performance Testing

Typical Story :: 3

A Performance Testing Tool alone is not enough for large projects. Such projects need a comprehensive performance testing process and a Performance Testing Server that tracks all results.

Performance Tester: Performance degraded in the last release candidate. What happened?
Technical Lead: We merged branches A and B to the trunk for this release. And a couple of fixes from branch C. What exactly impacted performance?
Performance Tester: I never heard about branch B. We tested A, but those tests are incomparable with the current tests for trunk. And I can't find the last report for branch C. What a mess…

Page 8: Jagger Industrial-Strength Performance Testing

Typical Story :: 4

Robustness and failover should be tested not only in production. Simulation of maintenance operations and failures should be a part of non-functional testing.

VP of Engineering: The production system failed this morning. What happened?
Technical Lead: It looks like somebody restarted two nodes simultaneously. After that, synchronization failed and the whole cluster went down.
Operations: We noticed that two nodes consumed too much memory and restarted them.
VP of Engineering: Did you test this scenario in the pre-prod environment?
QA Lead: We tested a similar scenario two months ago, but this is a complex manual test; we can't do it continuously.

Page 9: Jagger Industrial-Strength Performance Testing

Jagger Overview

Page 10: Jagger Industrial-Strength Performance Testing

Principles of Performance Testing
Continuous: the Performance Testing Server automatically tests all builds and raises alerts in case of performance degradation

Transparent: Modules can be tested in isolation. Results of performance testing include monitoring and profiling information

Traceable: There is a master database that stores all testing results and provides the ability to browse and compare them

Validated: Performance testing provides some level of guarantee that system behavior under workload is functionally correct

Page 11: Jagger Industrial-Strength Performance Testing

Jagger Applicability

Besides the mentioned concepts, Jagger is designed to support all standard types of performance testing:

Load Testing: a number of workload generation strategies and comprehensive reporting and statistics

Stress Testing: distributed workload generators can create very high workload

Spike Testing: workload can be specified as a function of time, hence spikes can be modeled

Endurance Testing: distributed storage can handle data collected during long-running tests

Isolation Testing: a library of primitives for remote service invocation facilitates isolated component testing

Page 12: Jagger Industrial-Strength Performance Testing

Bird’s Eye View on Jagger

[Diagram: the Master coordinates a farm of Kernels over "Coordination and Data Retrieval"; Kernels reach the Systems Under Test via "Remote Invocation and Telemetry"; an Agent with a Monitoring and Profiler module runs next to each System Under Test.]

A Jagger deployment typically consists of three types of units: Master, Kernel, and Agent. The minimal deployment is the so-called local mode that runs Master and Kernel in one JVM. At the other extreme, the Master is deployed on a dedicated machine; the user feeds it with a test configuration and receives reports from it. The Master coordinates a farm of Kernels that generate workload and collect telemetry from Agents. Agents are optionally deployed on the boxes with the System Under Test, and each Agent is able to monitor multiple SuTs on one box. In the next section we will discuss Jagger from the user perspective, and after that we will come back to its architecture and the technologies underneath it.

The Kernel is responsible for workload generation, data storage, and polling of Agents.

The Master is responsible for real-time coordination of Kernels and for data processing. The Master can host the Coordination service and the master DB with final results. Besides this, the Master hosts the catalog of the distributed data storage. The master DB can be deployed separately.

Page 13: Jagger Industrial-Strength Performance Testing

Features

Page 14: Jagger Industrial-Strength Performance Testing

Automation :: Continuous Testing
All Jagger flows are completely automatic and can be launched from within Hudson, Maven, and almost any other CI or build system

All final results are persisted to an RDBMS, hence they can be browsed using a variety of tools and the history can be traced

Page 15: Jagger Industrial-Strength Performance Testing

Automation :: Automated Deployment
Jagger is integrated with Jenkins via a plug-in

Deployment & Configuration (Master, Kernels, Agents)

Page 16: Jagger Industrial-Strength Performance Testing

Automation :: Configurable

Easy configuration

Jagger offers an advanced configuration system. It can be briefly described as follows. One can split the configuration into an arbitrary set of XML and properties files; some of these files can be shared between different test configurations or environments, and some can be test- or environment-specific. Each test specification has a root properties file that contains path masks of the other pieces. Elements of the configuration can be located in different folders or on different disks. Jagger automatically discovers all these pieces, applies property overriding and substitution rules, and assembles the final configuration that specifies both the test scenarios and the configuration of Jagger core services.
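For illustration, a root properties file with path masks could look like the sketch below. All key names and paths here are hypothetical, chosen only to show the layering idea; they are not Jagger's actual configuration keys:

```properties
# root test specification (illustrative keys, not Jagger's real ones)
# shared pieces, reused across environments
config.include.common=../common/*.xml, ../common/defaults.properties
# environment-specific overrides, applied on top of the common pieces
config.include.env=env/staging/*.properties
# test-specific scenario definitions
config.include.tests=scenarios/http-*.xml
```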

<configuration id="jaggerCustomConfig">
    ...
</configuration>

<test-description id="altai-http-sleep-15">
    <info-collectors>
        <validator xsi:type="validator-not-null-response"/>
    </info-collectors>
    <scenario xsi:type="scenario-ref" ref="scenario-altai-sleep-15"/>
</test-description>

A sample is available as a Maven archetype

Page 17: Jagger Industrial-Strength Performance Testing

Automation :: Decision Makers

Testing results for each test individually and for the entire session are routed to decision makers. This makes it possible to map the entire session to a single status that clearly indicates whether the results are acceptable. Jagger is bundled with configurable decision makers, but one can easily write a custom decision maker in Java, Groovy, or JRuby.
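A custom decision maker can be as simple as a threshold check on a measured metric. The sketch below is only illustrative: the class, enum, and method names are assumptions, not Jagger's actual SPI.

```java
// Illustrative decision-maker sketch; names are hypothetical,
// not Jagger's actual SPI.
public class ThresholdDecisionMaker {
    public enum Status { OK, WARNING, FATAL }

    private final double warnMs;
    private final double failMs;

    public ThresholdDecisionMaker(double warnMs, double failMs) {
        this.warnMs = warnMs;
        this.failMs = failMs;
    }

    // Map a measured mean latency to a single session status.
    public Status decide(double meanLatencyMs) {
        if (meanLatencyMs >= failMs) return Status.FATAL;
        if (meanLatencyMs >= warnMs) return Status.WARNING;
        return Status.OK;
    }

    public static void main(String[] args) {
        ThresholdDecisionMaker dm = new ThresholdDecisionMaker(200, 500);
        System.out.println(dm.decide(120));  // OK
        System.out.println(dm.decide(350));  // WARNING
    }
}
```

A real decision maker would combine several such checks (latency, error rate, throughput) into one verdict for the session.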

A pluggable system of decision makers makes it possible to present testing statuses compactly and reduces the human effort of report inspection

Page 18: Jagger Industrial-Strength Performance Testing

Reporting :: Time Series and Statistics

Jagger collects and reports comprehensive statistical information for all tests. This information includes both total scalar values as well as time plots that provide insights into test dynamics.

Page 19: Jagger Industrial-Strength Performance Testing

[Diagram: Session 1 (Test 1 | 10 Users | …, Test 2 | 30 Users | …, Test 3 | 20 Users | …) and Session 2 (Test 2 | 30 Users | …, Test 3 | 20 Users | …, Test 5 | 10 Users | …) are fed to the Comparator and then to the Decision Maker.]

Reporting :: Session Comparison

Jagger is able to automatically compare the results of performance testing with a baseline. A baseline can be specified as the result of a previous testing session (say, the previous release build) or as a manually created set of values. Results of the session comparison are routed to the decision maker, which transforms the numerical deviation from the baseline into a status (acceptable deviation, warning, etc.).
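The deviation-to-status mapping can be sketched as follows. The class, thresholds, and verdict names are illustrative assumptions, not Jagger's bundled comparator:

```java
// Illustrative baseline-comparison sketch; names and thresholds
// are hypothetical, not Jagger's actual comparator.
public class BaselineComparator {
    public enum Verdict { ACCEPTABLE, WARNING, DEGRADED }

    // Relative deviation of current throughput from the baseline:
    // negative means the current session is slower.
    public static double deviation(double baseline, double current) {
        return (current - baseline) / baseline;
    }

    // Map the deviation to a status: warn on a >5% drop, fail on >15%.
    public static Verdict verdict(double baseline, double current) {
        double d = deviation(baseline, current);
        if (d < -0.15) return Verdict.DEGRADED;
        if (d < -0.05) return Verdict.WARNING;
        return Verdict.ACCEPTABLE;
    }

    public static void main(String[] args) {
        System.out.println(verdict(1000, 980)); // 2% drop: ACCEPTABLE
        System.out.println(verdict(1000, 900)); // 10% drop: WARNING
    }
}
```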

Page 20: Jagger Industrial-Strength Performance Testing

Reporting :: Scalability Analysis

Reports contain plots that consolidate the results of several tests that differ in workload and, consequently, visualize system scalability:

Page 21: Jagger Industrial-Strength Performance Testing

Reporting :: Monitoring 1

Jagger is bundled with monitoring agents based on the cross-platform SIGAR monitoring library. SIGAR enables Jagger to collect a variety of system metrics; besides this, Jagger is able to poll JMX to collect JVM-specific parameters. Jagger is designed to support a large number of monitoring agents and to collect and store significant amounts of monitoring data. Core Jagger communicates with Agents via the Bayeux protocol, so there is no need for an additional open port on the Agent side. For example, both the system under test and the Agent can use port 80 and that's it.
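To give a feel for the JVM-specific side, the sketch below polls standard JMX beans from java.lang.management. This is plain platform API, not Jagger's actual agent code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;

// Minimal sketch of JVM telemetry an agent can poll via standard
// JMX beans (plain java.lang.management, not Jagger's agent).
public class JvmTelemetry {
    // Currently used heap memory, in bytes.
    public static long heapUsedBytes() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    // Number of processors visible to the JVM.
    public static int availableProcessors() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        return os.getAvailableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("heap used: " + heapUsedBytes() + " bytes");
        System.out.println("cpus: " + availableProcessors());
    }
}
```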

Jagger has an embedded monitoring system and provides comprehensive telemetry in the reports:

JVM Heap Memory
Network Utilization
CPU Utilization
System Memory

Jagger communicates with monitoring agents via the HTTP/Bayeux protocol, which is transparent to firewalls and port restrictions

Page 22: Jagger Industrial-Strength Performance Testing

Reporting :: Monitoring 2


Jagger can be used as a pure monitoring tool, separately from performance testing. One typical use case is monitoring of regular activities:

The Jagger monitoring system is not coupled with performance testing: one can use Jagger only for monitoring. Of course, Jagger is not a generic monitoring system like Cacti or Zabbix, but it can be used to track the performance or impact of regular activities and processes. For example, if some activity (a data backup, a heavy analytical query, etc.) is executed periodically, the activity controller can trigger both the activity and Jagger monitoring. In this case Jagger will collect telemetry for each activity execution, produce reports, and persist the collected data to its DB. This information can be used to track the performance of the activity and its impact on system workload.

[Diagram: the Activity Controller triggers both the Activity and the Jagger Agent; the Jagger Cluster produces a Report for the Performance Analyst.]

Page 23: Jagger Industrial-Strength Performance Testing

Reporting :: Flexible

Supports PDF and HTML output formats
Pluggable design and fully customizable report structure
The reporting system is based on JasperReports

[Diagram: Modules supply XML-defined report templates; the Extension Manager registers Report Templates and Report Providers, which are assembled as subreports into the Report Container.]

The reporting system is based on JasperReports, a well-known and mature reporting solution. All report templates (in XML format) are externalized and editable, so the operator is able to configure the report layout, change the look and feel, and include, exclude, or modify any sections. Jagger provides an Extension Management mechanism that allows registering new report templates and Report Providers that supply the templates with data. Report Providers can be written in Java, Groovy, or JRuby. As soon as an extension is registered, it can be included into the hierarchy of report containers as a subreport. Typically a report is generated after each test session, but Jagger also provides the ability to generate a report from DB data separately from testing.

Reports can be generated separately from testing

Page 24: Jagger Industrial-Strength Performance Testing

Reporting :: Web UI
Jagger has an interactive and detailed Web UI for reporting, which makes it easy to share performance results

Page 25: Jagger Industrial-Strength Performance Testing

Workload Generation

The workload generation strategy is pluggable. It can be:

Static (for example, a fixed number of virtual users)

Dynamic, controlled by feedback from Kernels (for example, increase throughput while response time stays below a threshold)

Two strategies are provided out of the box:

Virtual users with configurable behavior

Total throughput specified as an arbitrary function of time; in particular, this allows modeling workload spikes
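A throughput function of time can be as simple as a constant base rate with a spike window. The sketch below is illustrative; the class and the base rate and spike interval are invented for the example:

```java
// Sketch of throughput as a function of time: a constant base rate
// with a 5x spike between t=60s and t=90s. Names are illustrative.
public class SpikeWorkload {
    // Desired requests per second at time t (seconds since test start).
    public static double targetTps(double tSeconds) {
        double base = 100.0;
        if (tSeconds >= 60 && tSeconds < 90) {
            return base * 5; // 5x spike for 30 seconds
        }
        return base;
    }

    public static void main(String[] args) {
        System.out.println(targetTps(10)); // before the spike: 100.0
        System.out.println(targetTps(75)); // inside the spike: 500.0
    }
}
```

A workload generator would sample such a function each tick and adjust the number of in-flight invocations to match the target rate.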

Page 26: Jagger Industrial-Strength Performance Testing

User API :: Extensibility

[Diagram: a Session contains Tests; a Test consists of a WorkloadTask (Clock, Termination Strategy, and a Scenario with Query Pool, Load Balancer, and Invoker against the System Under Test, plus a Collectors Chain) and a MonitoringTask.]

A Session is the largest unit of work in Jagger. A Session is a list of Tests that are executed sequentially.

A Test is a set of tasks that are executed in parallel. From the Jagger core perspective, a task is an arbitrary process that has master-side and kernel-side parts. Both sides communicate via messages, and the Jagger core doesn't pose any restrictions on the task's nature.

The Clock defines the workload generation strategy.

The Termination Strategy defines the task stop condition. It can be a time limit, a total number of invocations, or something else.

The Scenario is responsible for interaction with the SuT. It holds the state of the communication and typically includes such units as a query pool or a load balancer.

The Invoker encapsulates the details of interaction with the SuT, typically an RPC protocol.

Tasks are able to emit events during execution; these events are processed and logged by Collectors.

Jagger has an extensible hierarchy of configuration entities. Any block highlighted in blue can be overridden in a user-specific way.
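To make the Invoker role concrete, here is a minimal sketch. The interface shape below is a hypothetical stand-in, not Jagger's actual SPI; a real implementation would wrap an RPC call instead of echoing:

```java
// Hypothetical shape of an Invoker: one round trip to the SuT.
// This interface is a sketch, not Jagger's actual SPI.
public class InvokerSketch {
    interface Invoker<Q, R> {
        R invoke(Q query, String endpoint) throws Exception;
    }

    // A trivial echo invoker standing in for a real RPC protocol.
    static class EchoInvoker implements Invoker<String, String> {
        @Override
        public String invoke(String query, String endpoint) {
            return endpoint + " -> " + query;
        }
    }

    public static void main(String[] args) throws Exception {
        Invoker<String, String> invoker = new EchoInvoker();
        System.out.println(invoker.invoke("ping", "http://localhost:8080"));
    }
}
```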

Page 27: Jagger Industrial-Strength Performance Testing

User API :: Protocols and Primitives

A number of invocation protocols and primitives are supported out of the box: REST, SOAP, Hessian, Query Pool, Load Balancer, and Response Validator.

Jagger is bundled with a library of ready-to-use primitives for building workload test scenarios. It includes support for several RPC protocols and typical units of logic, such as a software load balancer or a test scenario with a query pool.

Page 28: Jagger Industrial-Strength Performance Testing

User API :: Validation

Jagger provides the ability to validate responses of the System Under Test in two ways:

An invocation listener can be added to perform custom checks

If the query pool is known before the test, Jagger can automatically collect responses for each query in single-user fashion before the test and then check that the system returns the same responses under workload.
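The record-then-compare idea can be sketched as below. The class and method names are illustrative, not Jagger's actual validator API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the second validation mode: record reference responses for a
// known query pool in single-user mode, then compare responses under
// workload against them. Names are illustrative, not Jagger's API.
public class ReferenceValidator {
    private final Map<String, String> reference = new HashMap<>();

    // Phase 1: collect reference responses before the workload test.
    public void record(Iterable<String> queryPool, Function<String, String> sut) {
        for (String q : queryPool) {
            reference.put(q, sut.apply(q));
        }
    }

    // Phase 2: under workload, check each response against the reference.
    public boolean validate(String query, String response) {
        return response != null && response.equals(reference.get(query));
    }
}
```

This catches exactly the "responses became incorrect under load" failure mode from Typical Story 2.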

Page 29: Jagger Industrial-Strength Performance Testing

Extensibility and Dynamic Languages

Any module can be overridden in XML configuration files and implemented in Java, Groovy or JRuby

Zero-deployment for test scenarios in Groovy: scenario can be modified without Jagger cluster restart

Jagger relies heavily on the Spring Framework for component wiring. All XML descriptors are externalized and editable, so one can override any module. A new implementation can either be written in Java, or Groovy/JRuby sources can be inserted directly into the XML descriptors. Jagger not only allows writing test scenarios in Groovy, but is also able to run such scenarios in a distributed cluster without a cluster restart or redeployment.

Page 30: Jagger Industrial-Strength Performance Testing

Basic Embedded Profiler :: 1

It is not enough to detect insufficient performance or performance degradation. Without insights into the system under test, it is difficult for developers to reproduce a problem and fix it reliably. Developers often try to profile a problematic unit in their local environments, but such results are often distorted, especially for complex server-side applications. In practice this means many fix-test round trips and lost time. Jagger has an embedded profiler that provides insights into the application and hints for performance problem investigation.

Performance issue investigation, a typical flow: Performance Testing → Performance Problem Detected → Fix → Problem Resolved

Page 31: Jagger Industrial-Strength Performance Testing

Basic Embedded Profiler :: 2

With the profiler in the loop: Performance Testing → Performance Problem Detected → Profiling results → Fix → Problem Resolved

Jagger provides a sampling profiler for the JVM that works via JMX and doesn't require a JVM agent

Profiling results are included in the reports. The profiler contains intelligence to detect hot spots.
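The core idea of sampling over JMX can be sketched with the standard ThreadMXBean: periodically dump all thread stacks and count how often each method appears on top. This uses only plain java.lang.management and is not Jagger's actual profiler:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;

// Minimal sampling-profiler sketch over standard JMX beans:
// methods that appear most often on top of a stack are hot-spot candidates.
public class SamplingSketch {
    public static Map<String, Integer> sample(int samples, long intervalMs) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        Map<String, Integer> hits = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                StackTraceElement[] stack = info.getStackTrace();
                if (stack.length == 0) continue;
                String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                hits.merge(top, 1, Integer::sum);
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        sample(5, 10).forEach((m, n) -> System.out.println(n + "  " + m));
    }
}
```

Because ThreadMXBean is exposed over JMX, the same sampling can be done from a remote process, which is why no in-process JVM agent is needed.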


Page 32: Jagger Industrial-Strength Performance Testing

Distributed Platform

Workload generation is distributed. The Master continuously polls Kernel statuses and adjusts the workload if necessary.

Data storage is distributed. This provides both scalability and write performance due to data locality.

Monitoring Agents are supervised by Kernels, so a high number of systems can be monitored without a bottleneck in a single receiver of monitoring information.

Page 33: Jagger Industrial-Strength Performance Testing

Features Summary

Automation:
Fully automated test execution
Automated deployment
Easy configuration via Jenkins plug-in
Maven archetype
Test status decision making
Profiles management

Reporting:
PDF and HTML reports
Customizable report templates
Reports can be generated separately from testing
Web UI for interactive detailed reports
All data are stored in an RDBMS

Analysis and Statistics:
Time plots with statistics
Test sessions comparison
Scalability analysis

Workload and User API:
Distributed workload generation
Customizable virtual users
Workload as a function of time
REST, SOAP, Hessian out of the box
Response validation and other primitives
Open configuration and high extensibility
Java, Groovy, and JRuby support

Monitoring:
Embedded monitoring system
OS-level telemetry
JVM-level telemetry
Cross-platform and firewall-transparent
Monitoring without performance testing
Ability to collect and store high data volumes

Profiling:
Basic sampling profiler with configurable overhead
Automatic hot spot detection

Page 34: Jagger Industrial-Strength Performance Testing

Jagger Roadmap

Page 35: Jagger Industrial-Strength Performance Testing

Jagger Roadmap

Performance Testing Server: remote API
Enhancements of the Jagger SPI
Include more workload primitives into the Jagger standard library
Improve the Web UI for advanced results analysis

Page 36: Jagger Industrial-Strength Performance Testing

Join Jagger

Page 37: Jagger Industrial-Strength Performance Testing

Join Jagger as a:

Architect – review Jagger's design, provide your feedback, devise a new module

Developer – contribute to Jagger, participate in peer-to-peer code review

Client – try Jagger in your project, request a feature, share your experience in performance testing

Page 38: Jagger Industrial-Strength Performance Testing

Contact Us: [email protected]

Distribution and Documentation: https://jagger.griddynamics.net/

