
Article by Marlabs Bangalore employee receives international recognition!

Description:
“Testing Experience”, a Germany-based online magazine for software testers and test managers, has published an article by Ramesh Viswanathan, Senior Test Architect, Marlabs Bangalore, in its March 2013 edition. In the article, titled ‘Need for Performance Requirements to Ensure Reliable Business Applications’, Ramesh presents his observations on application development for various industry domains, techniques and tools for non-functional requirements gathering, setting optimal performance targets, and best practices for developing resource-specific outputs. Testing Experience serves as a platform for knowledge transfer in software testing projects, with more than 250,000 downloads per issue in over fifty countries. Marlabs congratulates Ramesh Viswanathan on this accomplishment and wishes him continued success in his career. The full version of Ramesh’s article is available at http://www.testingexperience.com/testingexperience21_03_13.pdf
Transcript

Need for Performance Requirements to Ensure Reliable Business Applications

By Ramesh Viswanathan

Introduction

Applications developed for various domain verticals like BFS, Insurance, Healthcare, e-Learning, etc. handle a huge volume of transactions every day. While there is sufficient focus and due diligence in the industry to define the functional requirements, the absence of proper non-functional requirements can in many instances lead to failure of applications and adverse business impact. Hence, performance requirements gathering plays a very important role in the software development life cycle.

Requirements gathering for performance testing can be for

a. a new business application, or

b. an application already existing in production.

There is always the need to systematically collate requirements with respect to performance testing. These need to be captured at an early stage of the software life cycle and signed off by all the key stakeholders.

At a high level, performance requirements can be categorized as below:

Requirements Categories

Category 1 – Workload

Studying and understanding the specifics that relate to load on the system: the number of transactions that need to be processed simultaneously, and the nature of that processing, such as arrival patterns, navigation trends, and user behavior through to completion.

Aspects related to Category 1 – Workload include:

▪ Online transactions processing

▪ Handling batch jobs

▪ Transactions related to reports

▪ Volume of data in the back end

▪ Different user roles in the application

▪ Other external interfaces that communicate with applications

Collating the list of transactions that are business critical, resource intensive, high frequency, or most commonly used, together with reports in generic/graphic formats and generic/critical batch jobs, is normally easy. Getting detailed information on the rate of transactions per minute or per second, however, is not straightforward and needs some formulas to capture it.

For a system in production, this data can be obtained by analyzing transaction history data in databases, and web and system logs that contain timestamps.

For a new system that is yet to be implemented, this is purely based on business inputs and assumptions.

Data volumes are easier to estimate for an existing system and they can easily be captured by analyzing counts from different tables in the database. Statistics about the number of registered users are usually available, but estimating the number of concurrent users is never straightforward.
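As a minimal sketch of how such rates might be derived for an existing system, the snippet below buckets exported transaction timestamps into one-minute intervals and reports the average and peak rates; the timestamp list and its format are illustrative assumptions, not something prescribed by the article.

```python
from collections import Counter
from datetime import datetime

# Illustrative input: timestamps exported from a transaction-history table
# or a timestamped web/system log. The format and values are assumptions.
timestamps = [
    "2013-03-01 09:00:05", "2013-03-01 09:00:17", "2013-03-01 09:00:41",
    "2013-03-01 09:01:02", "2013-03-01 09:01:03", "2013-03-01 09:02:30",
]

# Bucket transactions into one-minute intervals.
per_minute = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime("%Y-%m-%d %H:%M")
    for ts in timestamps
)

peak_minute, peak_count = per_minute.most_common(1)[0]
print(f"Average rate: {len(timestamps) / len(per_minute):.1f} transactions/minute")
print(f"Peak minute : {peak_minute} with {peak_count} transactions "
      f"(~{peak_count / 60:.2f} transactions/second)")
```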

Category 2 – Performance Targets

Gathering performance targets such as response time, throughput, and resource utilization.

For performance targets, the related items below cover the majority of needs:

▪ Online interaction response time:

▪ Based on the category and type of interaction; for example, a simple click such as viewing a page and pressing the Next button may have a smaller target than a transaction submit operation that requires entering mandatory fields and clicking a Submit button.

▪ Network bandwidth considerations: as an example, for an intranet-based application like an HR portal the response-time target may be 8 seconds, while Internet-facing applications accessed by end users may have a target of 15 seconds, depending on the type of operation performed.

▪ Sub-transactions that form the complete transaction and the time for overall processing.

▪ Handling and completing online transaction activity: for example, if a user needs to complete a transaction in 3 minutes, the transaction has 5 web interactions to complete, and each interaction requires 20 seconds for data entry etc. to be completed successfully, then the calculated average response time for a single interaction should not exceed (3*60−5*20)/5 seconds = 16 seconds.

▪ Delivery time for asynchronous transactions: Some factors to be considered when setting completion time targets are:

▪ The type of transaction.

▪ The network bandwidth.

▪ The number of interactions within the application/system architecture before reaching the final destination.

▪ Transaction throughput: throughput refers to transactions processed per unit time, as well as the throughput of different types of reports and their completion times; reports need to be classified into scheduled and ad hoc reports.


▪ Batch completion time: the completion time needs to be specified based on the type of batch processes/programs that are running, including any backup operations.

▪ Understanding system resource consumption under performance targets also plays an important role; a few key resources are mentioned below:

▪ CPU utilization: This is the percentage of time the CPUs of the system are busy. It is desirable not to have CPU usage of more than 70%.

▪ Memory consumption: This is the number of MB or GB of the system’s RAM consumed.

▪ Disk utilization: Disk utilization and I/Os per second of the disk subsystem are often measured to plan for capacity more holistically.

▪ Network bandwidth: The metric can be either in Kbps or Mbps. It is always good practice to have a target set for network consumption overall as well as per user. An example could be 15 Kbps per user and 1 Mbps overall from an identified critical branch to a centralized server. Network bandwidth targets depend on the available bandwidth and on usage by users in various roles and geographic locations.

▪ In addition to resource consumption counters such as CPU, memory, disk, and network, application-specific counters need to be defined and added as part of monitoring, e.g. for an ASP.NET-based application counters like Requests/sec, Requests Executing, Transactions Total, and Transactions/sec, and for SQL Server counters like Logins/sec, Logouts/sec, UserConnections, and Buffer Cache Hit Ratio.
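One possible way to capture the OS-level counters listed above is to poll them periodically; the sketch below does this with the third-party psutil library (assumed to be installed) and flags samples that exceed the 70% CPU guideline. Application-specific counters such as the ASP.NET and SQL Server ones mentioned would instead come from platform tooling (e.g. Windows performance counters) and are not shown here.

```python
import psutil  # third-party library, assumed to be installed

def sample_resources(interval_s=5, samples=3):
    """Poll basic OS-level resource counters a few times and print them."""
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval_s)  # % CPU averaged over the interval
        mem = psutil.virtual_memory()                  # RAM usage
        disk = psutil.disk_io_counters()               # cumulative disk I/O since boot
        net = psutil.net_io_counters()                 # cumulative network I/O since boot
        print(f"CPU {cpu:5.1f}% | RAM {mem.used / 2**30:.2f} GiB ({mem.percent}%) | "
              f"disk reads {disk.read_count} | net sent {net.bytes_sent // 1024} KiB")
        if cpu > 70:
            print("  -> CPU above the 70% utilization guideline")

if __name__ == "__main__":
    sample_resources()
```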

Techniques and Tools for Non-Functional Requirements Gathering

It is always good practice to use a proper means of capturing the NFR details, including but not limited to the following:

▪ Non-functional requirements questionnaire: This is a list of items that can assist in forming and understanding the requirements better to ensure a quality output. Questionnaire collation for different sections is provided below:

▪ Business test cases requirements

▪ Performance testing requirements

▪ Monitoring requirements

▪ POC requirements

▪ External application/server requirements

▪ Workload modeling and WLM tools:

▪ Identifying objectives and related sub-categories

▪ Collating scenarios that are critical and closely related to the business

▪ Determining the associated navigation paths for the critical and key scenarios

▪ Isolating the unique data for the associated navigation paths as well as for the simulated users

▪ Finding the distribution of scenarios, where each scenario is a business test case

▪ Categorizing the load levels for different scenarios from the identified target user load

▪ Formulating the engineering approach to instrument the model

A common approach to workload modeling is to analyze the web-server logs. There are a few commercial and open-source tools to extract and report the details as needed.
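For instance, the request mix per page can be approximated directly from a web-server access log. The sketch below assumes a hypothetical file in Common Log Format and keeps only successful requests; the file name and regular expression are illustrative assumptions.

```python
import re
from collections import Counter

LOG_FILE = "access.log"  # hypothetical path to a web-server log in Common Log Format
REQUEST = re.compile(r'"(?P<method>GET|POST) (?P<path>\S+)[^"]*" (?P<status>\d{3})')

hits = Counter()
with open(LOG_FILE) as f:
    for line in f:
        m = REQUEST.search(line)
        if m and m.group("status").startswith("2"):  # keep successful requests only
            hits[m.group("path")] += 1

total = sum(hits.values()) or 1
print("Share of successful requests per path (top 10):")
for path, count in hits.most_common(10):
    print(f"  {path:40s} {100 * count / total:5.1f}%")
```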

▪ Understanding future growth: this forms the capacity planning approach for handling future loads as the application matures and more features are implemented to keep it competitive in the long term. Some of the high-level details are provided below:

1. Determine service level requirements: The first step in the capacity planning process is to categorize the work done by systems and quantify users’ expectations of how that work will get done. This is done by defining workloads, determining the unit of work, and identifying service levels for each workload.

2. Analyze the current capacity: The existing capacity of the system must be assessed and analyzed to determine how the needs of the users will be met. This is done by measuring the service levels and comparing them to objectives, gauging overall resource usage, and computing the resource usage by workload.

3. Plan for the future: This involves using forecasting methodologies for future business activity, thereby assessing and determining future system requirements. After assessment, the immediately required changes are incorporated into the system configuration. This ensures that sufficient capacity is available to maintain the designed service levels.
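As a simple illustration of step 3, the sketch below projects the peak load forward using an assumed yearly growth rate and compares the roughly scaled CPU utilization against the 70% ceiling mentioned earlier; every figure is a placeholder, and real capacity planning would rely on measured scalability rather than linear scaling.

```python
# Placeholder figures; all values are illustrative assumptions.
current_peak_tps = 120      # transactions per second at today's peak
cpu_at_peak_pct = 45        # measured CPU utilization at that peak load
yearly_growth = 0.25        # assumed 25% growth in business volume per year
horizon_years = 2           # planning horizon
cpu_ceiling_pct = 70        # utilization guideline from the resource targets above

projected_tps = current_peak_tps * (1 + yearly_growth) ** horizon_years
# Rough linear scaling of CPU with load; real systems need measured scalability curves.
projected_cpu_pct = cpu_at_peak_pct * projected_tps / current_peak_tps

print(f"Projected peak load in {horizon_years} years: {projected_tps:.0f} tps")
print(f"Projected CPU utilization at that load: {projected_cpu_pct:.0f}% "
      f"(ceiling {cpu_ceiling_pct}%)")
if projected_cpu_pct > cpu_ceiling_pct:
    print("Additional capacity or tuning is needed before the projected load arrives.")
```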

Best Practices

The best practices mentioned below will assist in ensuring that the requirements are accurate and testing is aligned with them.


1. Engage early with stakeholders and collate details such as transactions performed for a specified period of time, time for each transaction, duration of peak activity, and usage.

2. Process the collated details, segregating them and avoiding duplication, e.g. ensuring requests are from unique IP addresses and filtering out all error-related pages, to end up with a workload model (see the sketch after this list).

3. Understand the business needs, e.g. peak usage numbers, transactions per second, etc., and map the collated details to them.

4. Organizations need a high-quality technical engineering team that will assist in verifying and validating the requirements.

5. Develop a testing strategy and select an appropriate testing tool based on the platform and technology identified.

6. Deriving an accurate workload model, with an appropriate test strategy and supporting test tools, ensures that the performance testing outcome meets expectations and is aligned with business objectives.
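A sketch of practice 2 under the same hypothetical Common Log Format assumption used earlier: error responses are filtered out and distinct client IP addresses are counted before the remaining requests feed a workload model.

```python
import re

LOG_FILE = "access.log"  # hypothetical log file, as in the earlier sketch
LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" (?P<status>\d{3})')

unique_ips = set()
kept = dropped = 0
with open(LOG_FILE) as f:
    for line in f:
        m = LINE.match(line)
        if not m:
            continue                                   # skip unparseable lines
        if m.group("status").startswith(("4", "5")):   # filter error-related pages
            dropped += 1
            continue
        unique_ips.add(m.group("ip"))
        kept += 1

print(f"Kept {kept} successful requests from {len(unique_ips)} unique IP addresses "
      f"({dropped} error responses filtered out)")
```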

Summary

This article highlights the importance of defining and collecting performance requirements, which are usually given secondary importance compared to functional requirements.

The aspects and parameters of performance requirements are illustrated with examples to provide a practical thought process.

By using tools, techniques, and an engineering approach, non-functional requirements (NFR) gathering can be made more scientific.

Performance can be built into the system and validated at every stage through the appropriate focus on business, technology, tools, and the software testing processes, as demonstrated in the “Best Practices” section. ◼

About the author

Ramesh Viswanathan holds a Master of Engineering in Communications Systems and a Master of Business Administration in Operations Management, and is both Siebel and ISTQB certified. He currently works as a Senior Performance Test Architect at Marlabs Inc, USA. He has been in the software testing industry since 1999, working in the past with organizations such as ReadyTestGo, Cognizant Technology Solutions, Symphony Software Services, ANZ Bank Information and Technology, and SunGard Global Solutions.


