Page 1

Metrics in DevOps


Page 2

Agenda

• What is DevOps exactly?

• What are Metrics and how are they useful?

• My Project Overview

• DevOps Data Collection Points

• The Reports

• Findings

What is DevOps Exactly?

Page 3

DevOps

• DevOps is simply the merging of two processes under one umbrella:
  • Development (typically Agile)
  • Operations (deployment, monitoring, maintenance, etc.)

• The Developers and the Operations teams work together to provide continuous improvement of the product

What are Metrics?

• Simply a way to measure

• They tell us how we’re doing

• Metrics can show progress over time

• They should be used to improve

Page 4

How do we report our metrics?

• Reporting tools

• In-Tool Dashboards

• Spreadsheets

• Small database programs like Access or FoxPro

My Project

• Post-deployment development work in a large-scale ITSM system

• 2500 concurrent users

• 10,000+ total user base

• 120,000+ employees supported

• List of Enhancements that numbered in the hundreds

• 3 week Dev/Test/Deploy cycle

Page 5

Project Team

• Development team

• Deployment and Maintenance team

• Area-specific Business Analysts (who also did testing)

• Project Manager

• Tool Power Users (Did User Acceptance Testing)

How the Project Team Interacted

• Project Manager and Business Analysts would prioritize requirements

• Development team and Business Analysts would collaborate on requirements:
  • Prototype
  • Review and re-work

• Business Analysts would work with Power Users for final testing and acceptance

• Development team handed off code to Administrators to deploy

• Business Analysts would work with users to identify defects or gaps in deployed functionality

Page 6

Release Goals

• Release as much new functionality as possible

• Introduce as few defects as possible

• Deploy with as little downtime as possible

• Have as short a testing cycle as possible, allowing us to release more functionality

• Increase user satisfaction with the ITSM implementation

Tools Used

• Requirements/Defect tracking system
  • Track and prioritize requirements
  • Track and prioritize defects
  • Allowed for full requirement and defect lifecycle tracking

• ITSM Change/Incident
  • Track tool releases
  • Associate Incidents to Change records

• Customer Satisfaction Surveys
  • Used to survey ITSM tool users throughout the process

Page 7

DevOps Data Collection Points

Release Goals

• New Functionality

• Defect introduction

• Downtime

• Short Testing Cycle

• User Satisfaction

Data Point Collection

• Requirements tracking system

• Defect tracking system

• Change and Incident Management

• Defect/Requirement Lifecycle

• User Surveys

Reporting Tools

• We were able to standardize on a single reporting tool

• We had three different applications tracking our data points, with three different back-ends:
  • ITSM tool
  • Requirements/Defect tool
  • Customer Satisfaction Surveys

Page 8

The Reports (and Examples)

Release Report

• Number of requirements and fixed defects released per release cycle

• Grouped by complexity

• Allowed us to see the number of release items along with their complexity

• Point total to easily gauge against other releases (see the sketch after the chart):
  • Easy: 1 pt
  • Medium: 2 pts
  • Difficult: 3 pts

[Chart: September Release, showing counts of Requirements and Defects at each complexity level (Easy, Medium, Difficult)]
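To make the scoring concrete, here is a minimal Python sketch of the point calculation; the item records and their layout are hypothetical, since the handout does not show the tracking tool's export format:

    # Complexity weights from the slide: Easy=1, Medium=2, Difficult=3.
    COMPLEXITY_POINTS = {"Easy": 1, "Medium": 2, "Difficult": 3}

    # Hypothetical release items: (kind, complexity) pairs.
    release_items = [
        ("Requirement", "Easy"),
        ("Requirement", "Medium"),
        ("Requirement", "Difficult"),
        ("Defect", "Easy"),
        ("Defect", "Medium"),
    ]

    def release_points(items):
        """Sum the complexity points across every item in a release."""
        return sum(COMPLEXITY_POINTS[complexity] for _, complexity in items)

    def count_by_complexity(items, kind):
        """Count items of one kind (Requirement or Defect) per complexity level."""
        counts = {level: 0 for level in COMPLEXITY_POINTS}
        for item_kind, complexity in items:
            if item_kind == kind:
                counts[complexity] += 1
        return counts

    print("Total release points:", release_points(release_items))
    print("Requirements:", count_by_complexity(release_items, "Requirement"))
    print("Defects:", count_by_complexity(release_items, "Defect"))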

Page 9

Releases Over Time

• Show the number of released items per release

• Also shows the point total per release

• Allowed us to gauge how effective each release was vs the other releases

• More effective post-mortems of previous releases

[Chart: Releases Over Time, showing Items released and Points for Release 1 through Release 4]

Defect Report

• Recorded the defects reported post-release cycle

• Report run between the go-live dates of releases

• Allowed us to easily see the defects generated by each release

• We actually did most of the post-mortem of defects in the defect tool

• Point total to easily gauge against other releases (see the sketch after the table):
  • Minor: 1 pt
  • Major: 2 pts
  • Significant: 3 pts

Release   Defect                                                     Points
January   UI unresponsive when clicking save                         3
January   Contact and Phone overlapping when contact name is long    1
January   Screen refreshes take a long time during busy periods      2
January   Unable to transfer tickets to Architecture team            2
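As a rough sketch of how this report can be computed, here is a Python version that filters defects to the window between go-live dates and applies the severity weights; the dates and record layout are hypothetical:

    from datetime import date

    # Severity weights from the slide: Minor=1, Major=2, Significant=3.
    SEVERITY_POINTS = {"Minor": 1, "Major": 2, "Significant": 3}

    # Hypothetical defect records: (reported_on, severity, description).
    defects = [
        (date(2018, 1, 8), "Significant", "UI unresponsive when clicking save"),
        (date(2018, 1, 12), "Minor", "Contact and Phone fields overlapping"),
        (date(2018, 1, 20), "Major", "Slow screen refreshes during busy periods"),
        (date(2018, 2, 2), "Major", "Unable to transfer tickets"),
    ]

    def defects_for_release(all_defects, go_live, next_go_live):
        """Keep defects reported between this release's go-live and the next one's."""
        return [d for d in all_defects if go_live <= d[0] < next_go_live]

    january = defects_for_release(defects, date(2018, 1, 5), date(2018, 1, 26))
    print("January defects:", len(january))
    print("January points:", sum(SEVERITY_POINTS[sev] for _, sev, _ in january))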

Page 10

Defects Over Time

[Chart: Defects Over Time, showing Defects and Points for January through April]

• Show the number of defects per release

• Also shows the defect point total per release

• Allowed us to gauge how effective each release was vs the other releases

• Point total gave us a quick view of how impactful the defects were

Change Management

Change   Scheduled Start   Scheduled End   Actual Start   Actual End
C10502   1/5/18 2:00       1/5/18 4:00     1/5/18 2:00    1/5/18 4:10
C10503   1/5/18 3:30       1/5/18 5:00     1/5/18 3:30    1/5/18 4:08
C10504   1/5/18 4:00       1/5/18 5:00     1/5/18 4:00    1/5/18 4:55

• We were actually able to utilize a pretty generic Change Management report

• This report gave us the estimated downtime start/end vs the actual downtime start/end

• This report was used by the Change Management group in post mortem reports

• Helped us understand how and when we needed to plan for downtime, based on how complex the release was (see Release Report)
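A small Python sketch of the scheduled-versus-actual comparison, using the change records from the table above (the timestamp format is an assumption):

    from datetime import datetime

    FMT = "%m/%d/%y %H:%M"  # assumed format for timestamps like "1/5/18 2:00"

    # Change records from the table: scheduled and actual start/end times.
    changes = [
        ("C10502", "1/5/18 2:00", "1/5/18 4:00", "1/5/18 2:00", "1/5/18 4:10"),
        ("C10503", "1/5/18 3:30", "1/5/18 5:00", "1/5/18 3:30", "1/5/18 4:08"),
        ("C10504", "1/5/18 4:00", "1/5/18 5:00", "1/5/18 4:00", "1/5/18 4:55"),
    ]

    for change_id, s_start, s_end, a_start, a_end in changes:
        scheduled = datetime.strptime(s_end, FMT) - datetime.strptime(s_start, FMT)
        actual = datetime.strptime(a_end, FMT) - datetime.strptime(a_start, FMT)
        overrun_min = (actual - scheduled).total_seconds() / 60
        print(f"{change_id}: scheduled {scheduled}, actual {actual}, "
              f"overrun {overrun_min:+.0f} min")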

Page 11

Incident Management

Incident   Change   Incident Description
IM10105    C10502   Users unable to access application
IM10205    C10503   Users reporting slowness when logging into application

• We were actually able to utilize a pretty generic Incident Management report already in use

• This report showed us any incidents reported against our application, specifically ones tied to our change request

• Care was taken to evaluate incidents and determine if they were related to a release and assigned to a change

• Showed us any Incidents, which we defined as outages or severe loss of functionality, as opposed to mere defects
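To illustrate the association step, here is a minimal Python sketch; the change and incident IDs come from the tables above, while the unrelated incident and the record layout are hypothetical:

    # Change records belonging to our releases (IDs from the table above).
    release_changes = {"C10502", "C10503"}

    # Incident records: (incident_id, associated_change_or_None, description).
    incidents = [
        ("IM10105", "C10502", "Users unable to access application"),
        ("IM10205", "C10503", "Users reporting slowness when logging in"),
        ("IM10301", None, "Unrelated printer outage"),  # hypothetical noise
    ]

    # Keep only incidents explicitly tied to one of our change records.
    tied = [i for i in incidents if i[1] in release_changes]
    print("Incidents tied to the release:", len(tied))
    for incident_id, change_id, description in tied:
        print(f"  {incident_id} -> {change_id}: {description}")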

Incidents Over Time

• Show the number of Incidents per release

• Also shows the defect point total per release

• Showed us if there was any correlation between complexity and risk (high incidents)

• Incident total gave us a quick view of how negatively impactful the releases were (basically, any incidents related to the release were a very bad sign)

[Chart: Incidents Over Time, showing Defects, Points, and Incidents for January through April]

Page 12

Testing Report

[Chart: Requirement Kick-Backs, showing the kick-back count for requirements R10541 through R10548]

• Our testing report gave us the number of times a requirement went from ‘testing’ back to ‘development’

• This told us how well our initial development went

• We used this to identify trends

• Is a certain developer getting items kicked back more often?

• Is a certain BA kicking back more items?

• Did a release's complexity determine whether more items got kicked back?
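A minimal Python sketch of the kick-back count, assuming the tool can export each requirement's ordered status history (the histories below are made up):

    # Hypothetical ordered status histories per requirement.
    histories = {
        "R10541": ["development", "testing", "development", "testing", "accepted"],
        "R10542": ["development", "testing", "accepted"],
        "R10543": ["development", "testing", "development", "testing",
                   "development", "testing", "accepted"],
    }

    def kickbacks(states):
        """Count transitions from 'testing' back to 'development'."""
        return sum(1 for prev, cur in zip(states, states[1:])
                   if prev == "testing" and cur == "development")

    for requirement_id, states in sorted(histories.items()):
        print(requirement_id, "kick-backs:", kickbacks(states))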

Testing Over Time

• Show the number of test kickbacks per release

• Also shows the defect point total per release

• Allowed us to see immediately if the complexity of the release led to more testing kickbacks

[Chart: Testing and Kickbacks, showing Defects, Points, and Kickbacks for January through April]

Page 13

User Satisfaction Report

[Chart: User Satisfaction, broken down by Very Satisfied, Satisfied, Unsatisfied, and Very Unsatisfied]

• The Satisfaction report simply reported on the surveys sent to end users

• Was mainly used as an input for requirements, as users could suggest enhancements

• By running the report for the time period between releases, we could hopefully see how generally satisfied users were with the software
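A minimal Python sketch of bucketing survey responses into release windows; the response data and window dates are hypothetical:

    from datetime import date
    from statistics import mean

    # Hypothetical survey responses: (submitted_on, satisfaction score 1-10).
    responses = [
        (date(2018, 1, 10), 7), (date(2018, 1, 20), 8),
        (date(2018, 2, 3), 6), (date(2018, 2, 15), 9),
    ]

    # Release windows: go-live of this release to go-live of the next.
    windows = {
        "January": (date(2018, 1, 5), date(2018, 1, 26)),
        "February": (date(2018, 1, 26), date(2018, 2, 16)),
    }

    for release, (start, end) in windows.items():
        scores = [score for submitted, score in responses if start <= submitted < end]
        if scores:
            print(f"{release}: average {mean(scores):.1f} from {len(scores)} responses")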

Satisfaction Over Time

• We were able to run the Satisfaction Survey for time periods that matched our release periods

• Also shows the defect point total per release

• Should be trending upward as we improve the capabilities of the product

[Chart: Satisfaction Over Time (1-10), showing Defects, Points, and Satisfaction for January through April]

Page 14

Some Findings Along the Way: Testing

• At one point during the project, the Business wanted to free up more Business Analyst time for other duties

• The first thing they did was take away the daily interaction between the BAs and the developers

• We noticed an immediate spike in testing kickbacks

• This also resulted in less impactful releases (lower point totals)

• After two releases, we renewed the daily meetings between the BAs and developers

[Chart: Testing and Kickbacks, showing Items, Points, and Kickbacks for September through December]

Some Findings Along the Way: Incidents

• We found that supposedly 'High Impact' releases actually rarely caused Incidents

• Many times, one or two specific items in a release that were rated very low risk ended up causing issues

• Our Incident report allowed us to go back and look at what actually caused the Incident

• We re-evaluated how we classified enhancements

• “Easy” didn’t always mean “Low Risk”

[Chart: Testing and Kickbacks, showing Items, Points, and Kickbacks for September through December]

Page 15

Some Findings Along the Way: Surveys

• User satisfaction surveys are kind of a crap-shoot

• They never really mirrored any of the other reports

• For example, if we had a 'bad' release with a longer-than-normal outage or other issues, user satisfaction was rarely affected

• People did respond more positively when minor bugs were fixed than when major functionality was added

• Overall, surveys seemed to be a poor barometer of how you are doing with your releases

Some Findings Along the Way: Defects

• Developers really do not like defects being kicked back for requirements they are working on

• Encouraged the developers to work more closely with the Business Analysts to make sure they understood the requirements as the BAs understood them

• Business Analysts were also measured on 'Requirement Defects', which were requirements they worked on with the business that were not fully met

• Encouraged the Business Analysts to work more closely with the process owners to understand their requirements

[Chart: Defects and Total Release Points, showing Defects and Points for September through December]

Page 16

Some Findings Along the Way: Process

• Our process changed multiple times during the engagement

• We were able to use reporting to identify trends in our development cycle

• Encouraged Developers to work more closely with Business Analysts

• Encouraged Business Analysts to work more closely with the Business Owners

• Able to identify deployment items that would be more likely to cause defects or incidents

