www.ranger4.com DevOpstastic
“The number of issues we had from production emergencies that were triggered by an ops change essentially went to zero. Because we were able to roll changes out in an automated fashion, and then test those changes in the various environments, by the time code got to production, it had been through three other environments – dev, integration, customer test – before it got to production.”
Jez Miller, Puppet Labs 2015 State of DevOps Report
Shift Left

Continuous Delivery
– Continuous integration
– Automated testing
– Deployment automation
– Version control

Lean Management
– Limit WIP
– Use visual displays
– Use monitoring tools to make business decisions
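The Continuous Delivery practices above combine into a single promotion flow: build once, then let automated tests gate each environment in turn, as in the quote earlier in the deck. A minimal sketch (environment names and the stub test runner are invented for illustration, not Telrock's or Ranger4's actual tooling):

```python
# Promotion pipeline sketch: one build, promoted through the same sequence
# of environments, gated by automated tests at each step.
# Environment names and the stub test runner are assumptions for illustration.

ENVIRONMENTS = ["dev", "integration", "customer-test", "production"]

def run_automated_tests(build_id, environment):
    """Stand-in for a real suite (unit, acceptance, smoke tests)."""
    print(f"testing {build_id} in {environment}")
    return True  # the sketch assumes a passing suite

def promote(build_id):
    """Deploy the same build to each environment in order; halt on failure."""
    deployed = []
    for env in ENVIRONMENTS:
        if not run_automated_tests(build_id, env):
            print(f"{build_id} failed in {env}; promotion halted")
            break
        deployed.append(env)
    return deployed
```

The point of the pattern is that production is never the first place a change is exercised; every earlier environment is a rehearsal of the same automated process.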
“DevOps is a grassroots thing and so probably a bit difficult to manage from the top down, but we were keen to leave it that way. To stand a chance of success, you have to get your key technologists interested and let them lead – although you also have to be a bit careful that
it doesn’t turn into a sandpit.”
Adrian Le Grand, Service Delivery Director at DigitasLBi
“Organisations, and large enterprises in particular, want to attract and retain good talent but if they’re viewed as fuddy-duddy, they won’t appeal to or keep the cool kids. And there’s real demand for this kind of expertise.”
Jay Lyman, Research Manager at 451 Research
But that’s not to say that everything will be rosy in the garden when adopting a DevOps approach. The key challenge, for many, is change management and simply getting staff to buy into the concept, mainly on the operations side, as this is predominantly a grassroots developers’ movement.
DevOps will shift from being a niche approach to application development and deployment and move into the mainstream over the next 12 months or so, according to Gartner. In fact, so appealing will this grassroots philosophy prove that it will be taken up by a quarter of Global 2000 organisations, creating a software tools market forecast to leap in size by 21.1% from $1.9bn last year to $2.3bn in 2015, the market researcher believes.
Cath Everett, ComputerWeekly.com
Case Study
Telrock designs and develops applications for mobile phones. It offers mobile banking, debt management, mobile utility, and channel control solutions.
The Challenge
Telrock wanted to improve their software delivery processes in order to:
– Reduce delivery cycle time from idea to implementation
– Improve quality and the customer experience
– Drive efficiencies in the software delivery process
WORK OBJECTIVE
Provide an in-depth technical review of Telrock’s software development processes and existing toolchain against industry best practices, and provide observations and recommendations as a roadmap (including the actual technical flows, steps and tools for completing Continuous Delivery) in order to move up the DevOps Maturity Scale and achieve consistency, predictability and reliability in the release process.
DevOps Maturity
5 – Optimising DevOps: DevOps DONE – fine tuning and tied tightly to business goals.
4 – Managed DevOps: Happy people with an integrated toolchain to pre-empt failure and automate test and deployment – Continuous Delivery.
3 – Starting DevOps: Automated build, cross-functional teams, product-focused, cultural change happening.
2 – Fundamental DevOps: Thinking about cultural change, starting to write scripts, looking at test automation.
1 – Not started DevOps: Outages, war-rooms, blame, unplanned work, delays and defects.
Maturity model by practice area: build management and continuous integration; environments and deployment; release management and compliance; testing; data management.

Level 3 – Optimizing: focus on process improvement
– Build management and continuous integration: Teams regularly meet to discuss integration problems and resolve them with automation, faster feedback and better visibility.
– Environments and deployment: All environments managed effectively. Provisioning fully automated. Virtualisation used if applicable.
– Release management and compliance: Operations and delivery teams regularly collaborate to manage risks and reduce cycle time.
– Testing: Production rollbacks rare. Defects found and fixed immediately.
– Data management: Release-to-release feedback loop of database performance and deployment process.

Level 2 – Managed: process measured and controlled
– Build management and continuous integration: Build metrics gathered, made visible and acted on. Builds are not left broken.
– Environments and deployment: Orchestrated deployments managed. Release and rollback processes tested.
– Release management and compliance: Environment and application health monitored and proactively managed.
– Testing: Quality metrics and trends tracked. Operational requirements defined and measured.
– Data management: Database upgrades and rollbacks tested with every deployment. Database performance monitored and optimised.

Level 1 – Consistent: automated processes applied across whole lifecycle
– Build management and continuous integration: Automated build and test cycle every time a change is committed. Dependencies managed. Re-use of scripts and tools.
– Environments and deployment: Fully automated, self-service push-button process for deploying software. Same process to deploy to every environment.
– Release management and compliance: Change management and approvals processes defined and enforced. Regulatory and compliance conditions met.
– Testing: Automated unit and acceptance tests, the latter written with testers. Testing part of development process.
– Data management: Database changes performed automatically as part of deployment process.

Level 0 – Repeatable: process documented and partly automated
– Build management and continuous integration: Regular automated build and testing. Any build can be re-created from source control using automated process.
– Environments and deployment: Automated deployment to some environments. Creation of new environments is cheap. All configuration is externalised / versioned.
– Release management and compliance: Painful and infrequent, but reliable releases. Limited traceability from requirements to release.
– Testing: Automated tests written as part of story development.
– Data management: Changes to databases done with automated scripts versioned with application.

Level -1 – Regressive: process unrepeatable, poorly controlled and reactive
– Build management and continuous integration: Manual processes for building software. No management of artifacts and reports.
– Environments and deployment: Manual process for deploying software. Environment-specific binaries. Environments are provisioned manually.
– Release management and compliance: Infrequent and unreliable releases.
– Testing: Manual testing after development.
– Data management: Data migrations unversioned and performed manually.
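Level 1's "fully automated, self-service push-button process for deploying software... same process to deploy to every environment" can be sketched as one deployment entry point whose only variable is data, not logic. Hosts and settings below are invented assumptions for the sketch:

```python
# "Same process to deploy to every environment": a single deploy function,
# parameterised only by environment name; environment-specific settings are
# externalised data. Hosts and settings below are invented for the sketch.

ENV_CONFIG = {  # in practice this mapping would be versioned, not hard-coded
    "dev":        {"host": "dev.example.internal",  "replicas": 1},
    "production": {"host": "prod.example.internal", "replicas": 3},
}

def deploy(artifact, environment):
    """Deploy one built artifact with the shared process; only data varies."""
    config = ENV_CONFIG[environment]  # unknown environments fail fast (KeyError)
    return {
        "environment": environment,
        "steps": [
            f"copy {artifact} to {config['host']}",
            f"scale service to {config['replicas']} replica(s)",
            "run smoke tests",
        ],
    }
```

Because the same function deploys everywhere, a deployment to dev is also a rehearsal of the production deployment.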
Approach
Day 1:
– Kick off with key project stakeholder – summarise current DevOps toolchain architecture
– Workshop 1: Ideas
– Workshop 2: Build
– Hands-on exploration of existing DevOps toolchain
Day 2:
– Review previous day’s findings with key project stakeholder
– Workshop 3: Test
– Workshop 4: Run
– Wrap up with key project stakeholder
Day 3:
– Write up findings and recommendations
3. Consider Consolidating Onto Confluence and JIRA Service Desk for Customer Interactions
4. Simplify, Make Consistent, and Make the Deployment Process Accessible and Visible Through a Self-Service Release Automation Tool
5. Speed Up Build Time and Feedback by Adopting Build and Deployment Pipeline Patterns
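One widely used pipeline pattern behind this recommendation is splitting the build into a fast commit stage and a slower acceptance stage, failing fast so most breakages surface in minutes rather than hours. A minimal sketch (stage and check names are invented; booleans stand in for real checks):

```python
# Pipeline pattern sketch: a fast commit stage runs first so most failures
# surface in minutes; the slow acceptance stage only runs on a green commit
# stage. Stage and check names are invented; booleans stand in for real checks.

def run_stage(name, checks):
    """Run a stage's checks in order; report the first failure, if any."""
    for check_name, passed in checks:
        if not passed:
            return (name, check_name)
    return None

def run_pipeline(stages):
    """Run stages cheapest-first and stop at the first failing stage."""
    for name, checks in stages:
        failure = run_stage(name, checks)
        if failure:
            return failure
    return ("success", None)

PIPELINE = [
    ("commit",     [("compile", True), ("unit tests", True)]),           # fast
    ("acceptance", [("acceptance tests", True), ("performance", True)]), # slow
]
```

Ordering stages by cost is the feedback speed-up: a broken compile never waits behind an hour of acceptance tests.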
6. Adopt Jenkins Job Builder in Order to Move Jenkins Jobs Into Source-Controlled Configuration and Drive Consistency
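Jenkins Job Builder stores job definitions as YAML in version control and regenerates the Jenkins jobs from them, which is what drives the consistency above. An illustrative fragment (the job name, repository URL, polling schedule and build command are placeholders, not Telrock's configuration):

```yaml
# Illustrative Jenkins Job Builder definition; regenerate with `jenkins-jobs update`.
- job:
    name: example-app-build
    scm:
      - git:
          url: ssh://git@example.com/example-app.git
          branches:
            - master
    triggers:
      - pollscm:
          cron: "H/5 * * * *"
    builders:
      - shell: |
          ./gradlew clean test assemble
```

Because every job is generated from versioned YAML, changes to jobs are reviewed like code and identical jobs stay identical.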
7. Continue to Break Down the Dev/Test Silo Through Closer Collaboration
8. Introduce Much Greater Degrees of Test Automation, Particularly BDD Acceptance Testing (Cucumber), to Drive Collaboration Across Stakeholders
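Cucumber acceptance tests are written in Gherkin, which reads naturally to developers, testers and business stakeholders alike; that shared language is what drives the collaboration. A sketch of what such a feature might look like for a debt-management product (the scenario and figures are invented for illustration):

```gherkin
# Illustrative Gherkin feature; scenario details are invented.
Feature: Customer repayment plan
  Scenario: Customer schedules a repayment
    Given a customer with an outstanding balance of £100
    When they schedule a repayment of £25
    Then the remaining balance is £75
```

Each Given/When/Then step is bound to automated step definitions, so the same text serves as specification, acceptance test and living documentation.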
9. Empower the QA Manager and Better Define the Role and Responsibilities Across QA
10. Move Configuration and Environment Definition Into Source-Controlled Code
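Once environment definitions live in source control, the declared state can be diffed against a live environment to detect drift, which is one concrete payoff of this recommendation. A minimal sketch (settings are illustrative; in practice tools such as Puppet, Chef or Ansible enforce the declared state):

```python
# Drift-detection sketch: compare a version-controlled environment definition
# against a live environment's actual settings. All settings are illustrative.

DECLARED = {"java_version": "8", "heap_mb": 2048, "log_level": "INFO"}

def find_drift(declared, actual):
    """Return settings where a live environment differs from its definition."""
    return {
        key: (declared.get(key), actual.get(key))
        for key in set(declared) | set(actual)
        if declared.get(key) != actual.get(key)
    }

live = {"java_version": "8", "heap_mb": 4096, "log_level": "INFO"}
```

With the definition versioned, every environment change has an author, a review and a rollback point, and drift becomes visible instead of surfacing as a production surprise.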
11. Hire or Appoint a DevOps Engineer or Team Who Enables Other Engineers With a Better Path to Production