Agenda
1. Messaging-Backbone: Migrating towards microservices
2. System-Asset-Scanner: Developing a microservice-oriented architecture from scratch
3. Lessons learned
Migration of a messaging backbone towards microservice-oriented technologies
■ Messaging backend with groupware functions:
■ Messaging
■ Contacts
■ Calendar
■ File Store
■ etc.
■ System runs in a cluster of 12 servers
■ Codebase is about 75k LoC (Java)
■ System handles more than 4k requests per second (that is >10 billion per month)!
The technology stack before migration
■ Java
■ Spring Dependency Injection
■ OSGi for dynamic loading of modules
■ Camel for message routing
■ Karaf as runtime container / server
Problems with this architecture:
■ Domains could not be developed independently
■ Camel was not really used
■ Dynamic swapping of modules with OSGi was not used
■ OSGi + Camel added a significant amount of technical overhead
■ New features required testing of all domains, even if only the functionality of one domain was changed.
The new technology stack used
■ Java with Spring Boot
■ Bootstrap framework
■ Allows fast setup of a microservice
■ Easy to integrate common functionality like metrics, logging, etc.
■ Spring MVC to implement REST services
■ Services are defined by annotations
■ Easy to integrate with Spring Boot
■ API documentation: Swagger
■ Generates HTML documentation from Spring MVC annotations
■ HTML documentation also provides test calls to REST services
■ Build framework: Gradle
■ Includes dependency management
■ Maven archetypes are used for quick setup of a new microservice
■ Execution of services: Supervisor
Design for reliability is important for a high-load system
■ Circuit breaker (e.g. Netflix Hystrix)
■ Backend integration
■ Service-to-service communication
■ API management
■ Authentication & authorization with AppId/AppSecrets
■ Rate limiting / throttling
■ Monitoring/restarting of processes
■ Supervisor
■ Preserving evidence (logs, traces, dumps) for later diagnosis is crucial
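A circuit breaker like Hystrix wraps each backend call and, after repeated failures, short-circuits further calls to a fallback so a struggling backend is not hammered while it recovers. A minimal hand-rolled sketch of the pattern (hypothetical names, not the Hystrix API):

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch (hypothetical names, NOT the Hystrix API).
// After `threshold` consecutive failures the circuit opens and calls are
// short-circuited to the fallback until `retryAfterMillis` have passed.
public class CircuitBreaker<T> {
    private final int threshold;
    private final long retryAfterMillis;
    private int consecutiveFailures = 0;
    private long openedAt = -1;

    public CircuitBreaker(int threshold, long retryAfterMillis) {
        this.threshold = threshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    public synchronized T call(Supplier<T> backend, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get(); // short-circuit: do not touch the backend
        }
        try {
            T result = backend.get();
            consecutiveFailures = 0; // success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= threshold) {
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }

    public synchronized boolean isOpen() {
        if (consecutiveFailures < threshold) return false;
        if (System.currentTimeMillis() - openedAt > retryAfterMillis) {
            consecutiveFailures = threshold - 1; // half-open: allow one probe
            return false;
        }
        return true;
    }
}
```

Hystrix adds a lot on top of this (thread-pool isolation, metrics, configuration), which is exactly why using a proven library is preferable in production.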
Design for diagnosability: The "magic" diagnosis triangle answers the challenges in the diagnosis of distributed systems.
[Figure: the "magic" diagnosis triangle — Metrics (Prometheus), Traces, and Logs, surfaced e.g. in the Spring Boot Admin UI]
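One way the three corners of the triangle come together in practice is a correlation/trace ID that is attached to every request and printed in every log line, so metrics, traces, and logs of one request can be joined across services. A minimal sketch (hypothetical names; real systems use Zipkin and its instrumentation, which also propagates the ID across HTTP calls):

```java
import java.util.UUID;

// Minimal sketch of trace-ID propagation (hypothetical names; in practice
// Zipkin instrumentation manages and propagates the ID between services).
public class TraceContext {
    private static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    // Start a new trace at the system boundary, or adopt an incoming ID
    // so the trace continues across service hops.
    public static String begin(String incomingId) {
        String id = (incomingId != null) ? incomingId : UUID.randomUUID().toString();
        TRACE_ID.set(id);
        return id;
    }

    public static String current() {
        return TRACE_ID.get();
    }

    // Prefix every log line with the trace ID so log aggregation can
    // join lines from different services into one request trace.
    public static String log(String message) {
        return "[trace=" + current() + "] " + message;
    }

    public static void end() {
        TRACE_ID.remove();
    }
}
```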
Summary: The system is quite mature in this state with few open issues
■ Migration took over a year
■ The new architecture was deployed in production a year ago
■ Main effort drivers were:
■ Framework evaluation
■ Proof-of-concept building
■ Coordination with operations
■ Solving technical details
■ Current task: improve monitoring and metrics
■ Traces: Zipkin
■ Metrics: Prometheus
■ The system is stable and the architecture is sustainable
Agenda
1. Messaging-Backbone: Migrating towards microservices
2. System-Asset-Scanner: Developing a microservice-oriented architecture from scratch
3. Lessons learned
A system from scratch: System-Asset-Scanner (SAS) collects reports from datacenter servers
■ Core idea
■ Servers send collected data to SAS
■ Data is extracted and transformed into reports
■ Extraction can be quite complex, e.g. looking up external databases, using external services, etc.
■ Reports and assets are stored in different databases
■ Separation of services is part of the security concept
■ Flexibility is also a key feature
■ Planned to run in different environments
■ Custom data extractors are used in various environments
■ Only a fraction of all features is used in each environment
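The extract-and-transform step described above can be sketched as a chain of pluggable extractors, which also shows how environment-specific custom extractors fit in. All names here are hypothetical illustration, not the actual SAS code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the extract/transform step: each extractor adds
// fields to the report; each environment can plug in its own extractors.
public class ReportPipeline {
    public interface Extractor {
        void extract(String rawData, Map<String, String> report);
    }

    private final List<Extractor> extractors;

    public ReportPipeline(List<Extractor> extractors) {
        this.extractors = extractors;
    }

    public Map<String, String> process(String rawData) {
        Map<String, String> report = new HashMap<>();
        for (Extractor e : extractors) {
            e.extract(rawData, report); // may call external services/databases
        }
        return report;
    }
}
```

Configuring a different extractor list per environment is what makes only a fraction of all features active in each deployment.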
The technology stack relies heavily on the Spring Cloud stack
■ Java with Spring Boot
■ Bootstrap framework
■ Allows fast setup of a microservice
■ Easy to integrate common functionality like metrics, logging, etc.
■ Spring MVC to implement REST services
■ Services are defined by annotations
■ Easy to integrate with Spring Boot
■ API documentation: Swagger
■ Generates HTML documentation from Spring MVC annotations
■ HTML documentation also provides test calls to REST services
■ Backend client: Netflix Feign
■ REST client
■ Client is also created from Spring MVC annotations
■ Build framework: Maven
■ Includes dependency management
■ Maven archetypes are used for quick setup of a new microservice
■ Docker for test environments
■ Using Docker in production is a long-term goal
■ CI build with Jenkins
■ The Go language is used by a 3rd party to implement some data extractors
[Figure: flow control illustrated as a dam. The inflow of data and requests is usually not constant (low tide, high tide), and unexpected variation may occur (flood, drought). Processing has a maximum rate that can be handled without problems, and a rate at which the system is damaged. In the picture: (1) the system reports how much flow is currently possible at maximum, (2) a valve is adjusted so that the actual flow rate is not exceeded, and (3) excess inflow is dammed up in a big dam lake.]
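In code, the dam analogy maps naturally onto a bounded buffer: the queue is the lake, and a rejecting offer() is the valve that keeps the processing side below its safe maximum rate. A minimal sketch with the JDK's ArrayBlockingQueue (hypothetical names, not the actual SAS implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the "dam" from the figure: a bounded queue buffers bursts,
// and offer() acts as the valve by rejecting work once the lake is full.
public class InputDam<T> {
    private final BlockingQueue<T> lake;

    public InputDam(int capacity) {
        this.lake = new ArrayBlockingQueue<>(capacity);
    }

    // Inflow: accept the item if there is room, otherwise shed it
    // (a real system might persist it or throttle the sender instead).
    public boolean inflow(T item) {
        return lake.offer(item);
    }

    // Outflow: the processing side drains at its own safe maximum rate.
    public T outflow() {
        return lake.poll(); // null when the lake is empty
    }

    public int level() {
        return lake.size();
    }
}
```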
Status: The next major step is to integrate more cloud features to simplify operation
■ Currently we have 16 different microservices
■ Codebase size is about 36k LoC
■ The system went into production a year ago; no severe problems yet
■ Development of new features is still continuing
■ The architecture can still be improved in several aspects:
■ Improve resilience of the architecture (e.g. by adding service discovery, cloud config, circuit breakers…)
■ At the beginning of development we decided to use a single codebase to speed up development; next, decouple versioning/codebase of services to deploy single services independently
■ Improve metrics and monitoring
Agenda
1. Messaging-Backbone: Migrating towards microservices
2. System-Asset-Scanner: Developing a microservice-oriented architecture from scratch
3. Lessons learned
The Spring Cloud framework is a stable platform for projects of this size
■ Spring Cloud provides an opinionated framework for microservice and cloud features
■ When using the Spring Cloud components, you automatically reach a high level in the Cloud Native Maturity Model
■ Almost all features are optional, but easy to use
■ Quality is production-ready
■ API documentation is generated by Swagger from source code
Source: pivotal.io
Module structure of a service: We always create a client module with the API
package sas.service.a.api;

public interface ServiceAPI {
    @RequestMapping(value = "service/{id}",
            produces = MediaType.APPLICATION_JSON_VALUE,
            method = RequestMethod.GET)
    ResultDTO restServiceMethod(@PathVariable("id") String id);
}

package sas.service.a.app;

@RestController
public class ServiceController implements ServiceAPI {
    @Override
    public ResultDTO restServiceMethod(@PathVariable("id") String id) {
        // implement service here…
    }
}

package sas.service.a.client;

@FeignClient(url = "${services.serviceurl}")
public interface ServiceClient extends ServiceAPI {
    // no implementation is needed, as Netflix Feign takes care of that
}

Runnable code and configuration can be created by a Maven archetype.
The Job-DSL plugin is trivial, yet the advantages are significant
■ The plugin generates the "config.xml" of the Jenkins jobs from the Groovy scripts
■ Best practice: Use a simple "seed" job to configure all other jobs with the Job-DSL plugin
■ The description of the CI builds is stored in the SCM (alongside the description of the build, e.g. the Maven POM)
■ Restoring or cloning CI jobs is a matter of seconds
■ Build configurations are versioned in the SCM
CI-as-Code with the Jenkins Job-DSL plugin

job('SAS/SAS-INPUT-QUEUE-BUILD') {
    // additional description of the job
    description('SAS Input Queue Maven build')

    // configure jdk
    jdk('jdk-1.8-docker-node')

    // git configuration and trigger
    scm {
        git {
            branch('origin/master')
            remote {
                url('https://www.qaware.de/git/SAS')
                credentials('xxx')
            }
            configure { scm ->
                // configure "git" (not "jgit") and fisheye repository browser
                scm / gitTool << 'Git'
                scm / browser(class: 'hudson.plugins.git.browser.FisheyeGitRepositoryBrowser') {
                    url('https://www.qaware.de/fisheye/changelog/SAS')
                }
                // only include current folder
                scm / 'extensions' / 'hudson.plugins.git.extensions.impl.PathRestriction' {
                    'includedRegions'('code/input-queue/.*')
                }
            }
        }
    }
    triggers {
        scm('H/15 * * * *') // every fifteen minutes (e.g. at :07, :22, :37, :52)
    }

    // configure docker container to execute maven build
    wrappers {
        buildInDocker {
            dockerHostURI('tcp://nio-build-1.intern.qaware.de:4243')
            image('10.81.16.196/sas/buildnode')
            startCommand('/bin/cat')
        }
    }
    configure { node ->
        // configure the network bridge to 'host'
        node / buildWrappers
             / 'com.cloudbees.jenkins.plugins.okidocki.DockerBuildWrapper' / net << 'host'
    }

    steps {
        // build dependencies
        maven {
            rootPOM('code/commons/pom.xml')
            goals('clean install -Dmaven.test.failure.ignore=true')
        }
        // build input-queue
        maven {
            rootPOM('code/input-queue/pom.xml')
            goals('clean install -Dmaven.test.failure.ignore=true')
        }
    }

    // post build publishers
    publishers {
        archiveJunit('**/target/surefire-reports/*.xml')
    }
}

If we were to start today, we would use the Jenkins pipeline DSL.
Containerize your CI pipeline: More flexibility and throughput of the CI process

[Figure: containerized pipeline stages, each driven by Docker files — Provisioning of Docker Jenkins nodes → Compile, Test & Package → Create App Packages → Provisioning of Docker App Images → Run Integration Tests → Deploy & Run Staging Environment]
A test pyramid with tests of various granularity ensures code quality and integration

■ Unit tests: The classic unit tests (JUnit, Mockito)
■ Service tests: Test the REST controllers and clients of services (JUnit, Spring MVC Tests, Wiremock)
■ Integration tests: Test the interaction of multiple deployed containers (JUnit, Spring MVC Tests)
■ Performance tests with Gatling
■ UI tests: Test basic UI functionality against a deployed system (Protractor)

Run all these tests continuously in your build pipeline and check the results (test errors, test coverage, run times, resource consumption, etc.)

[Figure: test pyramid — UI tests on top, then integration tests, service tests, and unit tests at the base]
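The service-test level of the pyramid can be illustrated without any framework: stub the REST backend (Wiremock-style), call it with an HTTP client, and assert on the response. A dependency-free sketch using the JDK's built-in HttpServer (hypothetical names; the real tests use Spring MVC Test and Wiremock):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Dependency-free sketch of a service test: stub the REST backend
// (Wiremock-style) with the JDK's built-in HttpServer, then assert on
// what a client sees.
public class ServiceTestSketch {
    public static String callStubbedService() {
        try {
            // stub: serve a canned JSON response on an ephemeral port
            HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
            stub.createContext("/service/42", exchange -> {
                byte[] body = "{\"id\":\"42\"}".getBytes();
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            stub.start();
            try {
                // exercise the "client" side against the stub
                int port = stub.getAddress().getPort();
                HttpResponse<String> response = HttpClient.newHttpClient().send(
                        HttpRequest.newBuilder(
                                URI.create("http://localhost:" + port + "/service/42")).build(),
                        HttpResponse.BodyHandlers.ofString());
                return response.body();
            } finally {
                stub.stop(0);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```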
In both projects the key was to simplify and automate development, testing, building and operating the system
■ Spring Boot is a solid technology
■ Archetypes can be used to bootstrap a new microservice
■ Diagnosability is much more important than in traditional systems
■ Protect services with intelligent handling of excessive loads
■ The Job-DSL plugin automates maintaining the build pipeline
■ Use a test pyramid to test different layers and stages in the build and deployment process