
Fundamentals of effective cloud management for the new NASA Astrophysics Data System

Sergi Blanco-Cuaresma¹ and the ADS team
¹ Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA.

Contact: Sergi Blanco-Cuaresma, [email protected], http://www.blancocuaresma.com/s/

The new NASA Astrophysics Data System (ADS) is designed as a service-oriented architecture (SOA) consisting of multiple customized Apache Solr search engine instances plus a collection of microservices, containerized using Docker and deployed in Amazon Web Services (AWS). For a complex system like the ADS, this loosely coupled architecture can lead to a more scalable, reliable and resilient system if some fundamental questions are addressed. After experimenting with different AWS environments and deployment methods, in December 2017 we chose Kubernetes for our container orchestration.
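To make the pattern concrete (a minimal sketch, not the actual ADS code), each microservice in this kind of design is a small, stateless web application that queries a Solr backend and ships as a Docker image; the Flask framework, route and Solr URL below are illustrative assumptions.

```python
# Minimal sketch of a containerized search microservice (illustrative only;
# the service name, route and Solr URL are placeholders, not the ADS code).
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# The Solr endpoint would be injected through the container environment,
# e.g. a Kubernetes Service DNS name (placeholder value below).
SOLR_URL = os.environ.get("SOLR_URL", "http://solr:8983/solr/collection1/select")


@app.route("/search")
def search():
    # Forward the user query to Solr and return a trimmed JSON response.
    params = {"q": request.args.get("q", "*:*"), "wt": "json", "rows": 10}
    resp = requests.get(SOLR_URL, params=params, timeout=10)
    resp.raise_for_status()
    docs = resp.json().get("response", {}).get("docs", [])
    return jsonify({"num_found": len(docs), "docs": docs})


if __name__ == "__main__":
    # Inside the container this would typically run behind a WSGI server.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "5000")))
```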

Defining the best strategy to properly set up Kubernetes has proven challenging: automatic scaling of services and load balancing of traffic can lead to errors whose origin is difficult to identify; monitoring and logging the activity that happens across multiple layers for a single request needs to be carefully addressed; and the best workflow for a Continuous Integration and Delivery (CI/CD) system is not self-evident.

[Architecture diagram: Users reach an AWS Application Load Balancer, which attaches an HTTP trace ID and routes requests through an Nginx ingress into the microservices Kubernetes cluster: a gateway pod (auth tokens, rate limits backed by Redis request counts) in front of pods for authentication, libraries, user preferences, ORCID, metrics and other services, with data stored in RDS PostgreSQL and a Keel pod handling deployments. Search traffic passes through a Classic Load Balancer and a second Nginx ingress (using affinity route cookies) into the Solr Kubernetes cluster, where Monty Solr runs as a stateful set with 500 GB EBS Persistent Volume Claims and serves article metadata and full text. Fluent-bit daemon sets on every node of both clusters collect logs and forward them via FluentD to a Graylog pod (MongoDB for metadata and setup, AWS ElasticSearch for logs, retention policy ~14 days) and to CloudWatch (retention policy: never expire). Developers and DevOps interact with the clusters alongside end users.]
Monitoring

Making sure the whole system is healthy and responding to users' requests is a priority. We developed a custom monitoring tool that emulates users' behavior (e.g., executing searches, accessing libraries, exporting records, filtering results) and alerts us via Slack to unexpected results or errors. This emulation runs every five minutes. Historical data is also accumulated, and daily reports are generated to measure trends and improvements that can be correlated with microservice updates or infrastructure changes.
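As a rough sketch of what one such emulated check could look like (the API endpoint, token handling and Slack webhook below are assumptions, and the real tool exercises many more user actions):

```python
# Sketch of a synthetic "user behavior" check run every five minutes
# (illustrative; the endpoint, token and Slack webhook are placeholders).
import os
import time

import requests

API_SEARCH = "https://api.adsabs.harvard.edu/v1/search/query"  # assumed endpoint
SLACK_WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]                # placeholder
API_TOKEN = os.environ["ADS_API_TOKEN"]                        # placeholder


def check_search():
    """Run a representative query and verify the response looks healthy."""
    start = time.time()
    resp = requests.get(
        API_SEARCH,
        params={"q": "star", "rows": 5},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    elapsed = time.time() - start
    data = resp.json() if resp.ok else {}
    healthy = resp.ok and data.get("response", {}).get("numFound", 0) > 0
    return healthy, elapsed


def alert(message):
    """Post an alert to Slack when a check fails or returns unexpected results."""
    requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=10)


if __name__ == "__main__":
    healthy, seconds = check_search()
    if not healthy:
        alert(f"Search check failed after {seconds:.1f}s")
```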


Logging

Responding to a single user request may involve multiple microservices (e.g., libraries, the Solr search service) and different data requests (e.g., bibcodes in a library, records in Solr). At the very first step, when the user request reaches the AWS Application Load Balancer, a trace identifier is attached to the HTTP request, and we propagate it with every internal request required inside our infrastructure. All the microservices output logs to stdout, including key information such as the trace identifier and the user's account. Logs are captured by Fluent-bit and distributed to Graylog and CloudWatch via FluentD.
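A minimal sketch of this pattern (not the ADS implementation): AWS Application Load Balancers attach an X-Amzn-Trace-Id header, which each service can write into structured JSON log lines on stdout and copy onto its outgoing internal requests. The Flask middleware, field names and downstream URL below are illustrative.

```python
# Sketch of trace-id propagation and structured logging to stdout
# (illustrative; field names and the downstream service URL are placeholders).
import json
import sys
import uuid

import requests
from flask import Flask, g, request

app = Flask(__name__)
TRACE_HEADER = "X-Amzn-Trace-Id"  # attached by the AWS Application Load Balancer


@app.before_request
def capture_trace_id():
    # Keep the trace id the load balancer attached, or create one if missing.
    g.trace_id = request.headers.get(TRACE_HEADER, f"Root={uuid.uuid4()}")


def log(message, **fields):
    # One JSON object per line on stdout, so Fluent-bit can pick it up as-is.
    record = {"message": message, "trace_id": g.trace_id, **fields}
    print(json.dumps(record), file=sys.stdout, flush=True)


@app.route("/libraries/<library_id>")
def get_library(library_id):
    log("resolving library", library=library_id)
    # Propagate the same trace id on every internal request.
    resp = requests.get(
        "http://search-service/query",          # placeholder internal service
        params={"q": f"library:{library_id}"},
        headers={TRACE_HEADER: g.trace_id},
        timeout=10,
    )
    return resp.json()
```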

Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin; Grant, Carolyn S.; Thompson, Donna M.; Chyla, Roman; McDonald, Steven; Shapurian, Golnaz; Hostetler, Timothy W.; Templeton, Matthew R.; Lockhart, Kelly E.; Bukovi, K.

Deploying

The deployment of new microservice releases is automatically managed by Keel. Developers push new commits to GitHub and/or make releases, which triggers unit testing via Travis continuous integration and image building via Docker Hub. When a new image is built, Keel deploys it directly to our development environment (for each pushed commit) or to our quality assurance environment (for each new release). Confirmation to deploy a release in production is provided via Slack, where Keel reports its operations and reacts to developers' approvals.

Future plans

Several services still require manual intervention to deploy new releases: Keel does not cover all of our development cases, and we are working on a new custom tool to meet our needs (after discarding other tools available on the market). We seek to fully automate the deployment process while ensuring traceability and easy roll-backs based on automatic functional tests from our monitoring tool. Additionally, to reduce the required resources and simplify operations, we will evaluate other engines for searching through our logs, such as Kibana via ElasticSearch (provided by AWS).
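A minimal sketch of the deploy-verify-rollback loop this automation is aiming for, using plain kubectl commands (the deployment name, image tag and functional check below are placeholders, not our eventual tool):

```python
# Sketch of an automated deploy-verify-rollback loop (illustrative only;
# deployment name, image tag and the functional check are placeholders).
import subprocess

import requests

DEPLOYMENT = "example-service"               # placeholder deployment name
NEW_IMAGE = "adsabs/example-service:1.2.3"   # placeholder image tag


def kubectl(*args):
    subprocess.run(["kubectl", *args], check=True)


def functional_check():
    """Stand-in for the monitoring tool's functional tests."""
    resp = requests.get("https://ui.adsabs.harvard.edu/", timeout=30)
    return resp.status_code == 200


if __name__ == "__main__":
    # Roll out the new image and wait until the rollout completes.
    kubectl("set", "image", f"deployment/{DEPLOYMENT}", f"{DEPLOYMENT}={NEW_IMAGE}")
    kubectl("rollout", "status", f"deployment/{DEPLOYMENT}", "--timeout=300s")

    # Roll back automatically if the functional tests fail.
    if not functional_check():
        kubectl("rollout", "undo", f"deployment/{DEPLOYMENT}")
```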



Are you a front-end developer? We are hiring!

https://ui.adsabs.harvard.edu/
