
Zulip Documentation
Release 1.6.0

The Zulip Team

Jun 09, 2018

Welcome to the ODP team

1 Columbia Global Retail Consumer Experience

2 Architecture Roadmap

3 Cloud Principles

4 CI/CD Pipeline requirements

5 Vision

6 ODP Demo #1

7 ODP Demo #2

8 This is a test of the table rendering

9 Implementation Strategy

10 ODP Frameworks

11 Dynamics 365 Ecosystem and Automation

12 3rd Party Boundary Integrations (Quick Wins)

13 Functional APIs (Integration Frameworks)

14 HR Systems

15 Creative Solutions

16 On Premises Integrations

17 Payment Solutions

18 Store Solutions


Contents:

• Welcome to the ODP team

• Open Data Platform Demos

• Implementation Strategy


CHAPTER 1

Columbia Global Retail Consumer Experience


CHAPTER 2

Architecture Roadmap

The consumer journey for Retail is ever-changing and requires constant adjustments and innovation. This roadmap discusses the architecture around the core components of the Global Retail Platform and the retail experience.

2.1 Introduction

The Global Retail Platform is a new solution designed to make Columbia Sportswear more agile and adaptive to changing consumer trends. This journey transforms Columbia into a digital data service provider. Throughout this journey, various architectures, frameworks and standards are presented and detail the impact to the current and future-state environment. To that end, Enterprise Architecture presents you with the vision, strategy and roadmap for the Global Retail Platform.

After the introduction, we detail how the Columbia or Integration teams can interact with the Global Retail Platform. After the architecture breakdown, you will find in each section the following structure that is centered around interacting with the consumers.

• Journey Scenario

• Themes

• Case Study in Retail including Use Cases

• Reference Architecture and Data

• Frameworks and Patterns

• Guidelines

• Analytics

• Monitoring and Operations


2.2 Journey Scenarios

The steps come from a common digital Omni-commerce retail process. The GRP follows common retail frameworks and is designed to use a common process framework. Where possible, this framework uses the BPM notation for simplicity and commonality. While GIS is not necessarily interested in the Level 1 and 2 processes, it is greatly interested in the Level 3 and 4 processes. This is because as the retail leaders draft their proposals and initiatives, GIS uses the Level 3 and 4 models to understand the integration points between the business and services, solutions or infrastructure.

Global Retail Platform Capabilities

2.3 Themes

Themes are high-level concepts that are being implemented and have a direct impact on users. Of key note is the ability to use technology and architecture to implement the processes. The reference architecture is built based on the themes and encompasses the use cases and technology.

2.4 Case Study in Retail

The case studies provide various use cases related to that particular step in the journey. These use cases center on the needs of the consumer and the data and integrations needed to support that particular step in the journey. These use cases have clear deliverables and can be expanded upon.

2.5 Reference Architecture and Data

The Reference Architecture presents several views of the GRP architecture and how it relates to data and API strategy. This strategy implements new technology and eliminates technical debt.

As you study the reference architecture, it becomes apparent that the key concern of this architecture is abstraction. Abstraction allows the systems being implemented to remain the owners of their own domain and enables the business to swap solutions or implement new solutions when the need arises.
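To make the abstraction point concrete, here is a minimal Python sketch (purely illustrative; the class and field names are hypothetical, not part of the GRP codebase). Consumers depend only on an interface, so the system behind it can be swapped without touching them.

```python
# Hypothetical sketch of the abstraction principle: consumers code against an
# interface, not a specific vendor system, so solutions can be swapped later.
from abc import ABC, abstractmethod


class ProductRepository(ABC):
    """Contract that any product data source must fulfil."""

    @abstractmethod
    def get_product(self, sku: str) -> dict:
        ...


class Dynamics365ProductRepository(ProductRepository):
    """One possible implementation; replaceable without changing consumers."""

    def get_product(self, sku: str) -> dict:
        # A real implementation would call the owning system's API here.
        return {"sku": sku, "source": "dynamics365"}


def render_catalog_entry(repo: ProductRepository, sku: str) -> str:
    # The consumer only knows the abstraction, so the backing system can change.
    product = repo.get_product(sku)
    return f"{product['sku']} (from {product['source']})"


print(render_catalog_entry(Dynamics365ProductRepository(), "1546551"))
```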

You will also note the combination of batch and real-time processing into a new paradigm called “as-data-happens”. This paradigm accommodates legacy batch and delayed data processing while implementing newer technologies and architecture patterns to bring real-time insights and predictive analytics to help in the “human” decision making process. Coupled with the “as-data-happens” paradigm are new integration architectures that save resources and provide greater insight to the data.
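As a rough illustration of the “as-data-happens” idea (all names below are invented for the sketch), a legacy batch extract and a real-time event can share one processing path, so the same logic runs regardless of how the data arrives.

```python
# Illustrative only: batch rows and streaming events funneled through a single
# processing function, the core of the "as-data-happens" paradigm.
from typing import Iterable


def process_record(record: dict) -> None:
    # Single place where enrichment, validation and publishing would happen.
    print(f"processed {record['sku']} (arrived via {record['arrival']})")


def ingest_batch(rows: Iterable[dict]) -> None:
    # Legacy nightly extract: each row is replayed as if it just happened.
    for row in rows:
        process_record({**row, "arrival": "batch"})


def ingest_event(event: dict) -> None:
    # Real-time message (e.g. from a service bus topic) uses the same path.
    process_record({**event, "arrival": "stream"})


ingest_batch([{"sku": "A1"}, {"sku": "B2"}])
ingest_event({"sku": "C3"})
```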

[Figures: Digital Service Fabric; Open Data Platform]


CHAPTER 3

Cloud Principles

3.1 Vision

• First, the principle of disposability. When we work with cloud infrastructure, we’re necessarily building distributed systems. This, in turn, requires a critical shift in both our mindset and our architectural and risk management principles. In internet-connected distributed systems, we must accept that failures and security breaches are inevitable. Thus we change the focus of our work from trying to prevent outages or attacks to developing the capability to detect them and restore service rapidly. To prove we can withstand failure, we continually inject it into our systems, proactively attacking our infrastructure, a key principle behind cloud disaster recovery testing exercises and Netflix’s Chaos Monkey, known as chaos engineering.

• Systems built on cloud infrastructure must assume the infrastructure is unreliable. Thus, to meet our goal of being able to restore service rapidly in the event of failure, we should treat it as disposable, ensuring we can rebuild it from scratch automatically using only configuration and scripts held in version control.
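A minimal sketch of what “disposable” can mean in practice, assuming the Azure CLI (az) is installed and authenticated; the resource group, location and template path are placeholders, not actual ODP resources.

```python
# Hedged sketch: rebuild an environment from scratch using only artifacts held
# in version control (an ARM template), treating the infrastructure as disposable.
import subprocess

RESOURCE_GROUP = "odp-demo-rg"   # hypothetical
LOCATION = "westus2"             # hypothetical
TEMPLATE = "infra/main.json"     # ARM template kept in version control


def run(*args: str) -> None:
    subprocess.run(args, check=True)


def rebuild_from_scratch() -> None:
    # Dispose of the environment entirely...
    run("az", "group", "delete", "--name", RESOURCE_GROUP, "--yes")
    # ...then recreate it using nothing but what is in source control.
    run("az", "group", "create", "--name", RESOURCE_GROUP, "--location", LOCATION)
    run("az", "deployment", "group", "create",
        "--resource-group", RESOURCE_GROUP, "--template-file", TEMPLATE)


rebuild_from_scratch()
```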

3.2 Principles

To build consistent, healthy, production-ready applications, incorporate the following practices into your development workflow from the beginning.

• Design for operations at the beginning. Application Teams will be empowered to own more of their footprints; operate what you build

• Architect for failure; optimize for detecting and recovering from failures quickly (MTTR)

• Optimize applications for total cost of ownership (TCO), for example by turning resources off when not needed

• Minimize work in the GUI, instead favoring automation and configuration-as-code through Terraform or ARM templates

• Everything needed to run and operate the platform that is not a secret will be managed by configuration management

• All configuration must be stored in a centralized version control system (TBD) unless it is a secret


• All changes should be tested locally and then tested in the staging environment, either manually or, optimally, automatically by our pipeline

• A Pull Request (PR) is created that addresses a required change. The change should be reviewed by a peer and merged by the reviewer

• Each application and accompanying platform will include monitoring; examples: https://cloud.gov/docs/ops/continuous-monitoring/

• Zero-downtime deploys using blue/green deploys and other tactics (a sketch follows this list)

• Platform as a service (PaaS) is a plus and a minus. Columbia may not always get to choose when changes occur.

• RBAC (Role Based Access Control) to maintain security and minimize unintended changes
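As one hedged illustration of the zero-downtime tactic above, the sketch below deploys to a staging slot, checks a health endpoint, then swaps the slot into production with the Azure CLI. The app name, slot names and health URL are assumptions for the example.

```python
# Sketch only: promote a verified staging slot into production so users never
# see a broken version (one way to realize blue/green deployments on Azure).
import subprocess
import urllib.request

RESOURCE_GROUP = "odp-demo-rg"   # hypothetical
WEB_APP = "odp-product-web"      # hypothetical


def staging_is_healthy() -> bool:
    url = f"https://{WEB_APP}-staging.azurewebsites.net/health"  # assumed endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False


def swap_into_production() -> None:
    subprocess.run(
        ["az", "webapp", "deployment", "slot", "swap",
         "--resource-group", RESOURCE_GROUP, "--name", WEB_APP,
         "--slot", "staging", "--target-slot", "production"],
        check=True,
    )


if staging_is_healthy():
    swap_into_production()
```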


CHAPTER 4

CI/CD Pipeline requirements

4.1 Vision

• The CI/CD pipeline will be the only mechanism that delivers changes to production environments (regardless of where they are hosted, including On-Prem, Azure, AWS, etc.).

• A reasonably familiar person should be able to initiate and debug a pipeline even if they did not create the underlying artifacts or services.

• More broadly, the deployment pipeline’s job is to detect any changes that will lead to problems in production. These can include performance, security, or usability issues. A deployment pipeline should enable collaboration between the various groups involved in delivering and operating solutions and provide everyone visibility into the flow of changes in the system, together with a thorough audit trail.

• The key test is that a business sponsor could request that the current development version of the software be deployed into production at a moment’s notice, and nobody would bat an eyelid, let alone panic.

• The pipeline delivers EVERYTHING required to operate a service/application including:

– test cases

– monitoring

– infrastructure

– containers

– platforms

– permissions

– configuration

– API calls

– runbooks

– etc.


4.2 Requirements (MVP)

• Microsoft developers can leverage must-have features of Visual Studio or Visual Studio Team Services (VSTS) while interacting with the pipeline

– PaaS Requirements: Logic Apps, APIs, etc.

– IaaS Requirements: ?

• Pipeline as code: with “Pipeline as Code”, your code, your automation and your orchestration are now committed to source code management. The pipeline has the exact same versioned lifecycle, helping you ensure long-term maintainability.

• IT Auditor, Security, Change Review considerations

• The pipeline should respect idempotence principles (see the sketch after this list)

• Automated Builds : Generate artifacts from source following directives

– MS Build

– Make

– Rake

• Automated Tests: Get insight into the application behavior and reliability. We should have any TDD scripts alongside the application & infrastructure configuration

– Unit Tests - JUnit

– Load Tests - JMeter, Gatling

– Acceptance Tests - Cucumber, Selenium

• Automated Release : Create and package release to the specified version endpoint

– Artifact repository - Nexus, Artifactory, etc.

– Object storage - Azure Blob, S3, On premise

• Automated Deployment : Deploy any specific version (including prior versions) to the target/region

– Infrastructure provisioning - AWS, Azure, Terraform, VMware

– Server provisioning - Chef

– Application Provisioning

• Pipeline Orchestration

– (Preferred) GitLabCI

– TravisCI

– Jenkins

– Bamboo
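The idempotence requirement above can be pictured with a small, hypothetical sketch: running the same deployment step twice leaves the environment unchanged the second time, instead of failing or creating duplicates.

```python
# Illustrative sketch of an idempotent pipeline step. The "deployed state" is a
# plain dict standing in for a real environment query and provisioning call.
DEPLOYED = {}  # service -> deployed version


def current_version(service: str) -> str:
    # Placeholder: in practice this would query the target environment.
    return DEPLOYED.get(service, "none")


def deploy(service: str, version: str) -> str:
    if current_version(service) == version:
        return "no-op (already at desired version)"
    DEPLOYED[service] = version   # placeholder for the real deployment action
    return f"deployed {service} {version}"


print(deploy("product-service", "v3"))   # performs the deployment
print(deploy("product-service", "v3"))   # second run changes nothing
```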

4.3 Resources

• Continuous Delivery

• Deployment Pipeline


CHAPTER 5

Vision

1. In my mind these demos are a powerful mechanism to educate the broader organization on the benefits of Agile. Let’s not lose any opportunities to advance the state of Agile / DevOps / Continuous Improvement at Columbia

2. I’d like to weave a connected story through our various activities, using the product API service that provides a coherent theme.

3. Over the next several demos we should be able to show a similar storyline that either introduces new components or goes deeper into a given area.

4. Optimally, I’d like to have each of you demonstrate something so that you will get your chance to shine, but I’d also like for us to gain experience running a well-orchestrated demo that has nearly seamless handoffs. We shouldn’t necessarily feel like everyone has to demo a component each sprint if it doesn’t make sense. We should record our demo on WebEx so that folks that missed the demo can take a look and we can go back and measure our progress as a sprint team

5. Minimum Viable Process (Demo)

6. Live demos > Cooking Show demos > PowerPoint demos

7. Demo scripts will consistently be available prior to sprint planning and will guide stories & backlog items. Every day we could run a “build” to gauge progress against the demo


CHAPTER 6

ODP Demo #1

6.1 Intro - 2 mins

1. Ginger King will kick off the demo and introduce the team members and serve as head MC

6.2 Dynamics 365 - 5 mins

1. Chris Lundy will show the enterprise D365 site

6.3 Product Service - 15 mins

1. Jennifer Canney will demonstrate the Product Service V1

2. Shane will introduce Product Service API (V1)

3. Shane will introduce the API gateway and show the Product Service API (V1) with Swagger

4. Shane will modify the existing Product Service V1 (real time) to expose new fields. This process will create Version 2 of the Product Service

5. Shane will return to Swagger to show that both versions are concurrently available (data consumers not yet ready for version 2 could change on their own schedule); a sketch follows this list

6. Jennifer Canney will show the documentation for the Product Service (V1) in "read the docs"

7. Jason Knowles will publish a new markdown document for version 2 of the API in "read the docs"
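The concurrent-versions point in step 5 can be sketched as follows. This is an illustrative Flask example with invented routes and fields, not the actual Product Service implementation; the gateway would simply front both paths.

```python
# Sketch: v1 and v2 of a product API served side by side, so consumers that are
# not ready for v2 keep calling v1 and migrate on their own schedule.
from flask import Flask, jsonify

app = Flask(__name__)

PRODUCTS = {"1546551": {"name": "Bugaboo Jacket", "color": "Black"}}


@app.route("/v1/products/<sku>")
def product_v1(sku):
    p = PRODUCTS.get(sku, {})
    return jsonify(sku=sku, name=p.get("name"))


@app.route("/v2/products/<sku>")
def product_v2(sku):
    # v2 exposes an additional field without breaking v1 consumers.
    p = PRODUCTS.get(sku, {})
    return jsonify(sku=sku, name=p.get("name"), color=p.get("color"))


if __name__ == "__main__":
    app.run(port=8080)
```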

6.4 Sample Web App - 5 mins

1. Brooks will show an existing sample Web Application


2. Brooks will demonstrate a chaos event (removing the resource group from Azure)

3. Brooks will demonstrate re-deploying app via CI/CD pipeline

6.5 Automated Technical Documentation - 5 mins

1. Bill: starting from a blank template site, the technical documentation for a sample application will be dynamically built and published to an Azure website
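One hedged way such a build-and-publish step could look, assuming MkDocs is used for the markdown site and the Azure CLI publishes it to a storage account static website; the account name is hypothetical.

```python
# Sketch: build the documentation site from markdown sources and push the
# generated static files to the $web container used for static hosting.
import subprocess

STORAGE_ACCOUNT = "odpdocsdemo"   # hypothetical

# Build the static site from the markdown sources into ./site
subprocess.run(["mkdocs", "build"], check=True)

# Upload the generated site to the storage account's static-website container
subprocess.run(
    ["az", "storage", "blob", "upload-batch",
     "--account-name", STORAGE_ACCOUNT,
     "--destination", "$web",
     "--source", "site"],
    check=True,
)
```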

6.6 Omniture - 5 mins

1. Craig Rowley will demonstrate Omniture and how it will enhance ecommerce scenarios

6.7 Data Factory ETL Patterns - 5 mins

1. Casey will demonstrate a custom application to build mappings to generate repeatable, reusable JSON documents

6.8 Columbia Blog Post - 2 mins

1. As time allows, Scott will push a blog post of the demo


CHAPTER 7

ODP Demo #2

7.1 Vision

Demonstrate an end-to-end “build” that highlights functionality (but not frameworks) with emphasis on the product service from a consumer point of view. We will run this build several times in the next sprint. Ideally, the entire demo can be run from one laptop versus switching between team members’ laptops.

As required, we may need to record a portion of the demo if there are substantial wait periods associated with our items.

7.2 Adobe Analytics & Power BI

• Craig will demonstrate a Power BI visualization of data flowing from Adobe Analytics to Power BI for an Ecommerce scenario (bonus points: using the Azure to-be subscription/resource group)

7.3 Identity Service v1

• A team member will demonstrate an MVP identity service. This scenario is important so that we can standardize using group role access.

7.4 Product Service v3

The primary purpose of Product Service v3 is to show how quickly net new data will flow into various use cases (Dynamics, Web, Power BI) versus traditional ETL patterns.

• A team member will demonstrate an MVP product catalog web app (consuming Product API v3) and show a Columbia product including an image with several example data attributes. This scenario is important as it simulates what an external consumer of ODP (such as REI) would experience using a Columbia Product API. –> VSTS PBI 3243

• Chris will demonstrate a Dynamics 365 product catalog (consuming Product API v3 or data factory). This scenario is important as it demonstrates basic integration between Columbia’s data and the ODP integration strategy. –> VSTS PBI 3571

• Craig will demonstrate an ad hoc Power BI visualization of serving up some MVP data from the data lake. We should show a quick download of data into a flat file.

• Lakshmi (behind the scenes) will present new data to the data lake for the depicted product.

• Shane (behind the scenes) will present new data to Cosmos DB for the depicted product (see the sketch after this list).

• Bill (behind the scenes) will break the Product Service (v3) to generate an MVP monitoring/alerting event. This scenario is important as it will help teams to understand how they will work with solutions in the future. –> VSTS PBI 3456

• Bill (behind the scenes) will fix the Product Service (v3) by redeploying from Source to generate an MVP monitoring/alerting event. This scenario is important as it will help teams to understand how they will work with solutions in the future. –> VSTS PBI 3454

• Shane will demonstrate that the MVP product catalog web app (Product API v3) shows net new data. This is important because we want to show how quickly data can show up in the endpoint without ETL / Informatica processing –> VSTS PBI 3243

• Chris will demonstrate that the Dynamics 365 product catalog (Product API v3 or data factory) shows net new data. This is important because we want to show how quickly data can show up in the endpoint without ETL / Informatica processing
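The “present new data to Cosmos DB” step above could look roughly like this sketch, assuming the azure-cosmos Python package; the endpoint, key, database, container and the net-new field are all placeholders for the demo.

```python
# Sketch: upsert a product document carrying a brand-new attribute so API
# consumers can see it immediately, without waiting for an ETL cycle.
from azure.cosmos import CosmosClient

ENDPOINT = "https://odp-demo.documents.azure.com:443/"   # hypothetical
KEY = "<account-key>"                                    # never hard-code in practice

client = CosmosClient(ENDPOINT, credential=KEY)
container = client.get_database_client("odp").get_container_client("products")

container.upsert_item({
    "id": "1546551",
    "sku": "1546551",
    "name": "Bugaboo Jacket",
    "fit": "Regular",   # the net-new field surfaced by Product Service v3
})
```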

7.5 Product Service v4

The primary purpose of Product Service v4 is to further demonstrate that existing consumers can choose to move to Product Service v4 if they are ready or can continue to use Product Service v3. v4 will be the vehicle to indirectly demonstrate many of our foundational frameworks such as CI/CD.

• Brooks (behind the scenes –> cooking show method) will push an enhancement to the Product API (now version v4) to version control in the Development Subscription. The CI/CD pipeline will deploy version 4 so that our sample web application & Dynamics 365 can consume new fields. Associated frameworks should be updated at check-in (but not demonstrated): –> VSTS PBI 3455

– Read the Docs Markdown

– TDD / Specflow / NUnit

– MOQ for Sample Data

– Code Quality (SonarQube)

– Code Security (Veracode)

– Required Azure Tags for managing cost

• Brooks (behind the scenes –> cooking show method) will promote the Product Service v4 to the Production Subscription using the CI/CD pipeline

• A team member will demonstrate the MVP notification that is generated after a new version is deployed to Production.

• A team member will demonstrate that the MVP product catalog web app in the Production Subscription (Product API v4) shows net new data


7.6 Azure Cost Dashboard

• Bill will demonstrate the CIO Azure cost dashboard in Power BI. This is an important scenario because we want to start to educate stakeholders on costs and cost components as well as show that they will be able to monitor costs for their own cost centers. Bonus points for Cosmos DB.


CHAPTER 8

This is a test of the table rendering

| This  | IS    | A     | Table |
| ----- | ----- | ----- | ----- |
| col 1 | col 2 | col 3 | col 4 |


CHAPTER 9

Implementation Strategy

The implementation strategy is a three-pronged approach that includes foundational, implementation and added capabilities. While each of these can be implemented separately, it is the goal to use an Agile approach and implement them in unison, thus delivering functionality quickly and in tandem.

While this is a newer concept to Columbia, it has worked effectively in many organizations and will speed up our work. Though it requires more focus and staffing up front, as the teams start delivering functionality the organization will embrace the work and the teams will start to move faster and deliver better functionality.

Within each prong there may be many teams. For instance, we are building the Open Data Platform, a foundation for all integrations and solutions, while at the same time delivering data and analytics as part of a McKinzie project. Another example is with the functional teams. There are 5 teams that deliver work, but they all work toward the common goal of making Dynamics 365 functional and modernizing the Columbia greater omni-commerce environment.


9.1 Foundation

The foundation of the Global Retail Platform starts with the Open Data Platform (ODP) and Microsoft Azure. The ODP provides access to cloud services, compute, data management and API functionality through the Digital Hybrid and Cognitive Integration Architecture (DHCI). The ODP will be delivered in phases.

The following phases are in order of importance. As the team decomposes these capabilities into user stories and tasks, they will facilitate the actual implementation of key resources.

Each group and phase has links to the Specflow Feature Files used to implement the functionality. Each Feature File has automated testing associated with it.


CHAPTER 10

ODP Frameworks

10.1 Automation Framework for Azure and Development

The Automation Framework for Azure and Development is designed to rapidly onboard developers and ensure they are adhering to Columbia’s development standards. This framework also includes code quality checks, automated testing, vulnerability testing and a delivery pipeline. The pipeline runs automatically with code check-in. Included in this pipeline are Azure templates that automatically deploy the assets into the appropriate region in Microsoft Azure. This is to ensure compliance and cost management.

10.2 Data Management Framework with ADW and ADLS

The Data Management Framework with Azure Data Warehouse and Azure Data Lake Store is designed to ensure that all data external to Columbia, whether from third-party vendors or other partners, is retrieved and made available to any authenticated consumer. The Azure Data Warehouse contains conformed data and is intended for heavy workloads, whereas the Azure Data Lake Store is intended to be the location where all data is stored first and then made available for cloud-to-cloud integrations or cloud-to-on-premises interfaces.
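A minimal sketch of the “land raw data in the lake first” pattern, assuming the azure-storage-file-datalake package; the account, file system and paths are hypothetical.

```python
# Sketch: store a third-party extract in the Data Lake Store as-is; conformed
# copies for the Azure Data Warehouse are produced downstream.
from azure.storage.filedatalake import DataLakeServiceClient

ACCOUNT_URL = "https://odpdatalake.dfs.core.windows.net"   # hypothetical
CREDENTIAL = "<account-key>"                               # prefer AAD in practice

service = DataLakeServiceClient(account_url=ACCOUNT_URL, credential=CREDENTIAL)
file_system = service.get_file_system_client("raw")

file_client = file_system.get_file_client("omniture/2018-06-09/clickstream.csv")
with open("clickstream.csv", "rb") as data:
    file_client.upload_data(data, overwrite=True)
```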

10.3 API Service Fabric and Gateway

The API Service Fabric and Gateway is the new Columbia Sportswear Marketplace. This marketplace is intended to be used by developers, whether internal or external, application designers who want to use our data, or third-party integrators who will consume and publish data to the marketplace. The marketplace is also where business partners are able to purchase data from Columbia. This data may be in the form of APIs, applications or mashups.


10.4 Code Quality and Security Framework

The Code Quality and Security framework is built into the development environment and is required for any new development work. This framework utilizes SonarQube and Veracode. SonarQube is an open source solution that is tied into the build process. When a developer builds and publishes their code to the repository, the code quality checks run and return results not only to the developer but also display them in a dashboard. As a start, only critical and high issues are to be resolved. As Columbia matures, medium issues may be resolved as well.
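As an illustration of how the “critical and high first” policy could be checked after a build, the sketch below queries SonarQube’s issues search API for unresolved blocker and critical issues; the server URL, project key and token are placeholders.

```python
# Sketch: list unresolved blocker/critical issues for a project via SonarQube's
# /api/issues/search endpoint (token passed as the basic-auth username).
import requests

SONAR_URL = "https://sonarqube.example.com"   # hypothetical
PROJECT_KEY = "odp-product-service"           # hypothetical
TOKEN = "<analysis-token>"

resp = requests.get(
    f"{SONAR_URL}/api/issues/search",
    params={
        "componentKeys": PROJECT_KEY,
        "severities": "BLOCKER,CRITICAL",
        "resolved": "false",
    },
    auth=(TOKEN, ""),
    timeout=30,
)
resp.raise_for_status()
for issue in resp.json().get("issues", []):
    print(issue["severity"], issue["message"])
```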

10.5 Migration Delivery Framework

Critical to the work with Microsoft, the Migration Delivery Framework is designed to make repeatable pipelines and data factories that prepare and move data into the primary staging database for Microsoft Dynamics 365. It needs to be repeatable and automated because, as we manage datasets and start our migration steps, we will do several “dry runs” to move data from the legacy environments into the Dynamics environment. If these tasks are automated, it reduces the likelihood of errors and mishaps during critical phases of the move to production.

10.6 Service Bus Framework and Implementation

The Service Bus Framework and implementation strategy is designed to create the topics necessary to store transactional data in a persisted state. This pub/sub model allows the consumers to determine when to remove data from the topic and when to archive it. The Service Bus Framework is a global, “build once, use many” framework and solution.
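A hedged sketch of the topic-based pub/sub pattern using the azure-servicebus Python package; the connection string, topic and subscription names are placeholders. Each subscription owns its copy of a message and decides when to complete (remove) it.

```python
# Sketch: publish a transaction to a topic once; a subscriber receives its own
# copy and removes it from the subscription when processing is done.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"   # hypothetical
TOPIC = "retail-transactions"
SUBSCRIPTION = "analytics"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Publisher: persist the transaction on the topic.
    with client.get_topic_sender(TOPIC) as sender:
        sender.send_messages(ServiceBusMessage('{"order": "12345", "total": 89.99}'))

    # Consumer: this subscription reads its copy and completes it when done.
    with client.get_subscription_receiver(TOPIC, SUBSCRIPTION) as receiver:
        for msg in receiver.receive_messages(max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```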

10.7 Functional Service Implementation

The Functional Service implementation consists of rapid prototyping and iteration. Each iteration provides more functionality and meets the requirements of the business users. This implementation strategy allows Columbia developers to work quickly and deliver quality, production-ready services within 2-3 weeks.


CHAPTER 11

Dynamics 365 Ecosystem and Automation

11.1 D365 Sandbox Initiated

11.2 Enterprise Product Service v3

11.3 Enterprise Identity Service

11.4 Continuous Integration Activation

11.5 Continuous Test Automation

11.6 Reporting Connectivity to Azure

11.7 Migrations to D365


CHAPTER 12

3rd Party Boundary Integrations (Quick Wins)

12.1 Omniture Services Exposed and Data Stored Automatically

12.2 Convert Friends and Family into D365

12.3 Shopper Trak Exposed and Data Stored Automatically

12.4 Store Force Exposed and Data Stored Automatically

12.5 JDA data to cloud database for sunset migration

12.6 Ceridian Gift Card Exposed and Data Stored

12.7 SVS Gift Card Exposed and Data Stored

12.8 Enterprise Product Service Revision


CHAPTER 13

Functional APIs (Integration Frameworks)

13.1 DAM Exposed and Data Stored

13.2 Demandware Functional API

13.3 SAP Functional API


CHAPTER 14

HR Systems

14.1 PeopleSoft Exposed and Data Stored

14.2 Taleo Exposed and Data Stored

14.3 Taxware Exposed and Data Stored

14.4 Kronos Exposed and Data Stored

14.5 ADP Exposed and Data Stored


CHAPTER 15

Creative Solutions

15.1 Adobe Analytics Exposed

15.2 Adobe AEM Exposed

15.3 Adobe DTM Exposed

15.4 Adobe Scene 7 Exposed

15.5 Adobe Target Exposed

15.6 Epsilon Email Exposed


CHAPTER 16

On Premises Integrations

16.1 Maple Lake Encapsulated

16.2 TM1 Encapsulated


CHAPTER 17

Payment Solutions

17.1 Merchant Link Exposed and Monitored

17.2 Cyber Source Exposed and Monitored

17.3 PayPal Exposed and Monitored


CHAPTER 18

Store Solutions

18.1 Store Setup and Registration

18.2 RGIS Exposed

18.3 OGone/Ingenico Registered and Exposed

18.4 Sunset Chirpify
