Page 1:

© NYSE Euronext. All Rights Reserved.

Deploying Data Centers with Puppet – A user’s perspective

May 27, 2010

Page 2:

Objective of this Presentation

Share with the Puppet community the solution arrived at by NYSE Euronext's System Architecture and Engineering (SAE) team to address the technical challenges imposed by the company's business strategy:

• Consolidate various data centers across the globe into two brand-new, state-of-the-art facilities, one in the US and one in Europe;

• Adopt a global standard process to build and configure servers and applications across multiple countries and teams;

• Quickly deploy changes at large scale in an automated way.

Page 3:

Main Tools Used in the Solution

• RHEL Kickstart
  – A single, very frugal kickstart profile for each RHEL release for the entire company.

• RH Network Satellite
  – Manages packages for the OS, third-party, and home-grown RPMs.
  – Manages patches and upgrades.

• Puppet
  – The workhorse of this model.
  – A common framework to apply the global standard server configuration and the particularities of each environment and application.

Page 4:

The Three Layer Approach

We have organized our configuration needs into three layers:

Base: All NYX global customizations on top of the RHEL default settings. These must be propagated across the enterprise.

Example: global standard size for /usr file system, kernel parameter kernel.core_pattern = /var/crash/core.%e.%p.%h

Zone (or environment or network): Configuration items common to a specific network or environment. Zone layer inherits the Base layer with the ability to override anything if necessary.

Example: /etc/resolv.conf file across one given production network.

Application: Any specific configuration required by any given application. This layer inherits the Zone layer and has the ability to change anything set on the previous two layers.

Example: the UTP application requires a 50GB /appl file system and a "qt" package installed.
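A minimal sketch of what this application layer could look like in Puppet (the class name utp::setup is an assumption for illustration, not the actual NYX code; the zone class it includes is the trading::setup shown later in this deck):

class utp::setup {
  # Pull in the zone layer, which in turn pulls in the base layer
  include trading::setup

  # Application-specific requirement from the example above
  package { "qt":
    ensure => installed,
  }

  # The 50GB /appl file system would be declared here as well,
  # e.g. via the site's lvm::config_lvm define shown later in this deck
}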

Page 5:

The Puppet modules behind this approach

Page 6:

An example of the modules-base manifests

• The NYX Global Build customizes 250+ items on top of the RHEL default OS
• All servers include the entire modules-base via the module name 'base'
• Example:

class base::setup {
  include ssh::setup
  include banners::setup
  include resolver::setup
  include ntp::setup
  include sysctl::setup
  include profile::setup
  include services::setup
  include locale::setup
  include email::setup
  include yum::setup
  include crashdump::setup
  include bootloader::setup
  include rclocal::setup
  include sudo::setup
  include hardware::setup
  include tcpwrappers::setup
  include firewall::setup
  include etc_services::setup
}

Page 7:

An example of the modules-base manifests

class sysctl::setup {
  # Configure kernel core dumps
  sysctl { "kernel.core_pattern":
    val    => "/var/crash/core.%e.%p.%h",
    ensure => present,
  }
}

class profile::setup {
  # Deploy custom /etc/profile
  file { "/etc/profile":
    owner  => root,
    group  => root,
    mode   => 0644,
    source => "puppet:///profile/etc_profile",
  }
}
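The sysctl resource used above is not a built-in Puppet type; it is presumably a site-defined type. Purely as a hedged illustration (not the actual NYX code), a minimal define with the same interface could persist the value with Augeas and apply it to the running kernel:

define sysctl($val, $ensure = present) {
  # Persist the setting in /etc/sysctl.conf
  augeas { "sysctl-${name}":
    context => "/files/etc/sysctl.conf",
    changes => "set ${name} ${val}",
  }

  # Apply it to the running kernel whenever the persisted value changes
  exec { "sysctl-apply-${name}":
    command     => "/sbin/sysctl -w ${name}='${val}'",
    refreshonly => true,
    subscribe   => Augeas["sysctl-${name}"],
  }
}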

Page 8:

An example of the modules-zones manifests

The organization has 20+ zones (and growing) across the globe, including production, QA, and development networks, each one with its own settings.

Example of a zone setting:

class trading::setup {
  $country             = 'us'
  $searchpath          = 'nyx.com'
  $domain              = 'nyx.com'
  $nameservers         = [ "10.0.X.X", "10.0.X.X" ]
  $timeservers         = [ "10.0.X.X version 3 prefer", "10.0.X.X" ]
  $snmp_ro_communities = [ "dfasdfasdfasdf" ]
  $snmp_rw_communities = [ "sdfasdfasdfasd 127.0.0.1" ]
  $snmp_dlmods         = [ "cmaX /usr/lib64/libcmaX64.so" ]
  $trapsink            = [ "sdfasdfasd" ]
  $syslogserver        = "10.0.X.X"
  $postfix_mailhost    = "mailrelay.nyx.com"
  $ilo_licensed        = "true"
  $ilo_password        = "my_password"

  # Including the base classes
  include base::setup
}
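These zone variables are visible to the base classes included afterwards thanks to the dynamic variable scoping of the Puppet 0.2x releases used at the time. As an illustration under that assumption (the real resolver module is not shown in the deck), a resolver::setup class could render /etc/resolv.conf from the zone's $domain, $searchpath and $nameservers:

class resolver::setup {
  file { "/etc/resolv.conf":
    owner   => root,
    group   => root,
    mode    => 0644,
    content => template("resolver/resolv.conf.erb"),
  }
}

# resolv.conf.erb (hypothetical template, reading the zone variables)
# domain <%= domain %>
# search <%= searchpath %>
# <% nameservers.each do |ns| -%>
# nameserver <%= ns %>
# <% end -%>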

Page 9:

An example of the modules-app manifests

• 60+ applications requiring their own settings

• We try to write each application's Puppet manifest so that it is pluggable into any zone

Example of one application manifest:

class sg::setup {
  package { ["libxslt", "libmng"]:
    ensure => present,
  }

  lvm::config_lvm { "tcpdump":
    mount_point => "/tcpdump",
    lvname      => "ltcpdump",
    vgname      => "vg00",
    lvsize      => "5G",
    fstype      => "ext3",
    owner       => "myuser",
    group       => "users",
    mode        => "0755",
  }
}
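lvm::config_lvm is a site-specific define that is not shown in the deck. Purely as a hedged sketch of what such a wrapper might chain together (names, commands and defaults are assumptions, not the NYX implementation), it could create the logical volume, make the file system, and mount it:

define lvm::config_lvm($mount_point, $lvname, $vgname, $lvsize, $fstype,
                       $owner = "root", $group = "root", $mode = "0755") {
  # Create the logical volume only if it does not exist yet
  exec { "lvcreate-${lvname}":
    command => "/usr/sbin/lvcreate -L ${lvsize} -n ${lvname} ${vgname}",
    creates => "/dev/${vgname}/${lvname}",
  }

  # Build the file system only when the volume has just been created
  exec { "mkfs-${lvname}":
    command     => "/sbin/mkfs -t ${fstype} /dev/${vgname}/${lvname}",
    refreshonly => true,
    subscribe   => Exec["lvcreate-${lvname}"],
  }

  # Mount point directory and permissions
  file { $mount_point:
    ensure => directory,
    owner  => $owner,
    group  => $group,
    mode   => $mode,
  }

  # Mount it and record it in /etc/fstab
  mount { $mount_point:
    ensure  => mounted,
    device  => "/dev/${vgname}/${lvname}",
    fstype  => $fstype,
    options => "defaults",
    require => [ Exec["mkfs-${lvname}"], File[$mount_point] ],
  }
}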

Page 10:

How the three layers work together

• The zone class usually sets variables that will be used by modules-base for configuration.

• The zone class necessarily includes the modules-base.

• All servers must belong to one zone.

• The application module is coded to be pluggable into any zone (e.g., the match engine application module can run in a QA environment or a production environment).

• This gives us the ability to build anything anywhere, and to mimic the production configuration in a QA environment and vice versa.

• The modules-app can override settings inherited from modules-zones, and the modules-zones can override settings inherited from modules-base (see the sketch below).

• Using Puppet, we keep the server build process synchronized with the latest requirements of the applications.
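Overrides from an outer layer can be expressed with Puppet class inheritance. A hedged sketch (hypothetical class and file names, not the actual NYX manifests) of a zone replacing the global /etc/profile shipped by the base layer:

# modules-zones: override a resource declared in modules-base
class trading::profile inherits profile::setup {
  # Same File["/etc/profile"] resource, different payload for this zone
  File["/etc/profile"] {
    source => "puppet:///profile/etc_profile.trading",
  }
}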

Page 11:

The Last Piece: the Node Definition File

The node definition is the entity that maps the hostname to the zone and application modules. In addition, it holds the networking configuration of the server.

node "buildtest.nyx.com" { # Zone Class include corplan::setup # Application Class include sg::setup # Networking network::interface::setup {"bond0": ensure => present, config_type => bonded, ip => "10.0.0.100", netmask => "255.255.255.0", slave0 => "eth0", slave1 => "eth1", gateway => "10.0.0.254", arp_target => "10.0.0.254", arp_interval => "3000", restart_network => false, }}

Page 12:

NYX Build Server Components

• DHCP/DNS Server (optional)

• TFTP / PXE Server

• Satellite Server or Proxy

• Apache Server

• Puppet Server

• MySQL Server

• The build server resides on a separate network, called the build network, since we do not build any servers on production networks.

Page 13:

Automated Installation Process – Overview

PreBoot Phase (server out of the box, PXE boot for the 1st time):
• The default kickstart label executes a Perl script via wget to register the serial number and the MAC address, then reboots the server.
• On the build server, the register script looks up the serial number in a MySQL database and generates a customized PXE file. The custom PXE file carries the hostname and the OS version (RHEL4 or RHEL5).

Phase 0 (server comes back from the first reboot, PXE boot for the 2nd time):
• Starts the RHEL installation with the minimum set of packages and the global standard file system layout and sizes.
• The kickstart post-install script registers the new server on the Satellite server and assigns it to the appropriate channels. The Puppet client is installed just before the server is rebooted.

(Continued on the next slide.)

Page 14:

Automated Installation Process – Overview (cont.)

Phase 1 (server reboots for the third time, loads the lean OS from disk and runs the init script):
• An init.d script runs Puppet for the first time, and the client downloads its own configuration from the Puppet server.
• A node definition is mandatory at this point; the Puppet client configures the server to the NYX global base (default) configuration and networking.
• The server may then reboot or execute the Phase 2 init.d script directly.

Phase 2 (nyx-phase2 init script):
• In Phase 2, Puppet is executed for the last time to configure the server to its final zone (environment) and application settings.
• The Phase 2 run can be triggered directly from the Phase 1 init script (in which case it is executed on the build network) or once the server is on the production network.

Page 15:

PreBoot Phase

• A very simple kickstart profile launched by the default PXE label

• It does nothing other than execute a script via wget

• It registers the server with two parameters: the serial number and the MAC address

• The script creates a custom PXE boot file for the given MAC that contains the RHEL release and the hostname of the server

• Example of the kickstart pre-script:

%pre
SERVER=`grep 'url address' /tmp/anaconda.log | awk '{print $6}'`
cd /tmp && wget -q http://${SERVER}/pub/dmidecode
chmod +x /tmp/dmidecode
SN=`/tmp/dmidecode | grep "Serial Number" | head -1 | awk '{print $3}'`
MAC=`ifconfig | grep HWaddr | awk '{print $5}'`
wget -q http://${SERVER}/cgi-bin/register.pl?mac=$MAC\&sn=$SN
reboot

Page 16:

Phase 0

• A single kickstart profile for each RHEL release for the entire organization

• A bare-minimum set of packages installed

• Larger OS file system sizes than the RHEL defaults

• The server is registered to the Satellite

• The Puppet client (puppetd) is installed

• The nyx-firstboot script is set to run on the next reboot via init.d

Page 17:

Phase 0 – Fragment of the kickstart profile

%post --interpreter /bin/sh
echo "Installing puppet..."
yum -y install puppet facter ruby-rdoc ruby-shadow

# Puppet is installed via the activation key, but by default it will run as a daemon
echo "Chkconfig'ing puppet off..."
chkconfig puppet off

# Grab the nyx-firstboot script from the Satellite or proxy
RHN_SERVER=`grep '^serverURL=' /etc/sysconfig/rhn/up2date | cut -d'/' -f3`
echo "Detected Puppetmaster/RHN as: $RHN_SERVER"
echo "Downloading nyx-firstboot script..."
wget -O /etc/init.d/nyx-firstboot http://$RHN_SERVER/pub/glb/nyx-firstboot

# Make it executable and make it start on first boot
echo "Installing nyx-firstboot..."
chmod 755 /etc/init.d/nyx-firstboot
chkconfig --add nyx-firstboot

Page 18:

Phase 1

• It is run by the nyx-firstboot init script

• nyx-firstboot executes puppetd twice: once to configure itself and again to set the server to the base configuration

• The nyx-firstboot command lines that execute puppetd:

# Get the puppet server address, the same host as the RHN Proxy/RHN Satellite
PUPPET_SERVER=`grep '^serverURL=' /etc/sysconfig/rhn/up2date | cut -d'/' -f3`

# Run puppet once to configure itself and get SSL certs auto-signed
puppetd -tov --server=$PUPPET_SERVER --tags utilities::puppetclientbootstrap --no-noop

# Run puppet again to apply all base build manifests
FACTER_sys_building=true puppetd -tov --server=$PUPPET_SERVER --tags sys_building --no-noop
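The deck does not show how the sys_building tag and fact are consumed inside the modules. One plausible pattern, shown here only as an assumption (the class name is hypothetical), is to tag the base classes so --tags selects them, and to guard build-time-only resources on the injected fact:

class example_base::setup {
  # Make this class's resources selectable with: puppetd --tags sys_building
  tag("sys_building")

  # Only applied while the server is still on the build network,
  # where FACTER_sys_building=true was exported by nyx-firstboot
  if $sys_building == "true" {
    exec { "prepare-crash-dir":
      command => "/bin/mkdir -p /var/crash",
      creates => "/var/crash",
    }
  }
}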

Page 19:

Phase 2 (last phase)

The nyx-phase2 script runs puppetd again; this time puppetd fully executes the Puppet manifests, which configure the server entirely for its zone and application. nyx-phase2 is set to execute automatically from init.d or is triggered directly by nyx-firstboot.

Fragment of the script:

# Run puppet to apply application specific manifests

puppetd -tov --no-noop | /usr/bin/tee /tmp/puppet2b

if egrep -q "err:|warning:" /tmp/puppet*; then

logger -t nyx-phase2 "Found excessive errors on puppet logs!"

else

# Shutdown the server - to be shipped to the rack

logger -t nyx-phase2 "Shutting down the system! No errors found on the puppet logs!"

/usr/bin/poweroff

fi

Page 20:

Data Center Deployment: Numbers and Lessons

• One server gets completely installed in 20 minutes in unattended mode.

• We usually install up to 15 servers in parallel, but we first had to move the puppet master to run under the Apache web server; the default built-in web server proved not to be powerful enough.

• One individual can usually install up to 50 servers in a single day.

• The most difficult part was gathering the requirements of each application.

Page 21:

Ancillary Tools for This Project

• Subversion (extremely important):
  – We had to disseminate the culture of version control among many individuals and teams.
  – Each layer of the modules, as well as the node definitions, has its own Subversion repository.

• Web Subversion

• OpenGrok (to search the manifests)

• Redmine (ticket and change control)

• Tidal Scheduler – to execute the Puppet client once the servers are on the production network

Page 22:

Next Steps

• Run puppetd to deploy changes across the board on a day-to-day basis. This will require more of a cultural change and coordination across more departments.

• Run puppetd daily in report mode (non-intrusive, no-op) to detect discrepancies between reality and the Puppet manifests (see the sketch after this list).

• Use Puppet Dashboard as a central database of node definitions (as an External Node Classifier).

• Create more custom facts and types in Ruby (to reduce the number of execs inside the manifests).

• Use external resources to automate other configurations, such as Veritas Cluster.
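For the daily no-op run mentioned above, one option (a sketch under our own assumptions, not something shown in the deck) is to schedule puppetd itself with a Puppet cron resource:

cron { "puppet-noop-audit":
  # --noop reports what would change without applying anything
  command => "/usr/sbin/puppetd --onetime --noop --logdest syslog",
  user    => "root",
  hour    => 6,
  minute  => 30,
}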

Page 23:

Thank you!

Q & A

