
A VMware User’s Guide to Repatriating Cloud Instances with OpenStack

Written by: John S. Tonello, Global Technical Marketing Manager, SUSE®

SUSE Guide | www.suse.com


Table of Contents

Evolution of the Data Center

What You’ll Need

More Than a Hypervisor

Instance Basics

Deploy an Application

Deploy a Stack

Conclusion


Evolution of the Data Center

So, you’ve decided to repatriate some of your AWS or Azure applications to the OpenStack cluster you just spun up in your data center, but you and your team are scratching your heads. After all, OpenStack doesn’t look a whole lot like the VMware stuff you’re familiar with, and it bears only a passing resemblance to public clouds.

Where to start?

This SUSE Guide walks you through a simple OpenStack deployment so you can get comfortable with the environment. It

will help you figure out how to deploy your own apps by focusing first on the infrastructure and how it differs from a VM

environment. Then, you’ll step through a software-defined implementation of an application. In the end, you’ll be better

able to wrap your head around building your own on-premises cloud and be on your way to bringing key cloud apps back

in-house.

What You’ll Need

To get the most out of this guide, you need access to an OpenStack deployment. If you already have one up and running, great. If you don’t, you have several options: get a free trial of SUSE OpenStack Cloud, install Kolla-Ansible (the containerized OpenStack project), grab one of the downloadable training labs made for Windows, Mac or Linux, use a public instance via the OpenStack Public Cloud Passport or even deploy DevStack.

You need the following enabled in your stack:

Nova (compute)

Neutron (networking)

Glance (images)

Heat (for stack-building)

Horizon (dashboard)

Cinder block storage isn’t strictly required for this example, but it’s useful if you plan to scale beyond this sample project.
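If you’re not sure which of these services are enabled in your deployment, you can check the service catalog from the command line. This is a quick sanity check only; it assumes the openstack client is installed and your credentials are sourced, as described later in this guide:

$ openstack catalog list

You should see entries with types such as compute (Nova), network (Neutron), image (Glance) and orchestration (Heat).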

If you’re using the command line, you also need to have the python-heatclient package installed, because the stack create command is provided by the Heat client plugin rather than the base openstack command. To install it, use pip:

$ sudo pip install python-heatclient
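Once the client is installed, one quick way to confirm the Heat plugin is available is to list existing stacks. This assumes your credentials are already sourced; an empty list is perfectly fine on a fresh deployment:

$ openstack stack list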

More Than a Hypervisor

If you’re used to working with VMware, it’s important to think beyond basic server virtualization. VMware and Hyper-V provide hypervisors that make it easy to deploy virtual machines at scale. They’re good at this, and their tooling is arguably more polished than KVM or Xen, but public clouds and OpenStack move well beyond providing hypervisors for VMs.

For example, instead of placing a particular VM or group of VMs on a particular ESXi host in a VMware cluster,

OpenStack pools the resources from all your physical hosts and abstracts the hardware, networking and storage. It

decides where to place each instance based on previously allocated resources and assigns CPU, RAM and network

resources based on what is available in the cluster. Instead of relying on VM templates derived from previously built VMs,

OpenStack uses code to define the components of each instance so they can be consistently—and quickly—launched

and replicated.


This is much like how AWS, Azure or any other cloud provider enables rapid VM and application deployment. You choose your OS and the flavor and size of the instance you want, and you quickly have a server. The underlying infrastructure, including networking and storage, is abstracted away.

In this way, OpenStack instances are more ephemeral than the VMware VMs you might be used to. OpenStack instances can certainly be modified or kept running for as long as you want, but they can also be recreated at will with a simple command or text file. As long as you maintain the core building blocks (images, instance flavors, networking and storage), you can spawn new instances whenever you need them.

Contrast this with VMware, where resources are shared only among the VMs you eventually place on each specific host, identified in vSphere as a node. In the VMware paradigm, VMs and their resources generally follow specific nodes.

Figure 1: In this VMware vSphere example, ESXi nodes organized by regions form the VMware cluster.

With OpenStack, the resources of each bare-metal server or node are pooled and instances share those resources, which

are managed by the Nova Compute service. Nova keeps track of resources and uses them to spawn new virtual

machines, known in OpenStack as instances.


Figure 2: In this OpenStack example, resources from individual hosts are pooled and abstracted.

Instance Basics

To create a new instance, you must first have the building blocks in place. These are usually established when you deploy OpenStack itself and make decisions about networks, storage, CPU and RAM. For the basics, you need:

An image to boot: This is typically a Linux-based image, often in qcow2 format. Depending on the purpose, your images might be barebones and pretty small, or customized and fairly big. New images can be added at any time by simply uploading new OS files, which are generally small, pre-built disk images of your favorite Linux distribution. The OpenStack Glance service keeps track of every image you add. The example we show later in this guide uses the Fedora cloud image, available for download at https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.qcow2. You have many public OpenStack images from which to choose.

Figure 3: Images provide the base OS for OpenStack instances.
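If you prefer the command line, you can download the Fedora image and upload it to Glance yourself. This is a minimal sketch; the image name "fedora" is just the name used by the examples later in this guide:

$ wget https://download.fedoraproject.org/pub/fedora/linux/releases/29/Cloud/x86_64/images/Fedora-Cloud-Base-29-1.2.x86_64.qcow2
$ openstack image create --disk-format qcow2 --container-format bare \
  --file Fedora-Cloud-Base-29-1.2.x86_64.qcow2 --public fedora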


A flavor: In OpenStack, as in public clouds, instances come in flavors, such as m1.tiny, m1.small and m1.medium. These are arbitrary sizes that you can define. Each flavor specifies an amount of CPU, RAM, disk and other resources, so when you choose m1.small you know you’re choosing to create an instance that has 2GB of RAM, 1 CPU and a 20GB disk.
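Flavors are typically created by an administrator. As a sketch, a flavor matching the m1.small sizing described above could be defined from the command line like this:

$ openstack flavor create --vcpus 1 --ram 2048 --disk 20 m1.small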

A network: When first deploying OpenStack, you typically add a private network that traffic inside your

OpenStack cluster uses (something like 10.0.0.0/24). This enables instances to communicate with each other,

but not with the outside world. It’s common to add an external network from which you can draw floating IP

addresses that give the outside world access to your instances. Typically, OpenStack automatically assigns free

IP addresses from your various subnets to your instances and keeps track of them. Virtual routers manage the

traffic between your networks.

Figure 4: A simple, two-network topology. Here, demo-net is internal to OpenStack, public1 provides external connectivity and the

demo-router connects them.

Contrast that with VMware or KVM where, when launching VMs, you usually need to pre-determine the IP addresses and

hostnames you want to assign. OpenStack draws from pools automatically.
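For reference, a two-network topology like the one in Figure 4 can also be built from the command line. This is a sketch: the network and router names follow Figure 4, the 10.0.0.0/24 range matches the example above, and it assumes your deployment already has an external provider network named public1; the subnet name is illustrative.

$ openstack network create demo-net
$ openstack subnet create --network demo-net --subnet-range 10.0.0.0/24 demo-subnet
$ openstack router create demo-router
$ openstack router add subnet demo-router demo-subnet
$ openstack router set --external-gateway public1 demo-router
$ openstack floating ip create public1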

A key pair: Unlike stand-alone VMs, you don’t usually log into OpenStack instances with a username and

password. Instead, you use shared keys. OpenStack can generate keys for you (the private half of which you

need to download and keep safe) or you can upload an existing public key, such as ~/.ssh/id_rsa.pub.

These keys are identical to the types of shared keys you create on any Linux system with the ssh-keygen

command. You use these keys to log in to any instance:

$ ssh -i ~/.ssh/id_rsa centos@public-ip-address

Depending on the image you chose, you’ll use something like “centos”, “opensuse” or “ubuntu” as the username to log in.
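To upload an existing public key from the command line (the key pair name "mykey" here is just an example):

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

If you drop the --public-key option, OpenStack generates a new key pair and prints the private half, which you should save somewhere safe.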

A security group: Most versions of OpenStack (Rocky included) create a default security group that allows network traffic to access your instance. For example, port 22 for ssh is usually enabled. If you don’t associate a security group with your instance and set network rules for that group, all network ports on your instance will be closed to incoming and outgoing network traffic, meaning you won’t be able to interact with it. That is probably not useful.

Figure 5: A Security Group includes a basic set of ingress and egress rules that can be applied to any OpenStack instance.
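If your instance turns out to be unreachable, you can add ingress rules to the default security group from the command line. This minimal sketch opens ssh and HTTP, which is useful for the WordPress example later in this guide:

$ openstack security group rule create --proto tcp --dst-port 22 default
$ openstack security group rule create --proto tcp --dst-port 80 default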

Compare these steps with creating a VM in VMware or KVM. For those, you need to provide an .iso for the operating

system you want, set the compute resources, select a network interface, choose where you want to store the resulting

VM, and boot and configure that OS. With VMware, you choose a node in the cluster on which to run that new VM. With

KVM, you choose a storage path. By providing resources up front, OpenStack abstracts, or pre-loads, most of those

decisions, and each OS image is pre-configured and ready to run. There is no need to pre-boot or otherwise provision

them ahead of deploying applications.

Deploy an Application

You can create instances as you would on a typical VM and launch, say, a SUSE Linux Enterprise Server, openSUSE,

CentOS or any other Linux operating system virtual machine. As soon as OpenStack finishes spawning the instance, it’s

up, running and accessible as a raw system. You can create one or hundreds, depending on your available resources.

Figure 6: Use OpenStack’s Horizon dashboard to create one or more instances and treat them like VMs.
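The same thing can be done from the command line. This sketch reuses the image, flavor, network, key pair and security group names from the earlier examples; the instance name is arbitrary:

$ openstack server create --image fedora --flavor m1.small \
  --network demo-net --key-name mykey --security-group default \
  my-first-instance
$ openstack server list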

One of the strengths of OpenStack is its ability to script the creation steps associated with an entire application, not just

the virtual machine on which it runs. For example, instead of launching a CentOS instance and then installing Apache,

MariaDB, PHP and WordPress, you can define the entire stack at once with a Heat script.


Deploy a Stack

One of the examples that OpenStack uses to illustrate Heat scripts is a simple deployment of WordPress, the website

content management system. Let’s look at the underlying YAML of the Heat script so you can get a better understanding

of how stack-creation works.

Deploy WordPress

This OpenStack example uses a Fedora cloud image. It installs MariaDB, sets the MySQL root credentials, creates a “wordpress” database, installs WordPress and configures its database credentials, starts httpd.service and assigns an IP address from the default internal network. It does all that and more in just a couple of minutes. You can think of it as full-stack automation: the script deploys the “server” and everything you want to run on it.

When the deployment is finished, you can launch and configure WordPress at http://internal-ip-address/wordpress, or associate a floating IP address to access it from an external network at http://public-ip-address/wordpress/. If you want to start over, you can just re-run the script.
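If you deploy from the command line, as shown later in this guide, and want a truly clean start, you can remove the stack and all of its resources first. The stack name "teststack" matches the command-line example below:

$ openstack stack delete teststack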

The full script is available here: http://git.openstack.org/cgit/openstack/heat-templates/plain/hot/F20/WordPress_Native.yaml

This YAML file, which defines an instance called “WordPress” and all its required moving parts, contains a resources section that defines the instance (the underlying “server”) and the commands that set up MariaDB:

resources:
  wordpress_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_id }
      flavor: { get_param: instance_type }
      key_name: { get_param: key_name }
      user_data:
        str_replace:
          template: |
            #!/bin/bash -v
            yum -y install mariadb mariadb-server httpd wordpress
            touch /var/log/mariadb/mariadb.log
            chown mysql.mysql /var/log/mariadb/mariadb.log
            systemctl start mariadb.service
            # Setup MySQL root password and create a user
            mysqladmin -u root password db_rootpassword
            cat << EOF | mysql -u root --password=db_rootpassword
            CREATE DATABASE db_name;
            GRANT ALL PRIVILEGES ON db_name.* TO "db_user"@"localhost"
            IDENTIFIED BY "db_password";
            FLUSH PRIVILEGES;
            EXIT
            EOF
            sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf
            sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf
            sed -i s/database_name_here/db_name/ /etc/wordpress/wp-config.php
            sed -i s/username_here/db_user/ /etc/wordpress/wp-config.php
            sed -i s/password_here/db_password/ /etc/wordpress/wp-config.php
            systemctl start httpd.service
          params:
            db_rootpassword: { get_param: db_root_password }
            db_name: { get_param: db_name }
            db_user: { get_param: db_username }
            db_password: { get_param: db_password }

This section of the Heat script includes a yum install command, sed commands that place custom values into the database and WordPress configuration files, and other setup commands. The custom values themselves are supplied as parameters, either from an environment file or entered manually.

When you launch a new stack in the Horizon dashboard using this YAML file, you’ll be prompted to add all the items

outlined in the parameters section, including key_name, instance_type, image_id, db_name and other elements that tell

OpenStack just how to construct the stack.
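For reference, the parameters section of a Heat (HOT) template declares each of those inputs with a type, an optional default and a description. The following abbreviated sketch shows the general shape; the parameter names mirror those referenced in the resources section above, but the defaults and descriptions here are illustrative rather than copied from the template:

parameters:
  key_name:
    type: string
    description: Name of an existing key pair for SSH access
  instance_type:
    type: string
    default: m1.small
    description: Flavor to use for the WordPress server
  image_id:
    type: string
    description: Name or ID of the image to boot
  db_name:
    type: string
    default: wordpress
    description: Name of the WordPress database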

Using the Horizon dashboard, you can create this stack by clicking “+ Launch Stack” under Project -> Orchestration -> Stacks. Set the template source to “URL” and enter the URL of the WordPress_Native.yaml file: http://git.openstack.org/cgit/openstack/heat-templates/plain/hot/F20/WordPress_Native.yaml

Figure 7: Launch a stack using the Horizon dashboard and a Heat script available via URL.


Skip the environment source file for now and choose “Next.” OpenStack will download the remote YAML file and pre-populate the parameters referenced in the Heat script. Then add the following: a Stack Name, the name you gave your Fedora image (fedora in the above example) and the name of your ssh keypair. In the example, all database usernames and passwords are preset to “admin”. Click “Launch” to deploy.

Figure 8: Enter the various parameters to match your OpenStack environment.

If you want to try this same Heat script with a different OS image, such as openSUSE or Ubuntu, you’ll need to download

a copy of the file and edit it. For example, Fedora uses the yum package manager to install MariaDB. If you use

openSUSE, you would need to update those portions in order to use zypper commands. For Ubuntu, you would change

them to use apt.

In addition to using Horizon, you can use the command line to build a stack. You would use the same script and make sure you have python-heatclient installed, as described earlier. You would also need to source your openrc file to give yourself the authority to run the stack-creation process.

You can get your unique openrc file right from OpenStack. Click on your username in the top right of the Horizon console.

Figure 9: Download your unique openrc file from the OpenStack dashboard.


After you download the openrc file, you need to source it and enter your OpenStack password when prompted. If you’re

using an admin account, your openrc file will use your admin credentials:

$ source username-openrc.sh

Now you can run the Heat script using the --parameter flag for each parameter and giving the stack a name (shown here as “teststack”):

$ openstack stack create -t http://git.openstack.org/cgit/openstack/heat-templates/plain/hot/F20/WordPress_Native.yaml \
  --parameter key_name=kingdel \
  --parameter image_id=fedora \
  --parameter instance_type=m1.small \
  teststack

Once deployed, you can view the various aspects of the stack.

Figure 10: Details of the deployed application instance, in this case WordPress running in m1.small.
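From the command line, you can inspect the deployed stack, its resources and its outputs with the same Heat client, using the stack name from the example above:

$ openstack stack list
$ openstack stack show teststack
$ openstack stack resource list teststack
$ openstack stack output show teststack --all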


Conclusion

As with any software-defined infrastructure, OpenStack works hard to abstract away the resources required to build

applications and the virtual machines on which the applications live. It starts with pools of storage, CPU and RAM, and

whole subnets from which OpenStack can autonomously draw and assign resources and IP addresses. The result is a

cloud infrastructure that offers the agility of public clouds, provides greater flexibility and automation than VMware alone, and includes features that will save you and your team time and money.

With SUSE OpenStack Cloud, you can build a private cloud infrastructure with the operational agility, speed, scalability

and control to take full advantage of new business opportunities and rapidly evolving technology trends, such as DevOps

and containers. SUSE OpenStack Cloud is backed by the widest industry and open source community support, so it’s

ideal for developing new, innovative business workloads and DevOps environments, as well as for transforming traditional

data centers. It provides a feature-rich private cloud that you can trust to future-proof your investment.

Learn more at suse.com/products/suse-openstack-cloud/.

235-001079-001 | 05/19 | © 2019 SUSE LLC. All rights reserved. SUSE and the SUSE logo are registered trademarks of SUSE LLC in

the United States and other countries. All third-party trademarks are the property of their respective owners.

