(Course code AN11)
April 2011 edition

The information contained in this document has not been submitted to any formal IBM test and is distributed on an “as is” basis without any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will result elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

© Copyright International Business Machines Corporation 2009, 2011.
This document may not be reproduced in whole or in part without the prior written permission of IBM.

Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Trademarks

The reader should recognize that the following terms, which appear in the content of this training document, are official trademarks of IBM or other companies:

IBM® is a registered trademark of International Business Machines Corporation.

The following are trademarks of International Business Machines Corporation in the United States, or other countries, or both:

Active Memory™  AIX 5L™  AIX 6™
AIX®  AS/400®  BladeCenter®
developerWorks®  EnergyScale™  eServer™
i5/OS™  i5/OS®  Micro-Partitioning®
System p5®  System Storage®  System z®

Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other product and service names might be trademarks of IBM or other companies.
V6.0
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
© Copyright IBM Corp. 2009, 2011 Contents iii
Contents
Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Instructor Guide
iv PowerVM Virtualization I © Copyright IBM Corp. 2009, 2011
Checkpoint (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-85
Checkpoint (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-87
Checkpoint (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-89
Exercise: Introduction to partitioning . . . . . . . . . . . . . . . . . . 1-91
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-93
Physical location codes (2 of 2) . . . . . . . . . . . . . . . . . . . . . 2-97
Flexible service processor . . . . . . . . . . . . . . . . . . . . . . . . 2-99
Advanced System Management Interface (1 of 2) . . . . . . . . . . . . . . 2-102
Advanced System Management Interface (2 of 2) . . . . . . . . . . . . . . 2-105
ASMI example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-107
Checkpoint (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-111
Checkpoint (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-113
Exercise: System hardware components . . . . . . . . . . . . . . . . . . 2-115
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-117
Unit 4. Hardware Management Console maintenance . . . . . . . . . . . . . . 4-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
The major maintenance components . . . . . . . . . . . . . . . . . . . . . 4-4
Backup critical console data (1 of 3) . . . . . . . . . . . . . . . . . . . 4-6
Backup critical console data (2 of 3) . . . . . . . . . . . . . . . . . . . 4-9
Backup critical console data (3 of 3) . . . . . . . . . . . . . . . . . . 4-12
Scheduling backups (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . 4-14
Scheduling backups (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . 4-16
Check HMC code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
HMC update methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Install corrective service . . . . . . . . . . . . . . . . . . . . . . . 4-22
HMC corrective service . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Fix Central: Select HMC fixes (1 of 5) . . . . . . . . . . . . . . . . . 4-28
Fix Central: Select HMC fixes (2 of 5) . . . . . . . . . . . . . . . . . 4-30
Fix Central: Select HMC fixes (3 of 5) . . . . . . . . . . . . . . . . . 4-32
Fix Central: Select HMC fixes (4 of 5) . . . . . . . . . . . . . . . . . 4-34
Fix Central: Select HMC fixes (5 of 5) . . . . . . . . . . . . . . . . . 4-36
HMC software upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . 4-38
Prepare for HMC software upgrades . . . . . . . . . . . . . . . . . . . . 4-41
HMC reload procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 4-43
Managed system firmware update (1 of 2) . . . . . . . . . . . . . . . . . 4-46
Managed system firmware update (2 of 2) . . . . . . . . . . . . . . . . . 4-49
Examine current firmware level . . . . . . . . . . . . . . . . . . . . . 4-52
Obtaining new firmware . . . . . . . . . . . . . . . . . . . . . . . . . 4-55
Change LIC for current release (1 of 5) . . . . . . . . . . . . . . . . . 4-58
Change LIC for current release (2 of 5) . . . . . . . . . . . . . . . . . 4-61
Change LIC for current release (3 of 5) . . . . . . . . . . . . . . . . . 4-64
Change LIC for current release (4 of 5) . . . . . . . . . . . . . . . . . 4-66
Change LIC for current release (5 of 5) . . . . . . . . . . . . . . . . . 4-68
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-70
Exercise: HMC and managed system maintenance . . . . . . . . . . . . . . 4-72
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-74
Exercise: System power management . . . . . . . . . . . . . . . . . . . . 5-36
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
Unit 6. Planning and configuring logical partitions . . . . . . . . . . . . 6-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Partition environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
Partition resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-7
Dividing the system resources (1 of 2) . . . . . . . . . . . . . . . . . . 6-9
Dividing the system resources (2 of 2) . . . . . . . . . . . . . . . . . . 6-12
Creating partitions and profiles . . . . . . . . . . . . . . . . . . . . . 6-15
Memory resources (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . 6-17
Memory resources (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . 6-20
Memory usage (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . 6-23
Memory usage (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . 6-27
Processor resources . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-31
I/O resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-34
Virtual SCSI devices . . . . . . . . . . . . . . . . . . . . . . . . . . 6-37
Virtual Ethernet options . . . . . . . . . . . . . . . . . . . . . . . . 6-40
Create a logical partition: System state . . . . . . . . . . . . . . . . 6-42
Create logical partition wizard . . . . . . . . . . . . . . . . . . . . . 6-44
Partition ID and name . . . . . . . . . . . . . . . . . . . . . . . . . . 6-46
Profile name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-49
Set processor type . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-52
Configure dedicated processors . . . . . . . . . . . . . . . . . . . . . 6-54
Configure shared processors . . . . . . . . . . . . . . . . . . . . . . . 6-56
Configure memory mode: Dedicated (1 of 2) . . . . . . . . . . . . . . . . 6-58
Configure memory mode: Dedicated (2 of 2) . . . . . . . . . . . . . . . . 6-60
Configure memory mode: Shared (1 of 2) . . . . . . . . . . . . . . . . . 6-64
Configure memory mode: Shared (2 of 2) . . . . . . . . . . . . . . . . . 6-66
Configure I/O slots: Dedicated memory partition . . . . . . . . . . . . . 6-70
Virtual I/O adapters setup . . . . . . . . . . . . . . . . . . . . . . . 6-72
Logical Host Ethernet Adapter (1 of 3) . . . . . . . . . . . . . . . . . 6-74
Logical Host Ethernet Adapter (2 of 3) . . . . . . . . . . . . . . . . . 6-76
Logical Host Ethernet Adapter (3 of 3) . . . . . . . . . . . . . . . . . 6-78
Host Channel Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . 6-80
Optional settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-83
Check logical partition Profile Summary . . . . . . . . . . . . . . . . . 6-86
Editing a partition's configuration . . . . . . . . . . . . . . . . . . . 6-88
Additional configuration options (1 of 2) . . . . . . . . . . . . . . . . 6-90
Additional configuration options (2 of 2) . . . . . . . . . . . . . . . . 6-92
Checkpoint (1 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-96
Checkpoint (2 of 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-98
Exercise: Configuring logical partitions . . . . . . . . . . . . . . . . 6-100
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-102
Unit 8. Dynamic LPAR infrastructure . . . . . . . . . . . . . . . . . . . . 8-1
Unit objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
How DLPAR works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
Dynamic logical partitioning . . . . . . . . . . . . . . . . . . . . . . . 8-7
DLPAR operations: Overview . . . . . . . . . . . . . . . . . . . . . . . . 8-11
DLPAR operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-14
Add/remove dedicated processor operation . . . . . . . . . . . . . . . . . 8-16
Add/remove shared processing units operation . . . . . . . . . . . . . . . 8-19
DLPAR status (1 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . 8-22
DLPAR status (2 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . 8-24
DLPAR status (3 of 3) . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26
Move memory operation . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
Add I/O slots operation . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
AIX commands for I/O operations . . . . . . . . . . . . . . . . . . . . . 8-32
Removing I/O slots in AIX (1 of 2) . . . . . . . . . . . . . . . . . . . 8-35
Removing I/O slots in AIX (2 of 2) . . . . . . . . . . . . . . . . . . . 8-37
HMC move/remove I/O slots operation . . . . . . . . . . . . . . . . . . . 8-41
LHEA dynamic operations . . . . . . . . . . . . . . . . . . . . . . . . . 8-43
Move/remove an LHEA from the logical partition . . . . . . . . . . . . . 8-45
HMC command for DLPAR: chhwres . . . . . . . . . . . . . . . . . . . . . 8-48
List resources with lshwres command . . . . . . . . . . . . . . . . . . . 8-50
DLPAR troubleshooting: Symptoms (1 of 2) . . . . . . . . . . . . . . . . 8-52
DLPAR troubleshooting: Symptoms (2 of 2) . . . . . . . . . . . . . . . . 8-54
Check rsct subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . 8-57
DLPAR troubleshooting: Network setup . . . . . . . . . . . . . . . . . . 8-59
Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-61
Exercise: Dynamic resource allocation . . . . . . . . . . . . . . . . . . 8-63
Unit summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-65

Appendix A. Checkpoint solutions . . . . . . . . . . . . . . . . . . . . . A-1
Instructor course overview
This course is an introduction to performing system administration in an IBM Power Systems environment. It provides students with an overview of basic LPAR terminology and of the planning and configuration tasks associated with an IBM Power Systems based server. Additionally, the course covers configuration rules for partitions running either the AIX or Linux operating system.

This is a three-day course with checkpoint questions and hands-on exercises at the end of each unit to check the students’ understanding of the materials.

This course has been designed to support V7R7.2.0 of the Hardware Management Console (HMC) software and AIX 7.1.
Course strategy
The purpose of the course is to give new IBM customers the knowledge to configure and manage partitions. No previous knowledge of logical partitioning is expected of the students.

The course design strategy is to define basic terms and concepts related to logical partitioning, cover key configuration information for the HMC, and explain how to configure the logical partitioning options.

Each unit contains a list of review questions on the last page. Each morning, spend the first 20-30 minutes using these questions to review the material from the units covered the previous day.
Course description
Duration: 3 days
Purpose
Learn how to perform system administration in an IBM Power Systems environment. Learn the skills needed to become an effective administrator on IBM's POWER6-based systems that support logical partitioning (LPAR). Learn about the features of PowerVM Editions (Advanced POWER Virtualization - APV) and how to configure and manage LPARs running AIX V7.1 using the Hardware Management Console (HMC).
individuals, and IBM business partners who implement LPARs on IBM System p systems.

experience.

prerequisite can be met by attending TCP/IP for AIX System Administrators (AN21).
• Describe important concepts associated with managing POWER6 processor-based systems, such as logical partitioning, dynamic partitioning, virtual devices, virtual processors, virtual consoles, virtual local area network (VLAN), and shared processors
• Describe the features of the PowerVM editions
• Use the System Planning Tool to plan an LPAR configuration
• Describe the functions of the HMC
• Configure and manage the HMC, including users and permissions, software, startup and shutdown, remote access features, network configuration, security features, HMC backup and restore options, and the HMC reload procedure
• Describe the rules associated with allocating resources, including dedicated processors, processing units for Micro-Partitions, memory, Logical Host Ethernet Adapter, and physical I/O for AIX and Linux partitions

interface (GUI) and HMC commands

• Interpret physical and AIX location codes and relate them to the key hardware components
• Power on and power off the POWER6 or POWER7 based system
• Use the HMC to back up and restore partition data
• Perform dynamic LPAR operations
Agenda
Day 2
(01:15) Exercise 3: Exploring the HMC V7 interface
(00:45) Unit 4: Hardware Management Console maintenance
(00:30) Exercise 4: HMC and managed system maintenance
(00:45) Unit 5: System power management
(00:45) Exercise 5: System power management
(01:00) Unit 6: Planning and configuring logical partitions
Day 3
(00:30) Unit 6: Planning and configuring logical partitions (continued)
(01:00) Exercise 6: Configuring logical partitions
(01:15) Unit 7: Partition operations
(01:00) Exercise 7: Partition operations
(01:15) Unit 8: Dynamic LPAR infrastructure
(01:15) Exercise 8: Dynamic resource allocation
Unit 1. Introduction to partitioning
Estimated time
This unit introduces basic partitioning concepts and features on IBM POWER6 and POWER7 processor-based servers.
What you should be able to do
After completing this unit, you should be able to:
• Describe the following terms:
- Dynamic logical partitioning
- Hardware Management Console (HMC)
• Describe the overall process for configuring partitions
• List references for IBM POWER5, POWER6, and POWER7 processor-based system partitioning
• Checkpoint
http://publib16.boulder.ibm.com/pseries/index.htm
and Configuration Redbook
Figure 1-1. Unit objectives AN112.0
Notes:
The objectives list what you should be able to do at the end of this unit.
© Copyright IBM Corporation 2011
Describe the following terms:
Dynamic logical partitioning
Describe the functions performed by the POWER Hypervisor
Describe the overall process for configuring partitions
Purpose — Review the objectives for this unit.
Details — Explain what we will cover and what the students should be able to do at the end of the unit. This unit introduces terms and concepts. Because this is the first unit, point out the references listed on the front page of this unit. Each unit will have its own list of references.
Additional information —
Notes:
Partition

When a computer system is subdivided into multiple, independent operating system images, those independent operating environments are called partitions. The resources on the system are divided among the partitions. Applications running on a partitioned system do not have to be redesigned for the partitioned environment.

Independent operating environment

Each partition runs its own operating system, which might or might not match the operating systems in other partitions on the same system. Each partition can be started and stopped independently of other partitions.
A partition is the end result of partitioning. Partitioning is the process of subdividing the hardware resources of a computer system into multiple smaller independent environments, each of which can host an operating system. A partition is a full-fledged node with its own system resources. From one to many.
Purpose — Introduce the generic concept of a partition.
Details — Define a partition as a configured set of resources running its own independent operating system image. From security, network, application, and operational perspectives, partitions are like having separate physical systems, with the benefit of having all resources in the same system for maximum configuration flexibility.
Additional information —
Figure 1-3. Physical partition AN112.0
Notes:
This page defines physical partitioning, which we will contrast
with logical partitioning on
the next visual. IBM Power Systems support logical partitions
(LPARs), not physical
partitions (PPARs).
PPARs
The visual shows an example of a system with three system building
blocks, each made up
of a number of processors, an amount of memory, and a number of I/O
slots. These three
building blocks can be configured into one, two, or three
partitions, each made up of one or
more entire building blocks. The size of the building blocks
depends on the vendor and
system model.
Adding or removing resources
To add or remove resources in a PPAR environment, entire building
blocks must be added
or removed. For example, if more memory is needed, you might have
to add more
processors and I/O slots.
Blocks contain groups of processors, memory, and I/O slots (CPU, memory, and I/O).
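The whole-building-block constraint described above can be sketched in a few lines of Python. This is purely illustrative (the block sizes are invented, and no IBM interface is involved): any resource request is rounded up to entire blocks, so asking for a little more memory also brings processors and I/O slots along.

```python
import math

# Hypothetical building block; the sizes below are invented for illustration.
BLOCK = {"cpus": 8, "memory_gb": 64, "io_slots": 20}

def ppar_allocation(memory_gb_needed):
    """In a PPAR system, resources can only be added in whole building blocks."""
    blocks = math.ceil(memory_gb_needed / BLOCK["memory_gb"])
    return {resource: amount * blocks for resource, amount in BLOCK.items()}

# Requesting only 16 GB still consumes a full block: 8 CPUs and 20 I/O slots.
print(ppar_allocation(16))   # {'cpus': 8, 'memory_gb': 64, 'io_slots': 20}
print(ppar_allocation(100))  # two blocks: 16 CPUs, 128 GB, 40 I/O slots
```

This mirrors the quarter-end example the instructor notes mention: the workload needs a bit more memory, but the granularity of the system forces a much larger change.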
Purpose — This visual shows an example of a PPAR system.
Details — Students might have heard the phrase physical partitions from the UNIX server market. These are partitions based on physical system units, such as a system board, which contain a certain amount of processing power and memory and a number of I/O slots. IBM’s POWER5+, POWER6, and POWER7 processor-based systems support LPARs. We mention PPARs here so we can compare them with LPARs.
Review the PPAR definition and use the visual to illustrate. Point out that the example in the visual can be configured with only one, two, or three partitions because there are three system building blocks.
The drawback is the lack of flexibility in configuring a system. If you just want to add a few more processors or a bit more memory to get past an increase in workload (for example, quarter-end processing), you must add or remove an entire building block of resources, which might be difficult to do.
Additional information —
Figure 1-4. Logical partition AN112.0
Notes:
Logical partitioning is the ability to make a single system run as
if it were two or more
systems. Each partition represents a division of resources in your
computer system. The
partitions are logical because the division of resources is virtual
and not along physical
boundaries. There are, however, configuration rules that must be
followed.
For the rest of the course, logical partitions will be called LPARs
or partitions for brevity.
Implemented in firmware
The system uses firmware to allocate resources to partitions and
manage the access to
those resources. Although there are configuration rules, the
granularity of the units of
resources that can be allocated to partitions is very flexible. You
can add just a small
amount of memory (if that is all that is needed) without a
dependency on the size of the
memory cards and without having to add more processors or I/O slots
that are not needed.
Firmware refers to underlying software running on a system independently from any operating system. On IBM System p systems and IBM Power Systems, this includes the software used by the flexible service processor (FSP) and the POWER Hypervisor.
Logical partition

A partition is the allocation of system resources to create logically separate systems within the same physical footprint. A logical partition exists when the isolation is implemented with firmware. It is not based on physical system building blocks, which provides configuration flexibility.

[Visual: four LPARs on one system, each with its own local time: SYS1 (1:00, Japan), SYS2 (10:00, USA), SYS3 (11:00, Brazil), SYS4 (12:00, UK).]
Instructor notes:
Purpose — Differentiate the generic term partition from the more specific term logical partition.
Details — Describe what is meant by logical partitions. The visual shows different time zones and country flags to show that because LPARs are separate operating environments, system variables, such as the time zone, can be set in each operating system of each LPAR. This topic is discussed more on the next visual.
Additional information — Other vendors might also have LPARs, but each vendor implements them differently. Some vendors refer to LPARs as soft partitions and physical partitions as hard partitions.
Transition statement — Let’s look at the characteristics of a partition that are independent from other partitions on the same computer system.
Figure 1-5. Partition characteristics AN112.0
Notes:
Characteristics of a partition
The visual illustrates how each partition is independent. As stated
before, each partition
runs its own operating system. The version of the operating system
can be any valid
version that is supported on the system. Other things you would
expect on a physically
separate system are also separate for partitions. There are even
independent virtual
consoles.
What is the same between partitions on the same system?
Each partition shares a few physical system attributes, such as the
system serial number,
system model, and processor feature code with other partitions. In
addition, you can
choose to share other hardware, such as Small Computer System
Interface (SCSI)
devices, among partitions.
The planar in an I/O drawer is also an example of a component that
is used by all LPARs
that use an adapter on that planar.
Operating system
Console
Resources
Other things expected in a stand-alone operating system
environment, such as:
Problem logs
Performance characteristics
Network identity
Instructor notes:
Purpose — Describe characteristics of a partition that are
independent from other
partitions.
Details — List the things that are the same for LPARs on the same
computer system.
Additional information — Licensed Internal Code is a
term used for i5/OS partitions, and
Open Firmware is a term used for AIX/Linux
partitions.
Transition statement — The term resource is used to
specify the hardware that can be
configured into a partition.
Figure 1-6. Partition resources AN112.0
Notes:
Resources
Resources are the system components that are configured
into partitions.
The maximum number of partitions is related to the total amount of
resources on the
system. For example, a system with eight processors can be
configured with a total of 80
partitions (if there are sufficient resources). If a system has
enough resources, the upper
limit on the Hardware Management Console (HMC) is 254.
Minimum amount of resources
Each partition must be configured with at least 128 MB of memory,
one tenth of a physical
processor, and enough I/O devices to provide a boot disk and a
connection to a network.
Memory
Memory is allocated in units known as the logical memory block
(LMB). The default LMB
size is variable, depending on the total amount of physical memory
installed, and might be
as small as 16 MB. A partition can be configured with as little as
128 MB of memory or as
much as all of the available memory.
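The memory rules above can be sketched as a small validation function. This is a minimal illustration only; the LMB size and available-memory figures used here are assumed example values, and the real default LMB size depends on the installed memory.

```python
# Sketch of the memory-allocation rules described above: at least
# 128 MB, a whole number of logical memory blocks (LMBs), and no
# more than the memory still available. Example values are assumed.
MIN_PARTITION_MEMORY_MB = 128

def valid_memory_allocation(requested_mb, lmb_size_mb=16, available_mb=4096):
    """Return True if the requested partition memory satisfies the
    configuration rules sketched in the text."""
    return (requested_mb >= MIN_PARTITION_MEMORY_MB
            and requested_mb % lmb_size_mb == 0
            and requested_mb <= available_mb)

print(valid_memory_allocation(128))   # True: the minimum allocation
print(valid_memory_allocation(120))   # False: below 128 MB
print(valid_memory_allocation(130))   # False: not a whole number of 16 MB LMBs
```

A real configuration is entered through the HMC, which enforces these rules for you; the function above only models them.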
Partition resources
Resources are allocated to partitions:
Memory allocated in units as small as the LMB size
Dedicated whole processors or shared processing units
Individual I/O slots, including virtual devices
Some resources can be shared:
Virtual devices
Host Ethernet adapter
Some core system components are inherently shared.
[Slide graphic: partitions, including Linux, with memory (M), slot (S), and processor (P) resources]
Processors
A partition is configured with either dedicated whole processors or
shared processors.
Shared processors are allocated in processing units. 1.0 processing
units is equivalent to
the processing power of one processor. Partitions are configured
with at least 0.1
processing units or with as much as the equivalent of all the
available physical processors.
After the 0.1 minimum is satisfied, additional processing units can
be allocated in quantities
of 0.01 processing units.
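As a rough sketch, the shared-processor rules above (a 0.1 minimum, then 0.01-unit increments, up to the available capacity) might be checked like this. The function name and the pool capacity are hypothetical example values, not HMC interfaces.

```python
def valid_processing_units(units, pool_capacity):
    """Check a shared-processor allocation against the rules in the
    text: at least 0.1 processing units, allocated in whole steps of
    0.01 units, and no more than the pool's capacity."""
    hundredths = round(units * 100)               # work in 0.01-unit steps
    return (hundredths >= 10                      # the 0.1-unit minimum
            and abs(units * 100 - hundredths) < 1e-9  # whole 0.01 steps only
            and units <= pool_capacity)

print(valid_processing_units(0.1, 8.0))    # True: the minimum allocation
print(valid_processing_units(1.75, 8.0))   # True: 175 steps of 0.01
print(valid_processing_units(0.05, 8.0))   # False: below the 0.1 minimum
```

The tolerance comparison avoids false negatives from binary floating-point representation of values such as 0.1.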
I/O slots
I/O resources are allocated to partitions at the slot level. At a
minimum, you must configure
a partition with enough I/O resources to include the boot disk and
a network connection.
Shared devices
With software called the Virtual I/O Server installed in a special
partition, Ethernet and
storage devices can be configured to be shared between
partitions.
Secure environments and shared I/O
Highly secure environments can choose not to take full advantage of
the cross-partition
sharing of devices. Even subtle visibility (for example, different
response times from a
shared resource) can be considered a covert channel of
communication. For this reason,
by design, all shared or virtual resources must be consciously
enabled.
Shared core resources
Some devices can be shared because they are core resources to the
entire system. For
example, even though you have allocated separate amounts of memory
to different
partitions, that memory can be on the same memory card. Likewise,
processors, I/O
drawers, and other core system components are shared. Because of
this, a hardware
failure might bring down more than one partition and could
potentially bring down the entire
system; however, there are many fault containment, in-line
recovery, and redundancy
features of the system to minimize unrecoverable failures.
Instructor notes:
Purpose — Define what is meant by the term resources, and give
examples of each type.
Details — Explain that resources are what is allocated to
partitions. Some resources are
dedicated, and some can be shared.
Mention that even though processing power or an amount of memory is
configured in a
partition, it does not mean that the underlying hardware is
dedicated to that partition. For
example, memory allocated to several partitions can physically
reside on the same memory
chip.
Additional information —
Transition statement — Let’s look at an example of how resources
might be divided
between partitions.
Notes:
This visual shows how a system’s resources might be divided among
four partitions. With
LPARs, resources can be allocated based on computing needs. You do
not need to
allocate all resources to partitions; that is, some resources might
remain unallocated until
they are needed.
Dynamic logical partitioning
With dynamic logical partitioning (DLPAR), resources can be added,
removed, or moved
between partitions as computing needs change without restarting the
partitions.
Flexibility to allocate resources depending on need
With DLPAR operations, resources can be moved, removed, or added
without restarting the partition.
Processors
Memory
Instructor notes:
Purpose — Show an example of dividing system resources among
multiple partitions.
Introduce the term DLPAR.
Details — Show how resources are divided among partitions with
little dependency on
underlying hardware architecture.
Additional information —
Transition statement — Next we will see the POWER5+ processor-based
servers that
support partitions.
Notes:
The visual lists IBM POWER5+ processor-based servers that support
LPAR. Please check
www.ibm.com for the current list.
IBM System p servers are IBM's previous generation of products for
AIX and Linux clients; they have since been replaced by IBM Power
Systems.
System p models and resources remain available for AIX and Linux
clients.
IBM System p5 entry, mid-range, and high-end servers
Example models:
Instructor notes:
Purpose — List example models that support partitions.
Details — This is a quick reference so that students see some of
the models that are
available and how they fit together in the product line. Because
this list grows regularly,
point students to www.ibm.com for the most up-to-date information.
There are also models
that end in Q, for Quad.
The pictures of systems in the visual are representative of some of
the models.
Additional information —
Transition statement — Next, we will see the POWER6 processor-based
servers that
support partitions.
Notes:
The visual lists IBM POWER6 processor-based servers that support
LPAR. Please check
www.ibm.com for the current list.
IBM Power Systems is a single, energy-efficient, and easy-to-deploy
platform for all of your
UNIX, Linux, and IBM System i applications.
IBM Power Systems
IBM Power 520
IBM Power 550
IBM Power 560
IBM Power 570
IBM Power 575
IBM Power 595
Instructor notes:
Details —
Additional information —
Transition statement — Next, we will see the POWER7 processor-based
servers that
support partitions.
Notes:
The visual lists IBM POWER7 processor-based servers that support
LPAR. Please check
www.ibm.com for the current list.
IBM Power Systems
Instructor notes:
Purpose — List example models that support partitions.
Details — At the time of writing (March 2010) only the POWER7
systems listed in the slide
are available.
Additional information — In February 2010, IBM announced the first
of a new generation
of smarter systems with the Power 750, Power 770, and Power 780
servers. In April 2010,
IBM announced new blade servers that exploit the innovative
multi-core design of the new
POWER7 processor.
IBM BladeCenter PS700, PS701, and PS702 Express are built on the
proven foundation of
the IBM BladeCenter family and feature four, eight, or 16 POWER7
cores. The
BladeCenter PS700, PS701, and PS702 Express blades are ideal for
the virtualization of
application server workloads that demand both cost and energy
efficiency in a flexible
package.
POWER7 systems combined with IBM systems software, middleware, and
storage deliver
unprecedented performance optimized for both transactional and
throughput workloads.
Transition statement — Now that you know a bit about what a
partition is, let’s look at why
companies are using them.
Notes:
Sometimes large symmetric multiprocessing (SMP) systems are used to
run several different
applications. This can be an efficient use of resources in some
cases. In other cases,
separate physical computers are used to run individual
applications. This page describes
reasons why it might be better to create separate partitions rather
than run everything in
the same operating system image or use separate physical computers
for each application.
Capacity management
You might want to use partitions to dynamically reallocate
resources when the system
workload changes. For example, if at the end of each month, one
partition runs
CPU-intensive batch jobs, you can reconfigure the system monthly to
take processors from
another lower priority partition and loan them to the partition
with the batch application.
Consolidation
Using partitions gives you the ability to reallocate expensive
resources and manage them
all with one interface (the HMC). You can reallocate processors,
memory, or any I/O
adapter (and thus device) by reconfiguring the partitions or by
using dynamic partition operations. All of the resources are
located within one server, potentially reducing the amount of floor
space needed.
Efficient use of resources
Separate workloads
Guaranteed resources
Data integrity
Test on same hardware
The ability to have virtual Ethernet and virtual I/O devices is a
benefit to using POWER5, POWER6, and POWER7 processor-based
partitions.
Application isolation
Partitioning isolates an application from others in different
partitions. For example, two
applications on one SMP system could interfere with each other or
compete for the same
resources. One decision support database query could bring a
second, interactive
application to a frustrating snail’s pace. By separating the
applications into their own
partitions, they cannot interfere with each other. Also, if one
application were to hang or
crash the operating system, this would not have an effect on the
other partitions.
Also, with partitions, one server can support multiple applications
that use different time
zones or that run on different operating system release
levels.
Partitions can also be used to comply with application license
requirements. For example, a four-processor partition could be
created to comply with an application license that only allows a
four-processor server. Check the vendor's application license
requirements carefully.
Merge production and test environments
Many customers utilize smaller development systems to develop,
test, and migrate
applications. These smaller systems might not be the same hardware
or have the same
software, devices, or infrastructure as the real production system.
These issues can be
largely avoided by utilizing a partition on the same system as the
production applications
for development and testing. This also protects the production
partition from the activities
on the test partition. When the testing is complete, the resources
used for the development
partition can be reallocated to the production partition.
Partitions have an exclusive set of resources
The amount of resources allocated to a partition is generally fixed
(although there are some
exceptions that can be configured). This could be a benefit or a
disadvantage. On a
symmetric multiprocessor system running multiple applications
within the same operating
system image, there might be greater sharing of resources than on a
partitioned system
where the applications are isolated in their own partitions.
Virtual Ethernet and virtual I/O devices
On POWER5, POWER6, and POWER7 processor-based servers, you can configure a
virtual Ethernet
connection that acts like an Ethernet connection but is really a
memory-to-memory
connection with another partition. Virtual I/O devices allow
partitions to use physical
devices that are owned by another partition. Also, the host
Ethernet adapter (or IVE), which
is available on all IBM Power Systems (except high-end servers),
provides logical Ethernet
adapters that communicate directly to LPARs. This reduces the
possible overall interaction
with the POWER Hypervisor.
Additional information —
Transition statement — Next, we will talk about the software
licensing models. As you
might expect, it is more complicated for partitioned systems.
Figure 1-12. Software licensing AN112.0
Notes:
Software licenses on a partitioned system
A partition runs its own separate copy of an operating system and
programs. Language
feature codes, security, user data, most system values, and
software release and fixes,
also known as program temporary fixes (PTFs), are unique for each
partition.
For third-party software, you will have to discuss with the vendor
how to license packages
on a partitioned system.
Effect of using sub-processors
Operating systems and many other applications use the number of
processors as the basis for licensing. If you use shared processors
and take advantage of sub-processor allocations, IBM rounds up to
the nearest whole number when calculating the required software
licenses; however, IBM will not charge you for more software
licenses than the number of physical processors on your server.
If you plan to run different operating systems (that is, AIX 6 or
Linux) on the same server,
you need licenses for each individual operating system.
Software licensing
Licensing is per operating system and is based on processing
power.
Partial processor and shared processor pool features affect
licensing.
Third-party application provider licenses will vary.
IBM hardware
Operating systems
Other software
The licenses are based on processing power. For example, on an
eight-processor system, you might have licenses for
seven processors for AIX 6 and one processor for Linux. If you
reconfigure your partitions
so that, for example, you have 7.2 processors in the partition
running AIX 6 and your
licenses only allow seven processors, you will receive out-of-
compliance messages. Either
contact IBM to purchase more licenses or reconfigure the partition
to use less processing
power to stop these messages.
Effect of using the shared processor pool
For now, it is sufficient to understand that a software license
must be purchased to cover
the maximum processing power that your partition might have at any
point. POWER5,
POWER6, and POWER7 advanced processing features allow partitions
using the shared
processor pool to optionally use excess processing power from other
partitions. This
configuration feature is called uncapped processor allocation.
Because a partition that uses
uncapped processor allocation could potentially use all of the
processors in the shared
processing pool, the license for the software in that partition
must take this into account.
The POWER6 and POWER7 multiple shared processor feature can reduce
the number of
software licenses by putting a limit on the amount of processors
that an uncapped partition
can use.
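The rounding and uncapped-pool rules described above can be illustrated with a small estimator. This is only a sketch of the rules as stated in the text: the function name, pool size, and cap values are hypothetical, and actual licensing terms are always set by the software vendor.

```python
import math

def licensed_processors(entitled_units, uncapped=False,
                        pool_size=8, mspp_cap=None):
    """Estimate the whole-processor licenses a partition needs under
    the rules sketched in the text: round a sub-processor entitlement
    up to the next whole processor; an uncapped partition must be
    licensed for everything it could use (its whole shared pool, or
    the multiple-shared-processor-pool cap when one is set)."""
    if uncapped:
        reachable = mspp_cap if mspp_cap is not None else pool_size
        return math.ceil(reachable)
    return math.ceil(entitled_units)

print(licensed_processors(7.2))                         # 8: 7.2 rounds up
print(licensed_processors(0.5, uncapped=True))          # 8: whole pool reachable
print(licensed_processors(0.5, uncapped=True, mspp_cap=2))  # 2: MSPP limit
```

This mirrors the out-of-compliance example above: a partition entitled to 7.2 processing units needs licenses for eight processors, not seven.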
Instructor notes:
Purpose — Describe the licensing model for POWER5+, POWER6, and
POWER7
processor-based LPAR-capable systems.
Details — The new subprocessing and shared processor pool features
complicate
licensing on POWER5, POWER6, and POWER7 systems. You must round up
to the next
whole processor number for licenses, and licenses are affected by
the use of uncapped
shared processors.
The multiple shared processor pools feature of POWER6 and POWER7
systems can affect
licensing costs. This will be covered later.
Additional information —
Transition statement — The next page introduces the functions of
the POWER
Hypervisor.
Notes:
Introduction to the POWER Hypervisor
Partitions are isolated from each other by firmware (underlying
software) called the
POWER Hypervisor. The names POWER Hypervisor and hypervisor will be
used
interchangeably in this course.
Virtual memory management by the hypervisor
There is no program access permitted between partition memory and
I/O memory.
Software exceptions and crashes are contained within a partition.
The hypervisor controls
the page tables used by partitions to ensure a partition has access
to only its own physical
memory segments. It uses a physical memory offset value for each
partition so that the
operating system in each partition can continue to use memory
address zero as its starting
point.
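A toy model of the offset scheme may make this concrete. It is purely illustrative: real hypervisor address translation uses partition page tables and is far more involved than a single additive offset, and all values here are assumed example numbers.

```python
def partition_real_to_physical(partition_offset_mb, real_addr_mb,
                               partition_size_mb):
    """Toy model of the scheme in the text: each operating system
    addresses its memory starting at zero, and the hypervisor applies
    a per-partition offset to reach the actual physical location.
    Addresses outside the partition's range are rejected, which is
    how isolation is enforced."""
    if not 0 <= real_addr_mb < partition_size_mb:
        raise ValueError("address outside this partition's memory")
    return partition_offset_mb + real_addr_mb

# Example: a partition that starts at physical 2048 MB and owns 1024 MB.
print(partition_real_to_physical(2048, 0, 1024))    # 2048: its "address zero"
print(partition_real_to_physical(2048, 512, 1024))  # 2560
```

The key point the model captures is that every partition's operating system sees memory starting at address zero, while the hypervisor keeps the physical ranges disjoint.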
Virtual console support
The hypervisor provides input/output streams for a virtual console
device that can be
presented on the HMC.
The POWER Hypervisor is firmware that provides:
Virtual memory management:
Controls page table and I/O access
Manages real memory addresses versus offset memory addresses
Virtual console support
Security and isolation between partitions:
Partitions allowed access only to resources allocated to them
(enforced by the POWER Hypervisor)
Shared processor pool management
[Slide graphic: LPAR 1 through LPAR 4 separated by security and isolation barriers]
Security and isolation between partitions
Besides managing virtual memory, the hypervisor also ensures that a
partition accesses
only devices allocated to it. It also clears memory, reinitializes
processors, resets processor
registers, and resets I/O devices when devices are allocated to a
partition (statically or
dynamically).
Always active on POWER5, POWER6, and POWER7-based servers, the
POWER
Hypervisor is responsible for dispatching the LPAR workload across
the shared physical
processors. The hypervisor creates a shared processor pool from
which it allocates virtual
processors to the LPARs as needed.
Micro-partitioning (or shared processing) allows LPARs to share the
processors in shared
processor pools. Each LPAR that uses shared processors is assigned
a specific amount of
processor power from its shared processor pool.
On POWER6 and POWER7-based systems there is support for multiple
shared processor
pools (MSPPs).
Purpose — Describe the functions of the POWER Hypervisor.
Details — Describe the purpose of the hypervisor. You can use a
traffic cop as an analogy.
The operating systems must go through the hypervisor to access
resources. The
hypervisor controls access to these resources to make sure security
is not compromised.
Additional information —
Figure 1-14. Hardware Management Console AN112.0
Notes:
HMC description
The HMC is a PC-based console that is available in a desktop or a
rack-mount model. It
runs a customized version of Linux with a Java-based management
application. The user
can only access the management application, and no additional
applications can be
installed. A second HMC can be connected to a single managed system
for redundancy.
Multiple managed systems can be managed by a single HMC.
There are desktop and rack-mount models of HMCs. Desktop models
have the machine
type and model numbers like 7310-C0x. Rack-mount models have
machine type and
model numbers like 7310-CRx.
The newer machine types HMC 7042-C0x and 7042-CRx (desktop and
rack-mount) are shipped
with POWER6 processor-based systems. The 7042 is preloaded with HMC
Version 7 code
(and only V7 code is supported).
The older HMC model 7315 is not supported with POWER6/HMC V7.
LPAR configuration and operation management
Capacity on demand (CoD) management
Service tools
Remotely accessible
Desktop Rack mount
POWER7 processor-based systems require HMC V7.7.1.0. HMC
V7.7.1.0 is supported on
IBM Power Systems with POWER7, POWER6, or POWER5 processors.
Check the IBM HMC support website for the latest information about
the HMC hardware
and software:
Remote access to the HMC functions
Remote access to the HMC Version 7 application is provided by using
a web browser from
a remote workstation. By default, remote browser access to the HMC
is enabled. In
addition, there are extensive HMC command-line controls accessible
through the use of
the Secure Shell (SSH).
HMC: Independent from the managed system and its partitions
The managed system refers to the POWER5, POWER6, or POWER7 system
being
managed by the HMC. Although the HMC is necessary for some
functions, such as
configuring LPARs, it will not affect the operation of any
partitions if something goes wrong.
The partition configuration information is not only kept on the
HMC, but it is also kept in
Non-Volatile RAM (NVRAM) on the managed system; therefore, if the
HMC were to crash,
the partitions would continue to run. In fact, you can remove the
HMC, replace it with
another, and then download the partition data from the NVRAM on the
managed system
without affecting the running of the partitions.
Service focal point
If a hardware error occurs, that error can be reported by multiple
partitions. To prevent
confusion, the HMC is also used as a service focal point for error
reporting. An application
on the HMC serves as a filter for errors to ensure IBM service
calls are placed only once
per actual hardware error. Alternatively, a partition configured as
the service partition can
collect system errors and report them to IBM.
Instructor notes:
Purpose — Describe the basic functions of the HMC.
Details — Describe when the HMC is required and the basic functions
that the HMC
provides.
Point out the link to the IBM technical support website for more
information about HMC
hardware and software.
Introduce the term managed system.
Additional information — If asked, the screenshots and the HMC
features in this entire
course are based on the HMC Version 7 Release 3.2 software.
Transition statement — The next page shows an example screen from
the HMC
application.
Notes:
• Partitions are independent operating environments, and their
resources are managed
by the hypervisor.
• NVRAM is used on the managed system to hold a copy of the
partition configuration so
that if the HMC or the network fails, the partitions can continue
to run and even reboot if
necessary.
• Partitions are configured and managed on the HMC, which is
a separate Linux PC
console. A copy of the partition configuration data is also kept on
the HMC (in addition
to the primary copy in NVRAM).
• The HMC is connected to the managed system through an
Ethernet connection to the
service processor. The service processor is a separate, independent
processor that
provides hardware initialization during system load, monitoring of
environmental and
error events, and maintenance support.
[Slide graphic: the managed system with Non-Volatile RAM and the Hypervisor]
Instructor notes:
Purpose — Review the basics of how partitions, the POWER
Hypervisor, and the HMC
relate to each other.
Details — Describe how the LPAR configuration information is not
only kept on the HMC
but is also kept on the managed system.
The HMC connects to the managed system through an Ethernet
connection to the service
processor.
Additional information —
Transition statement — The first half of this unit covered the
basics. The next half
introduces advanced partition features.
Notes:
This visual lists the advanced partition features covered in the
rest of this unit.
The dynamic resource allocation, the advanced processor
configuration options, the virtual
I/O, and the CoD are features available on POWER5, POWER6, and
POWER7
processor-based servers.
Live Partition Mobility
Live Partition Mobility (LPM) is a POWER6 and POWER7-based feature
that enables you
to migrate running AIX and Linux partitions and their hosted
applications from one physical
server to another without disrupting the infrastructure services.
The migration operation,
which takes just a few seconds, maintains complete system
transactional integrity. The
migration transfers the entire system environment, including
processor state, memory,
attached virtual devices, and connected users.
Instructor notes:
Purpose — Introduce the listed advanced partition features.
Details — This page provides a transition from the first half of
this unit, which covered
partition basics, and the second half, which talks about more
advanced features.
Set expectations that this part of the unit will simply introduce
terms and concepts.
Additional information —
Notes:
Dynamic partitioning refers to the ability to move resources
between partitions without
shutting down the partitions. The opposite of dynamic partitioning
is static partitioning,
where new configurations are only used when a partition is
reactivated.
DLPAR operations do not weaken the security or isolation between
LPARs. A partition sees
only resources that have been explicitly allocated to the partition
along with any potential
connectors for additional virtual resources that might have been
configured.
Resources are reset when moved from one partition to another.
Processors are
reinitialized, memory regions are cleared, and adapter slots are
reset.
DLPAR operations
You can add, remove, and move resources among partitions. The
resources include memory regions, processing units, and I/O slots.
These operations can be performed from the HMC graphical application
or from the HMC command line.
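The bookkeeping behind a DLPAR move can be illustrated with a small, purely hypothetical Python model. The partition names and sizes below are invented; real DLPAR changes are driven from the HMC interface or its command line (for example, the chhwres command), not from application code like this:

```python
# Hypothetical model of a DLPAR memory move between two running
# partitions. Illustrative only; not an IBM API.

class Partition:
    def __init__(self, name, memory_mb, processing_units):
        self.name = name
        self.memory_mb = memory_mb
        self.processing_units = processing_units

def dlpar_move_memory(src, dst, amount_mb):
    """Move memory between running partitions, mirroring the rules in
    the text: the resource is removed from the source, reset by the
    hypervisor (memory regions are cleared), then added to the
    destination. Neither partition restarts."""
    if amount_mb > src.memory_mb:
        raise ValueError("source partition does not own that much memory")
    src.memory_mb -= amount_mb
    # ...the hypervisor clears the moved memory regions here...
    dst.memory_mb += amount_mb

lpar_a = Partition("lpar_a", memory_mb=8192, processing_units=2.0)
lpar_b = Partition("lpar_b", memory_mb=4096, processing_units=1.0)
dlpar_move_memory(lpar_a, lpar_b, 2048)
print(lpar_a.memory_mb, lpar_b.memory_mb)  # 6144 6144
```

Note that only ownership changes; the moved resource carries no state from its previous partition, which is the isolation guarantee described above.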
Dynamic partitioning
• DLPAR is the ability to add, remove, or move resources between
partitions without restarting the partitions.
• Resources include memory regions, processing units, and I/O slots.
• Virtual devices can be added and removed.
• Security and isolation between LPARs are not compromised.
• A partition sees its own resources, plus any available virtual
resource connectors.
• Resources are reset when moved.
• Applications might or might not be DLPAR-aware.
• DLPAR allows you to react to changing resource needs.
Virtual devices can be added or removed, but they cannot be moved
directly from one partition to another. You can, however, dynamically
change the configuration that specifies what type of virtual adapter
is in a virtual slot.
With the Integrated Virtual Ethernet (IVE) adapter, you can add or
remove logical host Ethernet adapters (LHEAs), and you can even move
LHEAs from one partition to another dynamically.
Applications might not be DLPAR-aware
Most applications are unaware of the underlying resource specifics,
but some applications and utilities, particularly monitoring tools,
might inhibit some DLPAR operations if they bind to processors or pin
memory. Many resource-aware applications have been rewritten in
recent years to support DLPAR operations. Check with your sales
representative about your applications.
Details — List the operations and the resources involved with
DLPAR.
Additional information —
Figure 1-18. How DLPAR works
Notes:
1. The DLPAR request originates at the HMC.
2. The request is made over the network to the POWER
Hypervisor.
3. Partition A and Partition B communicate with the HMC about the
DLPAR operation
through a process running on both partitions.
4. The POWER Hypervisor makes the resource allocation change.
As you can see in the visual, DLPAR operations are dependent on a
functioning
network between the HMC and the managed system and between the HMC
and the
partitions. The link between the HMC and the service processor is
used to initiate the
operation and process the hardware add or remove operation. The
link to the partition
or partitions from the HMC is used to notify the operating system
of the hardware
changes, enabling it to take actions as required.
If the network is down, either between the HMC and the managed
system or between
the partitions and the HMC, DLPAR operations cannot occur.
Purpose — Show what is involved in a DLPAR operation.
Details — Review the components involved and what they do during a
DLPAR operation.
The point to make on this slide is that there are two separate
network connections that
must be fully functional to make DLPAR operations work. First, the
link between the HMC
and the service processor is used to initiate the operation and
process the hardware
add/remove operation. The link to the partition or partitions from
the HMC is used to notify
the operating system of the hardware changes, enabling it to take
actions as required.
Additional information —
Figure 1-19. Processor concepts
Notes:
This visual summarizes the various concepts concerning POWER5,
POWER6, and POWER7 processors. We will see in the next slides that
POWER6 and POWER7 processor-based systems support other features,
such as multiple virtual shared processor pools.
Along the bottom are whole, physical processors installed in the
computer system. These
are configured in various ways into the three partitions.
Processing units, partial processors, and logical processors
Partitions are allocated in dedicated whole processors or in
processing units. A processing
unit is the equivalent of 1.0 of a physical processor. A partition
can be configured with as
little as 0.1 processing units, and after that minimum is
satisfied, processing units can be
allocated in units of 0.01 processing units.
The terms micro-partitioning and partial processors refer to the
ability to allocate less than a whole physical processor to a
partition.
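The allocation rules above (a 0.1 processing unit minimum, then 0.01 increments) can be sketched in Python. This is an illustrative check only, not part of any IBM tooling:

```python
# Sketch of the shared-processor allocation rules described in the
# text: a micro-partition needs at least 0.1 processing units, and
# beyond that, capacity is assigned in 0.01-unit increments.

def is_valid_allocation(processing_units: float) -> bool:
    # Work in hundredths to avoid floating-point surprises.
    hundredths = round(processing_units * 100)
    if abs(processing_units * 100 - hundredths) > 1e-6:
        return False          # not a multiple of 0.01
    return hundredths >= 10   # at least 0.10 processing units

print(is_valid_allocation(0.1))    # True
print(is_valid_allocation(0.05))   # False (below the 0.1 minimum)
print(is_valid_allocation(1.75))   # True
print(is_valid_allocation(0.125))  # False (not a 0.01 increment)
```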
Deconfigured
A physical processor can be automatically deconfigured from the
system because of
detected errors or user deconfiguration.
Inactive, CoD processors
Inactive processors can be added as a dedicated or shared processor
through the
activation of a CoD license key. CoD is an option that can be
purchased. You will learn
more about this in a few moments.
Shared versus dedicated processors
Dedicated processors are physical processors that are allocated to
a partition and are
dedicated to that partition. Other partitions will not use any time
slices on those processors
while that partition is active. Shared processors are put into a
shared pool. Partitions use processing units from the pool as needed,
within configuration guidelines.
Virtual processors
If you were to allocate 2.0 processing units to a partition, the
partition might get bits of
execution time on up to 20 physical processors. This concept is
known as virtual
processors.
Virtual processors are the representation of the assigned
processing units (defined as
processors) in the operating system. To run threads, the operating
system dispatches
threads to the virtual processors. The hypervisor takes the threads
and dispatches them to
physical processors. The number of virtual processors in the
operating system limits the number of physical processors the
hypervisor can use for that partition during each dispatch cycle. The
processing units are spread across the virtual processors.
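The relationship can be sketched numerically. One assumption here, drawn from the partition minimum stated earlier, is that each virtual processor must be backed by at least 0.1 processing units; the numbers are otherwise illustrative:

```python
# Sketch: a partition's entitled capacity is spread across its virtual
# processors, and the VP count caps how many physical processors the
# hypervisor can touch for that partition in one dispatch cycle.
# Assumption (not stated verbatim in the text): at least 0.1
# processing units per virtual processor.

def capacity_per_vp(entitled_units: float, virtual_processors: int) -> float:
    per_vp = entitled_units / virtual_processors
    if per_vp < 0.1:
        raise ValueError("too many virtual processors for this entitlement")
    return per_vp

# 2.0 processing units could back anywhere from a few up to 20 virtual
# processors, i.e. time slices on up to 20 physical processors.
print(capacity_per_vp(2.0, 4))   # 0.5
print(capacity_per_vp(2.0, 20))  # 0.1
```

This is why 2.0 processing units, as in the example above, can translate into execution time on up to 20 physical processors.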
Logical processors
If simultaneous multithreading is enabled (available with AIX Version
5.3 or later), each virtual processor is presented as two logical
processors on POWER5 and POWER6 processor-based systems and as up to
four logical processors on POWER7 processor-based systems.
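The multiplication is simple enough to state as a one-line sketch (illustrative only):

```python
# Sketch: logical processors seen by the operating system = virtual
# processors x SMT threads per processor (2-way SMT on POWER5 and
# POWER6, up to 4-way on POWER7).

SMT_THREADS = {"POWER5": 2, "POWER6": 2, "POWER7": 4}

def logical_processors(virtual_processors: int, generation: str) -> int:
    return virtual_processors * SMT_THREADS[generation]

print(logical_processors(4, "POWER6"))  # 8
print(logical_processors(4, "POWER7"))  # 16
```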
Instructor notes:
Purpose — Review each of the processor concepts.
Details — Try to touch on each of these lightly, starting from the
deconfigured and working
your way up to virtual. These will be covered in detail in later
units. At this time, it is only
important to understand each term at a basic level. The slide shows
two logical processors
per virtual processor. Point out that with POWER7, up to four
logical processors can be
configured per virtual processor.
Notes:
Micro-partitioning is defined as the ability to create a
partition and allocate fractional
amounts of processing capacity to it.
Processing power can be allocated to partitions using dedicated
processors or shared
processors. For shared processor partitions, processing power can
be allocated in the
granularity of 0.01 processing units. A partition must have a
minimum of 0.1 processing
units.
The visual shows seven partitions being run on a processing pool of
four physical
processors. The diagram represents a single 10 millisecond (ms)
interval. Each partition
gets a percentage of the execution dispatch time on the processors
in the pool, based on
its capacity assignment. Do not worry; we will come back to this
later. This page is here to
give you some basic terminology.
Time sliced sub-processor allocations are dispatched according to
demand and entitled capacity. This example shows one 10 ms time
slice, seven running partitions (Partition 1 through Partition 7),
and four physical processors.
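The arithmetic behind the figure can be sketched numerically. The entitlements below are invented; they are chosen only so that they fit within the pool's four processing units:

```python
# Illustrative arithmetic: seven micro-partitions sharing a pool of
# four physical processors over one 10 ms dispatch window. Each
# partition's entitled capacity translates into guaranteed
# processor-milliseconds per window.

DISPATCH_WINDOW_MS = 10
POOL_PROCESSORS = 4

# Made-up entitlements in processing units; their sum may not exceed
# the pool's capacity.
entitlements = {
    f"partition{i}": pu
    for i, pu in enumerate([1.0, 0.75, 0.5, 0.5, 0.5, 0.5, 0.25], start=1)
}

assert sum(entitlements.values()) <= POOL_PROCESSORS

for name, pu in entitlements.items():
    ms = pu * DISPATCH_WINDOW_MS
    print(f"{name}: {ms} processor-ms per 10 ms window")
```

A partition entitled to 0.5 processing units, for example, is guaranteed 5 processor-milliseconds of execution per 10 ms window, possibly spread across several physical processors.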
Instructor notes:
Purpose — Introduce the concept of virtual processors and time
slicing.
Details — If students seem quite confused when you get to this
page, stick to the basics:
now you can allocate a partial processor to a partition and those
partial processors are
actually time slices, which can be spread across multiple physical
processors.
Additional information —
Transition statement — Next, we will provide an overview of the
multiple shared pools feature available on POWER6 and POWER7
processor-based systems.
Notes:
By default, all physical processors that are not dedicated to
specific LPARs are grouped
together in a shared processor pool. You can assign a specific
amount of the processing
capacity in this shared processor pool to each LPAR that uses
shared processors.
With POWER6 systems, you can define multiple shared processor pools
(MSPPs) and assign shared partitions to any of them. With this
capability, the set of processors installed on a POWER6 system that
is used to run micro-partitions is called the physical shared
processor pool. The system administrator can assign a set of
micro-partitions to a specific shared processor pool to control the
processor capacity that those partitions consume from the physical
shared processor pool.
Reserved and maximum processing unit values
The administrator can activate a shared processor pool by setting a
maximum processing
unit value and, optionally, a reserved processing unit
value for that pool. The maximum
processing unit value limits the total number of processing units
that can be used by the set
of LPARs in the shared processor pool. The reserved processing unit
value is the number
of processing units that are reserved for the use of uncapped LPARs
within the shared
processor pool.
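The two pool-level controls can be sketched as a small bookkeeping model. The class, pool name, and numbers are invented for illustration; the sketch assumes that member entitlements plus the reserve must fit under the pool maximum:

```python
# Hypothetical model of a shared processor pool with a maximum and a
# reserved processing unit value, as described in the text. Not an
# IBM API; the admission rule below is a simplifying assumption.

class SharedProcessorPool:
    def __init__(self, name, maximum_units, reserved_units=0.0):
        self.name = name
        self.maximum_units = maximum_units    # cap on the pool's total use
        self.reserved_units = reserved_units  # set aside for uncapped LPARs
        self.entitled = []                    # member partition entitlements

    def add_partition(self, entitled_units):
        committed = sum(self.entitled) + self.reserved_units
        if committed + entitled_units > self.maximum_units:
            raise ValueError("pool maximum exceeded")
        self.entitled.append(entitled_units)

pool = SharedProcessorPool("test_pool", maximum_units=4.0, reserved_units=0.5)
pool.add_partition(1.5)
pool.add_partition(1.0)
print(sum(pool.entitled))  # 2.5
```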
Instructor notes:
Purpose —
Details — Introduce the concept of MSPPs. Point out the default
shared pool ID 0. Also
mention that 63 additional virtual shared pools can be
activated.
Additional information — Just give an overview of the MSPPs
concept. Inform the
students that this concept is detailed in unit two.
Notes:
Virtual I/O basics
Each partition, by default, is configured to support 10 virtual I/O
slots, and each slot can be
populated with a virtual adapter instance, which allows partitions
to share devices. It also
provides virtual Ethernet connections between partitions on the
same system. More virtual
slots can be configured.
Virtual adapters interact with the operating system like any other
adapter card except they
are not physically present. Virtual adapters are recorded in system
inventory and
management utilities.
As with physical I/O adapters, a virtual I/O adapter must first be
deconfigured from the
operating system to perform a DLPAR remove operation.
Virtual I/O slots:
• Configurable for each partition
• Ethernet, SCSI, or Fibre Channel
• Can be dynamically added or removed, just like physical I/O slots
• Cannot be dynamically moved to another partition
Virtual Ethernet
Virtual Ethernet provides the same function as using an Ethernet
adapter and is
implemented through high-speed, inter-partition, in-memory
communication. There are two
options with virtual Ethernet:
• A virtual Ethernet connection can be configured between two LPARs
on the same
managed system. There is no actual physical adapter. This provides
a fast network
connection between the partitions.
• A virtual Ethernet connection can be configured on one
partition to connect to a network
using a shared Ethernet adapter (SEA) of another partition (called
a hosting partition or
a Virtual I/O Server) on that managed system.
Virtual SCSI
The virtual SCSI (VSCSI) option provides access to block storage
devices in other
partitions (that is, device sharing). It uses the client/server
model where the server exports
disks, logical volumes, files, or other SCSI-based devices, and the
client sees the imported
device as a standard SCSI device.
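The client/server model can be sketched as a toy Python model. Every name here (classes, adapter, device names) is invented for illustration; it only mirrors the export/import relationship described above:

```python
# Toy model of the VSCSI client/server relationship: the server
# partition exports backing devices over a server adapter, and the
# client sees each one as an ordinary SCSI disk.

class VscsiServerAdapter:
    def __init__(self):
        self.exports = {}

    def export(self, backing_device, as_name):
        # The backing device may be a whole disk, a logical volume,
        # a file, or another SCSI-based device.
        self.exports[as_name] = backing_device

class VscsiClientAdapter:
    def __init__(self, server):
        self.server = server

    def discovered_disks(self):
        # The client simply sees standard SCSI devices.
        return sorted(self.server.exports)

vhost = VscsiServerAdapter()
vhost.export("datavg_lv01", as_name="hdisk0")
client = VscsiClientAdapter(vhost)
print(client.discovered_disks())  # ['hdisk0']
```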
Virtual Fibre Channel
A virtual Fibre Channel adapter is a virtual adapter that provides
client LPARs with a Fibre
Channel connection to a storage area network through the Virtual
I/O Server LPAR. The
Virtual I/O Server partition provides the connection between the
virtual Fibre Channel
server adapters and the physical Fibre Channel adapters assigned to
the Virtual I/O Server
partition on the managed system.
Details — Describe the benefits of using virtual I/O devices.
Additional information — The virtual channel adapters and NPIV are
introduced in the
next slide comments.
Transition statement — The next page shows how virtual Ethernet and
VSCSI devices
are implemented.
Figure 1-23. Virtual I/O example
Notes:
Client/server relationship
Virtual I/O devices provide for sharing of physical resources, such
as adapters and SCSI
devices, among partitions. Multiple partitions can share physical
I/O resources, and each
partition can simultaneously use virtual and physical (natively
attached) I/O devices. When
sharing SCSI devices, the client/server model is used to designate
partitions as users or
suppliers of resources. A server makes a VSCSI server adapter
available for use by a
client partition. A client configures a VSCSI client adapter that
uses the resources provided
by a VSCSI server adapter.
If a server partition providing I/O for a client partition fails,
the client partition might continue
to function, depending on the significance of the hardware it is
using. For example, if the
server is providing the paging volume for another partition, a
failure of the server partition
will be significant to the client.