Active Memory Sharing Georgia IBM POWER User Group January 2013
IBM PowerVM
Active Memory Sharing
One Customer’s Experience
Asa Hendrick Technology Engineer
Martin Mihalic Technology Engineer
The following documentation is based on one customer's
experience with IBM PowerVM Active Memory Sharing (AMS).
It contains suggestions and recommendations based on that
experience.
• The presenters are neither employed nor compensated by IBM.
• No warranty is expressed or implied by IBM or the presenters.
• Use at your own risk.
• Mileage may vary.
DISCLAIMER
The IBM developerWorks Wiki for Active Memory Sharing (AMS):
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/Power%20Systems/page/Active%20Memory%20Sharing%20%28AMS%29
Shared Memory Overview:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphat/iphatsmoverview.htm
IBM Active Memory Sharing InfoCenter:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/arecu/arecuams.htm
PowerVM Virtualization Active Memory Sharing (REDP-4470-01):
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=redp-4470-01
Reference Links
Supported On
IBM POWER6 and POWER7 Frames
(VIO Server Required)
Allows over-subscribing of physical memory
(similar to how shared processor pools work)
Increases memory utilization in a managed system
Memory is automatically re-allocated between participating partitions
Active Memory Sharing (AMS)
PROS
Works on both POWER6 and POWER7
Very low overhead
CONS
Paging device (disk) management
Monitoring
Why AMS (vs AME)
Employees A, B, C have company cell phones.
Each employee is allocated a fixed 500 minutes per month.
The Company is billed per Allocated minute ($0.10/min).
Emp Alloc Usage Unused
A 500 300 200
B 500 400 100
C 500 200 300
total 1500 900 600
1500 min Alloc * $0.10/min = $150 charged
$150 over 900 min Usage = $0.166/min actual cost (60% util).
AMS Cell Phone Plan Analogy (dedicated)
If instead the company was billed for a shared pool of
1500 minutes, it may be possible to add another employee
to the plan (at no extra cost, other than the phone):
Emp Est Usage Unused
A 500 300 n/a
B 500 400 n/a
C 500 200 n/a
D 500 400 n/a
pool 1500 1300 200
1500 min Alloc (pool) * $0.10/min = $150 charged
$150 for 1300 min Usage = $0.115/min actual cost (86% util).
AMS Cell Phone Plan Analogy (shared)
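The arithmetic behind both plans can be checked with a short shell sketch (the rates and minutes are the illustrative numbers from the slides; the overage penalties are ignored here):

```shell
# Cell-phone analogy math from the slides (illustrative numbers only).
plan_cost_cents() {   # $1 = allocated minutes, $2 = rate in cents/min
  echo $(( $1 * $2 ))
}
util_pct() {          # $1 = minutes used, $2 = minutes allocated (integer %)
  echo $(( 100 * $1 / $2 ))
}

# Dedicated plan: 3 x 500 min allocated, 900 min actually used
ded_cost=$(plan_cost_cents 1500 10)   # 15000 cents = $150
ded_util=$(util_pct 900 1500)         # 60%

# Shared plan: the same 1500-min pool now also covers employee D, 1300 min used
shr_cost=$(plan_cost_cents 1500 10)   # still $150
shr_util=$(util_pct 1300 1500)        # 86%

echo "dedicated: \$$(( ded_cost / 100 )) for ${ded_util}% utilization"
echo "shared:    \$$(( shr_cost / 100 )) for ${shr_util}% utilization"
```

The point of the analogy survives the arithmetic: the same spend buys more delivered minutes once the allocation is pooled.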
For both Dedicated and Shared plans, there is still a
small penalty (say $0.01/min) when an employee goes
over their Allocated minutes.
For Shared plan, there is also a small penalty (say
$0.02/min) if the Company goes over the plan total,
but that is offset by the benefit of getting additional
capacity (Emp D) and/or better utilization for the same
cost.
AMS Cell Phone Plan Analogy (comparison)
Shared Memory Pool
The portion of real physical memory that is virtualized by the
hypervisor to allocate physical memory to the shared memory
partitions.
Shared Memory Partition
A partition whose memory is associated with a shared memory pool.
Paging Virtual I/O Server
A VIOS partition that provides paging services for a shared memory
pool and manages the paging devices for shared memory partitions
associated with the shared memory pool.
Paging Device
Physical or logical devices associated with a shared memory pool (via
Paging VIOS) that provide the paging space for shared memory.
AMS Terminology
On a shared memory partition, two parameters define the
memory configuration:
Logical memory
Quantity of memory that the operating system manages and can
access (Desired). Logical memory pages that are in use may be
backed by either physical memory or a pool's paging device.
Memory weight
Relative number used by the hypervisor to prioritize the physical
memory assignment from the shared memory pool to the logical
partition.
AMS Terminology
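As a rough intuition for memory weight, a higher relative weight translates into a larger share of contended pool memory. The real hypervisor allocation algorithm is more sophisticated than this, so treat the following purely as a toy proportional-share sketch with made-up numbers:

```shell
# Toy proportional-share sketch of memory weight (NOT the actual PowerVM
# algorithm): when the pool is contended, an LPAR's share grows with its
# weight relative to the sum of all participating weights.
weighted_share_mb() {   # $1 = LPAR weight, $2 = sum of all weights, $3 = pool MB
  echo $(( $3 * $1 / $2 ))
}

pool_mb=16384                     # 16 GB pool
total_w=$(( 128 + 128 + 64 ))     # three LPARs with weights 128, 128, 64
share_hi=$(weighted_share_mb 128 $total_w $pool_mb)
share_lo=$(weighted_share_mb  64 $total_w $pool_mb)
echo "weight 128 -> ${share_hi} MB, weight 64 -> ${share_lo} MB"
```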
Virtualization Control Point (HMC, IVM, IBM Director)
Provides the administrator interface to the Active Memory
Sharing functions and communicates with the management
interfaces of the Active Memory Sharing environment.
Active Memory Sharing Manager (AMSM)
A hypervisor component that manages the shared memory pool
and the memory of the partitions associated with the shared
memory pool. The AMSM allocates the physical memory blocks
that comprise the shared memory pool.
Virtual Asynchronous Service Interface (VASI)
A virtual device that allows communications between the
Virtual I/O Server and the hypervisor. In an AMS environment, this
device is used for handling hypervisor paging activity.
AMS Components
Collaborative Memory Manager (CMM)
An operating system feature that provides hints about memory
page usage that the PowerVM hypervisor uses to target pages to
manage the physical memory of a shared memory partition.
• Loaning memory pages
• Stealing aged pages
• Freeing pages already saved to a paging device and loaning them to
the hypervisor
AMS Components
AMS Components - Illustrated
Virtual Memory Pages
• LPAR logical memory is divided into virtual memory pages.
• Hypervisor is responsible for mapping these virtual memory
pages into physical memory (or to disk if needed).
Memory Affinity
• There is no explicit binding of memory to processors.
• There is no binding of physical memory to an LPAR's logical
memory.
Memory Sharing Concepts
Security
• The Active Memory Sharing Manager guarantees that the
contents of stolen pages are not readable by any other
partition, by zeroing the contents of a page before it is
allocated to another partition.
Memory Sharing Concepts
Page Loaning
• By loaning pages, the operating system reduces the activity of
the hypervisor, improving performance of the memory pool.
• Instead of performing only page stealing, the hypervisor can also
request that some logical memory pages be freed, and the operating
system chooses which pages are most appropriate to free.
• The AIX operating system allows tuning of the algorithm that
selects the logical pages to be loaned.
Memory Sharing Concepts
There are three possible shared memory scenarios that
can occur depending on the environment and workloads:
Non over-commit
The amount of real memory available in the shared memory
pool is enough to cover the total amount of logical memory
(working set) that could be in use.
Logical over-commit
The total logical (desired) memory can be higher than the
physical memory, however the working set memory never
exceeds the physical memory.
Memory Sharing Concepts
Physical over-commit
• The working set memory requirements exceed the physical
memory in the shared pool.
• Logical (desired) memory has to be backed by both the physical
memory in the pool and by the paging devices.
• The hypervisor backs the excess logical memory using paging
devices that are accessed through its paging Virtual I/O Server.
Memory Sharing Concepts
AMS Pool and LPARs
Allows Over-Subscribed Memory
• Allows over-subscribing of physical memory which increases
memory utilization in a managed system.
• Allows creation of more partitions than would be otherwise
possible.
Dynamic Memory Allocation
• Automatically adjusts physical memory dispatched based on
partition workload activity, such as:
― Mixed workloads with different time of day peaks
• (e.g. CRM by day, batch at night)
― Grouped workloads with sporadic memory requirements
― Multiple application stacks that are not all active at the same time
AMS Value Proposition
Autonomic
• Memory automatically re-allocated among participating partitions
• No user intervention required after initial configuration
Active Memory Aware
• Only the actively referenced memory (working set) of a workload's
memory footprint needs to stay resident
“Free”
• AMS feature is included with PowerVM Enterprise Edition – no
additional license required
AMS Value Proposition
Similarities with Shared Processor Features
Differences with Shared Processor Features
Active Memory Sharing
PLANNING
AMS - Requirements
* These disks must be equal to or greater in size than the Maximum memory parameter of
each LPAR (+10% if using LPAR Suspend/Resume).
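The sizing rule above can be expressed as a small helper. Rounding the +10% result up to a whole GB is our assumption, not something the slides state:

```shell
# Minimum paging-device size implied by the rule above: at least the LPAR
# Maximum memory, plus 10% when LPAR Suspend/Resume is in use.
# (Rounding up to a whole GB is an assumption, not from the slides.)
min_paging_gb() {   # $1 = Maximum memory (GB), $2 = 1 if Suspend/Resume, else 0
  if [ "$2" -eq 1 ]; then
    echo $(( ($1 * 110 + 99) / 100 ))   # ceiling of +10%
  else
    echo "$1"
  fi
}

min_paging_gb 24 0   # 24 GB LUN is enough
min_paging_gb 24 1   # 27 GB (24 * 1.10 = 26.4, rounded up)
```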
Once the requirements are met and the Shared Memory Pool is
configured, these constraints apply:
• The shared memory partitions cannot be activated until the HMC has
established an RMC connection with the Virtual I/O Server.
• By design, the HMC prevents assignment of a physical I/O slot to a
shared memory partition.
• LPAR 4K memory page size only. Large pages are not supported.
• The shared memory pool size cannot be larger than the amount of
available system memory minus memory in use by dedicated memory
partitions.
Constraints
• The shared memory pool's Paging VIOS 1 and Paging VIOS 2 assignments
cannot be changed (the pool must be deleted and recreated).
• LPAR primary and secondary paging VIOS assignments cannot be
changed dynamically.
• Once a dedicated memory LPAR is activated, that dedicated memory is
unavailable to the shared memory pool, even if the LPAR is not
running.
• LPAR memory mode cannot be changed dynamically from Dedicated
memory to Shared memory. Profile change and Shutdown/Activate is
required.
Constraints
• At LPAR activation, if either of the paging VIOS servers is
unavailable, the partition will fail to activate (unless an
override is specified).
• At LPAR activation, if there is no available paging device
equal to or larger than the Maximum Memory setting, the
partition will fail to activate.
Constraints for LPAR Activation
RECOMMENDATIONS
Use VIO physical volumes (rather than Logical Volumes)
• provides a slightly shorter instruction path to the physical storage
• simplifies the performance monitoring of the device
Use standard sizes rather than custom size for each LPAR
Place the paging device disk LUNs on a separate adapter pair
where possible
Perform normal disk and adapter I/O tuning for paging devices to
optimize access time (queue depth, etc)
Paging Devices on VIO Server(s)
An appropriate number of paging devices must be created and
assigned to dual VIO servers and to the shared memory pool
• At least one per shared memory LPAR
• Size equal to or greater than LPAR Maximum Memory value
Paging devices can be added dynamically to shared memory pool
Paging devices can be removed dynamically from the shared
memory pool (if not allocated to an LPAR)
Shared Memory Pool Paging Devices
Should be based on a percentage of the sum of Desired Memory for the
LPARs using AMS
Pool Size is limited to system 'Available Memory'
• Physical system memory less hypervisor memory and dedicated LPAR
memory
The shared memory Pool Size and Maximum Pool Size can be
changed dynamically
Maximum Pool Size can be set higher than Physical System Memory
• Not really necessary since it can be changed dynamically
• Wastes hypervisor memory (256 MB per 16 GB for page tables)
Shared Memory Pool Sizing
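The page-table cost of an oversized Maximum Pool Size is easy to quantify from the 256 MB per 16 GB figure above. Charging per started 16 GB block is our assumption about the granularity:

```shell
# Hypervisor page-table overhead for the Maximum Pool Size, per the slide:
# 256 MB of hypervisor memory per 16 GB of maximum pool size.
# (Charging per *started* 16 GB block is an assumption for this sketch.)
pagetable_overhead_mb() {   # $1 = maximum pool size in GB
  echo $(( ($1 + 15) / 16 * 256 ))
}

pagetable_overhead_mb 38   # 768 MB (three 16 GB blocks)
pagetable_overhead_mb 64   # 1024 MB
```

This is why setting Maximum Pool Size far above what you will ever use simply wastes hypervisor memory.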
Non over-commit
• The amount of real memory available in the shared memory pool is
always enough to cover the total LPAR Desired Memory.
Logical over-commit
• The total LPAR Desired Memory can be higher than the shared memory
pool, but the working set never exceeds the shared memory pool.
Physical over-commit
• The working set memory requirements exceed the physical memory in
the shared pool. (AMS paging occurs)
Memory Commitment Scenarios (again)
A desirable goal is to avoid physical over-commit
• Monitor shared memory pool and paging device usage
Logical over-commit (Desired total > Shared Memory Pool)
is not only acceptable but is desirable.
• This is where physical memory utilization begins to increase
Physical over-commit actually means high memory efficiency
• If hypervisor memory paging or paging device utilization is high,
consider adjusting shared memory pool size
• If there is concern for specific LPARs, the profile Memory Weight value
or the AIX ams_loan_policy tunable can be adjusted
Memory Over-Commitment
When using Active Memory Sharing, paging can occur either in the
operating system (AIX) or the shared Memory pool (Hypervisor).
AIX paging
Dedicated memory partition
• occurs when the working memory set needed exceeds the size of the
physical memory (Desired Memory value) assigned.
Shared memory partition
• occurs when the working memory set exceeds the size of the logical
memory (Desired Memory value) assigned to the LPAR.
Non-zero values in the pi or po columns of vmstat output
indicate that AIX is paging for the above reasons
AIX paging vs Hypervisor paging
When using Active Memory Sharing, paging can occur either in the
operating system (AIX) or the shared Memory pool (Hypervisor).
Hypervisor paging
Mostly occurs when the system working memory set needed
exceeds the size of the physical memory in the Shared Memory
pool (physical over-commit)
Non-zero values in the hpi or hpit columns of vmstat -h output
indicate that the hypervisor is paging memory for the shared
memory pool
AIX paging vs Hypervisor paging
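A small awk sketch can flag hypervisor paging by locating the hpi column from the header row of `vmstat -h` output, rather than hard-coding a column position. The sample text below is illustrative only, not real vmstat output:

```shell
# Flag hypervisor paging in `vmstat -h` output by locating the hpi column
# from the header row. The sample below is illustrative, not real output.
sample=' r  b   avm   fre  pi po hpi hpit
 1  0 12000  3000   0  0   0    0
 2  0 12500  1200   0  0  37   12'

hpi_events=$(printf '%s\n' "$sample" | awk '
  /hpi/ && col == 0 { for (i = 1; i <= NF; i++) if ($i == "hpi") col = i; next }
  col > 0 && $col > 0 { n++ }      # count samples with non-zero hpi
  END { print n + 0 }')
echo "samples with hypervisor page-ins: $hpi_events"
```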
AIX interacts with the hypervisor for shared memory handling by
classifying the importance of logical memory pages and by
receiving requests from the hypervisor to loan them
Loaned logical memory pages are kept free by AIX for a longer
time and are not shown in the free statistics of commands such as
vmstat or lparstat.
When the hypervisor needs to reduce the number of physical
memory pages assigned to a logical partition, it first selects loaned
memory pages, with no effect on the logical partition's memory access
time (see above).
When AIX does need to start loaning logical memory pages, it first
selects pages from cached file data.
Loaning of Memory Pages
Default page loaning
• Use file cache pages for loaning
# vmo -o ams_loan_policy=1
Disable page loaning
• Don't loan any pages; only give up working pages if no free or
file cache pages
# vmo -o ams_loan_policy=0
Aggressive page loaning
• When file cache pages are depleted, continue loaning by paging out
working storage pages
# vmo -o ams_loan_policy=2
• If a working page is selected, it is first copied to local paging
space (increases AIX paging)
Page Loan Tuning
Active Memory Sharing
IMPLEMENTATION SEQUENCE
Determine that all prerequisites are met
Determine if enough system physical memory is available
Create paging device disk LUNs
Assign paging device disk LUNs to VIO Server(s)
Create the shared memory pool via HMC
• Memory allocation
• Paging VIO servers
• Paging devices
Modify profile for the desired LPARs
• Memory Mode: Shared
Shutdown/Activate the desired LPARs to use shared memory
Implementation Flow
Plan It, Hollywood
LPAR     ID  Type  Profile  Mode    Wgt  Min  Des  Max  InUse
vios1     1  VIO   normal   DED     n/a    4    4    4
vios2     2  VIO   normal   DED     n/a    4    4    4
lpar061   3  AIX   normal   Shared  128    4    8   12    6.5
lpar062   4  AIX   normal   Shared  128    8   18   24   12.5
lpar063   5  AIX   normal   Shared  128    6   12   16    7.5
lpar051   6  AIX   normal   DED     n/a    4    8   12
lpar052   7  AIX   normal   DED     n/a    8   10   12

MEMORY (GB)
System Memory            128
Dedicated Memory total    26
Shared Memory total       38
Suggested Max pool size   38
Pool size (InUse)         27
Pool size (75%)           29
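As a sanity check, the sizing rows can be recomputed from the three shared-memory LPAR rows of the table. awk handles the fractional in-use values; rounding estimates up to whole GB is our choice:

```shell
# Recompute the pool sizing figures from the shared-memory LPAR rows:
# Desired = 8, 18, 12 GB; InUse = 6.5, 12.5, 7.5 GB.
vals=$(awk 'BEGIN {
  split("8 18 12", des); split("6.5 12.5 7.5", use)
  for (i = 1; i <= 3; i++) { d += des[i]; u += use[i] }
  # Max pool = total Desired; round the two estimates up to whole GB
  printf "%d %d %d", d, int(u + 0.999), int(d * 0.75 + 0.999)
}' </dev/null)
set -- $vals
max_pool=$1; pool_inuse=$2; pool_75=$3
echo "Suggested Max pool ${max_pool} GB, InUse ${pool_inuse} GB, 75% ${pool_75} GB"
```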
Create paging device disk LUNs:
• Based on the Max memory settings above, we could use 30 GB LUN paging
devices for all three LPARs (or 16, 20, 28 GB)
• SAN team creates LUNs and assigns them to WWN for fcs2/fcs3
(unused adapter pair) on VIO servers vios1 and vios2
• Acquire the paging device LUNs on the Paging VIO Servers
$ cfgdev
• Confirm paging devices on VIO servers
$ lsmap -all -ams
• Tune the paging device disk LUNS on the VIO servers
Paging Devices
Create the shared memory pool via HMC:
• Decide Pool size
― If memory usage data is available for the LPARs, use the sum of those values
― If memory usage data is not available, use an estimate
• 75% of total LPAR Desired Memory in this example
• Maximum pool size = 38 GB
• Pool size = 29 GB (75%)
• Paging VIOS 1 = vios1
• Paging VIOS 2 = vios2
• Add paging devices:
― Select the three 30 GB disks from the list
Create Memory Pool
Modify profile to make LPARs use shared memory
• Memory mode = Shared
• Leave Min/Des/Max memory values alone
• Weight = 128
• Paging VIOS:
― lpar061 (id 3, odd): Primary Paging VIOS = vios1 (Paging VIOS 1),
Secondary Paging VIOS = vios2 (Paging VIOS 2)
― lpar062 (id 4, even): Primary Paging VIOS = vios2 (Paging VIOS 2),
Secondary Paging VIOS = vios1 (Paging VIOS 1)
― lpar063 (id 5, odd): Primary Paging VIOS = vios1 (Paging VIOS 1),
Secondary Paging VIOS = vios2 (Paging VIOS 2)
• Custom I/O entitled memory – Leave unchecked
Shutdown/Activate LPAR
Change LPAR Profiles
Active Memory Sharing
Configuration Highlights
Verify Feature Enablement
On the HMC, under Systems Management, select the server where
Active Memory Sharing will be configured:
Configuration Prep
The server's LPARs are now displayed in the right pane. With no LPARs
selected, go to the Tasks section in the bottom pane and click Properties:
Configuration Prep
If the value is not ‘True’, this probably means that the PowerVM Enterprise
Edition enablement key has not been applied yet.
Check System Memory
Make note of Available and Configurable memory values.
Click Cancel to close the Properties window.
Create Shared Memory Pool
Create Shared Memory Pool
Select disk devices to be added to the shared memory pool as Paging Devices:
Change Shared Memory Pool
LPAR Profile
When there is no shared memory pool defined, there is no Shared
Memory option in the LPAR Memory configuration (it just says Dedicated
Memory), which looks like this:
LPAR Profile
Here is a profile Memory tab for an
LPAR on a managed system where the
shared memory pool has been created.
It is still set for Dedicated mode, but
notice the Memory mode options:
I/O entitled memory
The maximum amount of physical memory guaranteed to be
available for I/O mapping. The HMC or IVM calculates the I/O
entitled memory based on the I/O configuration.
LPAR profile Custom I/O entitled memory allows override of the
calculated value.
The I/O entitled memory value can be changed using a dynamic
LPAR operation.
This value RARELY needs to be changed unless monitoring tools are
reporting excessive I/O mapping failure operations (iomaf).
LPAR Profile
LPAR Profile
When Shared is selected, the panel changes to show Shared Memory options:
LPAR Profile – Paging VIOS
RECOMMENDATION
The primary and secondary paging VIO servers operate in active-passive
mode. It is recommended that the primary paging VIOS be alternated to
provide load balancing should heavy AMS paging occur.
• If the Partition ID number is Odd, specify the pool’s Paging VIOS 1 (odd
numbered paging VIO) as the Primary Paging VIOS and the pool’s Paging
VIOS 2 (even numbered VIO) as the Secondary Paging VIOS.
• If the Partition ID number is Even, specify the pool’s Paging VIOS 2 (even
numbered VIO) as the Primary Paging VIOS and the pool’s Paging VIOS 1
(odd numbered VIO) as the Secondary Paging VIOS.
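The odd/even convention above can be captured in a trivial helper (vios1/vios2 are the example names used in this deck):

```shell
# The alternation rule as a one-line helper: odd partition IDs use
# Paging VIOS 1 as primary, even IDs use Paging VIOS 2.
# (vios1/vios2 are the example names from this deck.)
primary_paging_vios() {   # $1 = partition ID
  if [ $(( $1 % 2 )) -eq 1 ]; then echo vios1; else echo vios2; fi
}

primary_paging_vios 3   # lpar061 -> vios1
primary_paging_vios 4   # lpar062 -> vios2
primary_paging_vios 5   # lpar063 -> vios1
```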
LPAR Profile - Activate
When the LPAR is activated, a memory pool paging device is automatically
assigned from the paging devices previously designated (the smallest device
larger than the Maximum memory setting).
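The assignment reads like a "smallest fit" search; a sketch of that rule follows (the name:GB input format is invented for illustration):

```shell
# Sketch of the device-selection rule described above: pick the smallest
# free paging device whose size is >= the LPAR Maximum memory.
# The name:GB input format here is invented for illustration.
pick_paging_device() {   # $1 = LPAR Maximum memory (GB); stdin = name:GB lines
  awk -F: -v need="$1" '
    $2 >= need && (best == "" || $2 < bestsz) { best = $1; bestsz = $2 }
    END { print best }'
}

devices='hdisk4:16
hdisk5:30
hdisk6:20'
chosen=$(printf '%s\n' "$devices" | pick_paging_device 18)
echo "picked: $chosen"   # hdisk6: the smallest device >= 18 GB
```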
View Paging Device Assignment
Active Memory Sharing
MONITORING
Enable Data Collection
Utilization data collection must be enabled in order for the data to be
available. Only one LPAR on the managed system needs to have this enabled;
the first VIO server is usually a good choice.
Under partition Properties go to the Hardware tab, then the Processors tab:
Check Sampling Rate
View Memory Data
View Memory Data
Select a snapshot:
View Memory Data
This is a memory over-commit because the AMS LPARs are requesting 6 GB
and the Pool size is 4 GB, which is 2 GB (50%) over.
Logical or Physical over-commit?
Pool Memory Utilization – HMC CLI
Utilization statistics for the shared memory pool can be obtained by using
the lslparutil command with the mempool resource type:
labroot:~> lslparutil -m P7-750_#2 -r mempool
time=12/02/2010 14:55:03,event_type=sample,resource_type=mempool,sys_time=12/02/2010 14:55:03,curr_pool_mem=4096,lpar_curr_io_entitled_mem=462,lpar_mapped_io_entitled_mem=24,lpar_run_mem=6144,sys_firmware_pool_mem=272,page_faults=9985225,page_in_delay=17920438297
curr_pool_mem = Pool Size
lpar_run_mem = total Partition logical memory (Desired)
These are the same values as shown on the HMC GUI in the
View Utilization >> Shared Memory Pool window.
It is obtained from the same utilization sample snapshot data on the HMC.
Alternatively, the command can be executed to retrieve the past N
minutes (40 in this example):
labroot:~> lslparutil -m P7-750_#2 -r mempool -F time,resource_type,sys_time,curr_pool_mem,lpar_curr_io_entitled_mem,lpar_mapped_io_entitled_mem,lpar_run_mem,sys_firmware_pool_mem,page_faults,page_in_delay --header --minutes 40
Note 1:
The lpar_run_mem value only changes when a shared memory LPAR is
started or stopped.
Note 2:
The values for page_faults and page_in_delay are cumulative, so it is
necessary to calculate deltas to determine how much change is occurring.
Pool Memory Utilization – HMC CLI
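Since the counters are cumulative (Note 2), per-interval deltas are what actually indicate current paging activity. A small awk sketch over the -F output shown above (the sample lines are abbreviated from the next slide; field positions assume the field list above, with page_faults in field 9 and page_in_delay in field 10):

```shell
# Print per-sample deltas of the cumulative page_faults ($9) and
# page_in_delay ($10) columns from `lslparutil ... -F` CSV output.
delta_report() {
  awk -F, 'NR > 1 { print $1, $9 - pf, $10 - pid } { pf = $9; pid = $10 }'
}

# Two illustrative samples (abbreviated): 7238 new faults between them
deltas=$(printf '%s\n' \
  '16:17:31,mempool,16:12:37,6144,462,0,2560,272,39352,33835762' \
  '16:18:31,mempool,16:13:37,6144,462,8,2560,272,46590,41469830' | delta_report)
echo "$deltas"
```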
time,resource_type,sys_time,curr_pool_mem,lpar_curr_io_entitled_mem,lpar_mapped_io_entitled_mem,lpar_run_mem,sys_firmware_pool_mem,page_faults,page_in_delay vvvv
11/09/2010 16:10:26,mempool,11/09/2010 16:05:32,6144,462,0,768,272,39352,33835762
11/09/2010 16:11:27,mempool,11/09/2010 16:06:33,6144,462,0,768,272,39352,33835762
. . .
11/09/2010 16:16:30,mempool,11/09/2010 16:11:36,6144,462,0,768,272,39352,33835762
11/09/2010 16:17:31,mempool,11/09/2010 16:12:37,6144,462,0,2560,272,39352,33835762 << 1st LPAR starts
11/09/2010 16:18:31,mempool,11/09/2010 16:13:37,6144,462,8,2560,272,46590,41469830
11/09/2010 16:19:32,mempool,11/09/2010 16:14:38,6144,462,8,2560,272,46591,41470021
11/09/2010 16:20:33,mempool,11/09/2010 16:15:38,6144,462,8,2560,272,46591,41470021
11/09/2010 16:21:33,mempool,11/09/2010 16:16:39,6144,462,8,4352,272,46591,41470021 << 2nd LPAR starts
11/09/2010 16:22:34,mempool,11/09/2010 16:17:40,6144,462,16,4352,272,50079,44676255
11/09/2010 16:23:35,mempool,11/09/2010 16:18:40,6144,462,16,4352,272,50079,44676255
11/09/2010 16:24:35,mempool,11/09/2010 16:19:41,6144,462,16,4352,272,50086,44677328
11/09/2010 16:25:35,mempool,11/09/2010 16:20:41,6144,462,16,6144,272,50086,44677328 << 3rd LPAR starts
11/09/2010 16:26:36,mempool,11/09/2010 16:21:42,6144,462,24,6144,272,56156,50367493
. . .
11/09/2010 16:49:50,mempool,11/09/2010 16:44:56,6144,462,24,6144,272,56327,50402273 << normal activity
11/09/2010 16:50:51,mempool,11/09/2010 16:45:56,6144,462,24,6144,272,56328,50402459
11/09/2010 16:51:52,mempool,11/09/2010 16:46:58,6144,462,24,6144,272,56328,50402459
. . .
11/09/2010 16:54:53,mempool,11/09/2010 16:49:59,6144,462,24,6144,272,66351,54400919
11/09/2010 16:55:54,mempool,11/09/2010 16:51:00,6144,462,24,6144,272,66352,54401118
^^^^^ ^^^^^^^^ cumulative values
Pool Memory Utilization – HMC CLI
Unfortunately, there is currently no single parameter available from
the HMC that shows shared memory usage.
A scriptlet like the one below can gather the needed data:
for ID in `list_of_shared_mem_lpar_ids`
do
    lslparutil -m P6-520_#1 -r lpar --filter lpar_ids=$ID -F lpar_id,lpar_name,mem_mode,curr_mem,phys_run_mem
done
3,lpar229_ams,shared,2048,1500
4,lpar230_ams,shared,2048,1711
5,lpar231_ams,shared,2048,1398
At this moment the shared memory LPARs are using 4609 MB of the
6144 MB pool, or 75% utilization.
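That calculation can be automated by totaling the per-LPAR output against the pool size. A sketch: the pool_util helper name is ours, and phys_run_mem is the fifth field of the -F list above.

```shell
# pool_util POOL_MB: sum phys_run_mem (5th field of the lslparutil -F output
# shown above) and report MB used and the percent of the pool consumed.
pool_util() {
  awk -F',' -v pool="$1" \
    '{ used += $5 } END { printf "%d MB of %d MB (%.0f%%)\n", used, pool, 100 * used / pool }'
}

# With the three LPARs listed above:
printf '%s\n' \
  '3,lpar229_ams,shared,2048,1500' \
  '4,lpar230_ams,shared,2048,1711' \
  '5,lpar231_ams,shared,2048,1398' \
  | pool_util 6144
# -> 4609 MB of 6144 MB (75%)
```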
Pool Memory Utilization (calculation)
Shared memory pool utilization data for an LPAR can be listed via the
HMC command line using the lslparutil command with the lpar
resource type:
labroot:~> lslparutil -m P7-750_#2 -r lpar --filter lpar_ids=15
time=12/02/2010 15:43:28,event_type=sample,resource_type=lpar,sys_time=12/02/2010 15:43:27,time_cycles=3497959218072648,lpar_name=lpar042,lpar_id=15,curr_proc_mode=shared,curr_proc_units=1.0,curr_procs=2,curr_sharing_mode=uncap,curr_uncap_weight=128,curr_shared_proc_pool_name=DefaultPool,curr_shared_proc_pool_id=0,curr_5250_cpw_percent=0.0,mem_mode=shared,curr_mem=2048,curr_io_entitled_mem=154,mapped_io_entitled_mem=9,phys_run_mem=1632,run_mem_weight=200,mem_overage_cooperation=0,entitled_cycles=1104728269535095,capped_cycles=13706811655052,uncapped_cycles=3396108135806,shared_cycles_while_active=0
curr_mem = Desired Memory
phys_run_mem = Active Physical Memory
mem_overage_cooperation = amount of memory being loaned via the hypervisor
LPAR utilization of Shared Memory (HMC CLI)
Specific shared memory pool utilization data for an LPAR can be listed
using the -F flag:
labroot:~> lslparutil -m P7-750_#2 -r lpar --filter lpar_ids=15
-F mem_mode, curr_mem, phys_run_mem, run_mem_weight, mem_overage_cooperation
shared,2048,1624,200,0
So, this LPAR currently has 1624 MB of physical memory
backing 2048 MB of logical memory.
LPAR utilization of Shared Memory (HMC CLI)
The --minutes flag can also be used with this command to gather
previous samples:
labroot:~> lslparutil -m P7-750_#2 -r lpar --filter lpar_ids=15
-F sys_time, mem_mode,curr_mem, phys_run_mem, run_mem_weight, mem_overage_cooperation --minutes 5
12/02/2010 15:39:24,shared,2048,1624,200,0
12/02/2010 15:40:25,shared,2048,1628,200,0
12/02/2010 15:41:26,shared,2048,1630,200,0
12/02/2010 15:42:27,shared,2048,1632,200,0
12/02/2010 15:43:28,shared,2048,1624,200,0
So now we can see the active memory usage fluctuating while the
shared memory LPAR is running.
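A window of those samples can be reduced to a min/max summary with a filter like the one below. This is a sketch: pmem_range is our name, and phys_run_mem is the fourth field of the -F list above.

```shell
# pmem_range: report the minimum and maximum phys_run_mem (field 4)
# seen across a set of lslparutil samples.
pmem_range() {
  awk -F',' 'NR == 1 { min = max = $4 }
             { if ($4 < min) min = $4; if ($4 > max) max = $4 }
             END { printf "min=%d MB max=%d MB\n", min, max }'
}

# With the five samples above:
printf '%s\n' \
  '12/02/2010 15:39:24,shared,2048,1624,200,0' \
  '12/02/2010 15:40:25,shared,2048,1628,200,0' \
  '12/02/2010 15:41:26,shared,2048,1630,200,0' \
  '12/02/2010 15:42:27,shared,2048,1632,200,0' \
  '12/02/2010 15:43:28,shared,2048,1624,200,0' \
  | pmem_range
# -> min=1624 MB max=1632 MB
```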
LPAR utilization of Shared Memory (HMC CLI)
Shared memory pool paging devices and assignments can also be
listed via the HMC command line, using the lshwres command with
the --rsubtype pgdev flag:
Full details (just one disk shown):
labroot:~> lshwres -m P7-750_#2 -r mempool --rsubtype pgdev
device_name=hdisk20,paging_vios_name=lpar005_vio1,paging_vios_id=1,size=30720,
type=phys,state=Inactive, phys_loc=U5802.001.00H3494-P1-C1-T1-W500507680140B0BE-
L15000000000000, is_redundant=1,redundant_device_name=hdisk20,
redundant_paging_vios_name=lpar006_vio2,redundant_paging_vios_id=2,
redundant_state=Inactive,redundant_phys_loc=U78A0.001.DNWHN7W-P1-C3-T1-
W500507680140B0BE-L15000000000000,lpar_id=none
. . .
. . .
(See Bonus section added to end of presentation for information about paging device redundancy).
List Paging Devices (HMC CLI)
Specific details, comma delimited:
labroot:~> lshwres -m P7-750_#2 -r mempool --rsubtype pgdev --header
-F device_name,paging_vios_name,paging_vios_id,size,type,state,is_redundant, redundant_device_name,redundant_paging_vios_name,redundant_paging_vios_id,redundant_state, lpar_id
device_name,paging_vios_name,paging_vios_id,size,type,state,is_redundant,redundant_device_name,redundant_paging_vios_name,redundant_paging_vios_id,redundant_state,lpar_id
hdisk20,lpar005_vio1,1,30720,phys,Inactive,1,hdisk20,lpar006_vio2,2,Inactive,none
hdisk21,lpar005_vio1,1,30720,phys,Inactive,1,hdisk21,lpar006_vio2,2,Inactive,none
. . .
hdisk34,lpar005_vio1,1,30720,phys,Inactive,1,hdisk34,lpar006_vio2,2,Inactive,none
hdisk35,lpar005_vio1,1,30720,phys,Active,1,hdisk35,lpar006_vio2,2,Active,16
hdisk36,lpar005_vio2,2,30720,phys,Active,1,hdisk36,lpar006_vio1,1,Active,17
hdisk37,lpar005_vio1,1,30720,phys,Active,1,hdisk37,lpar006_vio2,2,Active,15
hdisk38,lpar005_vio1,1,30720,phys,Inactive,1,hdisk38,lpar006_vio2,2,Active,none
The three 'Active' devices are the same three we saw earlier in the HMC GUI
Paging Device example.
List Paging Devices (HMC CLI)
Paging Devices via lsmap
A listing of the shared memory pool paging devices can also be obtained from
the Paging VIOS LPARs.
Use the -ams flag with the lsmap command:
lpar005$ lsmap -all -ams -field paging clientid status redundancy backing -fmt ,
vrmpage0,0,inactive,yes,hdisk20
vrmpage1,0,inactive,yes,hdisk21
. . .
vrmpage14,0,inactive,yes,hdisk34
vrmpage15,16,active,yes,hdisk35
vrmpage16,17,active,yes,hdisk36
vrmpage17,15,active,yes,hdisk37
vrmpage18,0,inactive,yes,hdisk38
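Since only the active entries matter for paging I/O, the lsmap -ams CSV output can be filtered down to the backing hdisks worth monitoring. This is a sketch; the active_pgdevs helper name is ours.

```shell
# active_pgdevs: from 'lsmap -all -ams ... -fmt ,' output (fields:
# paging,clientid,status,redundancy,backing), print the backing hdisks
# of the active entries on one line, ready for viostat.
active_pgdevs() {
  awk -F',' '$3 == "active" { line = line sep $5; sep = " " } END { print line }'
}

# With the listing above:
printf '%s\n' \
  'vrmpage14,0,inactive,yes,hdisk34' \
  'vrmpage15,16,active,yes,hdisk35' \
  'vrmpage16,17,active,yes,hdisk36' \
  'vrmpage17,15,active,yes,hdisk37' \
  | active_pgdevs
# -> hdisk35 hdisk36 hdisk37
```

The resulting list can be passed directly to viostat -disk, as on the next slide.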
Monitoring AMS at VIO Server
Paging Devices via viostat (iostat)
Using the 'active' devices from the lsmap command above:
lpar005$ viostat -disk hdisk35 hdisk36 hdisk37 3 3
(lpar005# iostat -d hdisk35 hdisk36 hdisk37 3 3)
System configuration: lcpu=4 drives=59 paths=236 vdisks=55
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk36 0.0 1.3 0.3 4 0
hdisk35 0.0 1.3 0.3 4 0
hdisk37 0.0 0.0 0.0 0 0
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk36 0.0 0.0 0.0 0 0
hdisk35 0.0 1.3 0.3 4 0
hdisk37 0.0 1.3 0.3 4 0
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk36 0.0 0.0 0.0 0 0
hdisk35 0.0 0.0 0.0 0 0
hdisk37 0.0 0.0 0.0 0 0
Monitoring Paging Devices at VIO Server
Paging Devices via viostat (iostat) – Physical over-commit
lpar005$ viostat -time -disk hdisk35 hdisk36 hdisk37 4 5
(lpar005# iostat -T -d hdisk35 hdisk36 hdisk37 4 5)
Disks: % tm_act Kbps tps Kb_read Kb_wrtn time
hdisk36 0.0 0.0 0.0 0 0 17:20:32
hdisk35 0.2 0.0 0.0 0 0 17:20:32
hdisk37 2.0 3.4 0.9 0 24 17:20:32
Disks: % tm_act Kbps tps Kb_read Kb_wrtn time
hdisk36 0.0 14.4 3.6 0 36 17:20:37
hdisk35 0.0 116.8 29.2 8 284 17:20:37
hdisk37 0.0 123.2 30.8 16 292 17:20:37
Disks: % tm_act Kbps tps Kb_read Kb_wrtn time
hdisk36 6.0 2240.3 555.2 3604 13408 17:20:42
hdisk35 8.2 375.0 79.0 12 2836 17:20:42
hdisk37 1.2 817.0 172.9 48 6156 17:20:42
Disks: % tm_act Kbps tps Kb_read Kb_wrtn time
hdisk36 14.6 9021.3 2015.4 12904 40344 17:20:47
hdisk35 16.2 14173.7 3006.5 292 83368 17:20:47
hdisk37 32.6 30443.4 7328.6 5476 174216 17:20:47
Monitoring Paging Devices at VIO Server
vmstat: The -h option adds hypervisor data. Relevant fields:
mmode = shared if the partition is running in shared memory mode.
mpsz Shows the size of the shared memory pool.
(mmode and mpsz are only displayed if on a shared memory partition)
hpi Shows the number of hypervisor page-ins for the partition.
hpit Shows the time spent in hypervisor paging in milliseconds for the partition.
A hypervisor page-in occurs if a page is being referenced which is not available in
real memory because it was paged out by the hypervisor previously.
If no interval is specified when issuing the vmstat command, the hpi and hpit
values shown are counted from boot time.
pmem Shows the amount of physical memory backing the logical memory (GB).
loan Shows the amount of the logical memory (GB) which is loaned to the
hypervisor. The amount of loaned memory can be influenced through the
vmo ams_loan_policy tunable.
Monitoring AMS from AIX
For example:
# vmstat -h 10
System configuration:
lcpu=2 mem=4096MB ent=1.00 mmode=shared mpsz=4.00GB

kthr    memory              page              faults              cpu             hypv-page
----- ----------- ------------------------ ------------ ----------------------- --------------------
 r  b   avm   fre  re  pi  po  fr  sr  cy  in   sy   cs us sy id wa    pc    ec   hpi  hpit  pmem loan
0 0 998641 961 0 0 0 0 0 0 7 51 149 0 0 99 0 0.00 0.5 3 8 2.58 0.39
1 0 998648 961 0 0 0 0 0 0 6 576 1291 1 1 99 0 0.02 2.5 2666 5361 2.58 0.39
1 0 998648 961 0 0 0 0 0 0 7 945 2027 1 1 98 0 0.04 3.8 3794 8588 2.61 0.39
1 0 998649 961 0 0 0 0 0 0 7 966 2010 1 1 98 0 0.04 3.8 3995 8593 2.66 0.39
1 0 998653 961 0 0 0 0 0 0 4 971 2053 1 1 98 0 0.04 3.8 3876 8566 2.69 0.39
This LPAR has a desired memory setting of 4 GB (mem=4096MB), is in a
4 GB shared memory pool (mpsz), has 2.69 GB of physical memory assigned
by the hypervisor (pmem), and is 'loaning' 0.39 GB of its logical memory
to the hypervisor (loan).
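As a rough cross-check, the logical memory that is neither physically backed (pmem) nor loaned (loan) is the portion subject to hypervisor paging if touched. A sketch; the unbacked helper name is ours.

```shell
# unbacked MEM_GB PMEM_GB LOAN_GB: logical memory that is neither backed
# by physical memory nor loaned to the hypervisor, i.e. the portion that
# would incur hypervisor page-ins when referenced.
unbacked() {
  awk -v m="$1" -v p="$2" -v l="$3" 'BEGIN { printf "%.2f GB\n", m - p - l }'
}

# With the values above (mem=4.00, pmem=2.69, loan=0.39):
unbacked 4.00 2.69 0.39
# -> 0.92 GB
```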
Hypervisor Shared Memory Stats (AIX)
vmstat Flags
-v writes to standard output various statistics maintained by the Virtual
Memory Manager:

# vmstat -v -h
 1048576 memory pages
 1001732 lruable pages
   63429 free pages
       1 memory pools
  160185 pinned pages
    80.0 maxpin percentage
     3.0 minperm percentage
    90.0 maxperm percentage
     1.2 numperm percentage
   12104 file pages
     0.0 compressed percentage
       0 compressed pages
     1.2 numclient percentage
    90.0 maxclient percentage
   12104 client pages
       0 remote pageouts scheduled
       0 pending disk I/Os blocked with no pbuf
       0 paging space I/Os blocked with no psbuf
    2484 filesystem I/Os blocked with no fsbuf
       0 client filesystem I/Os blocked with no fsbuf
       0 external pager filesystem I/Os blocked with no fsbuf
   32930 Virtualized Partition Memory Page Faults
   47296 Time resolving virtualized partition memory page faults
  829437 Number of 4k page frames loaned
      79 Percentage of partition memory loaned
Virtual Memory Manager Stats
The lparstat command has been enhanced to display statistics about
shared memory:
mpsz Shows the size of the memory pool in GB.
iome Shows the I/O memory entitlement in MB.
iomp Shows the number of I/O memory entitlement pools.
hpi Shows the number of hypervisor page-ins.
hpit Time spent in hypervisor paging in milliseconds.
pmem Shows the allocated physical memory in GB.
More Shared Memory Stats (AIX)
This shows the physical memory assigned (pmem) and
hypervisor paging (hpi and hpit).
The remaining columns show the I/O entitled memory statistics:
# lparstat -m 1
System configuration: lcpu=2 ent=1.00 mem=4096MB mpsz=8.00GB iome=77.00MB iomp=9
physb %entc vcsw hpi hpit pmem iomin iomu iomf iohwm iomaf
----- ----- ----- ----- ----- ----- ------ ------ ------ ------ -----
0.68 0.9 238 0 0 4.00 23.7 12.0 53.3 12.7 0
0.21 0.5 215 0 0 4.00 23.7 12.0 53.3 12.7 0
0.20 0.4 208 0 0 4.00 23.7 12.0 53.3 12.7 0
For I/O memory entitlement, the most relevant field is
iomaf (I/O memory allocations failed). If this number is ever non-zero
(rare), then a custom I/O memory entitlement should be used.
More Shared Memory Stats (AIX)
mem_overage_cooperation is the difference between the shared memory
partition's assigned memory over-commitment and its actual
over-commitment. A positive value means the partition is using less
memory than system firmware has requested it to use.
Therefore, a negative value indicates a temporary, local
'over-commit', meaning the LPAR wants to use more memory than the
hypervisor has currently allocated.
while :
do
    date
    lslparutil -m P6-520_#1 -r lpar --filter lpar_ids=3 -F lpar_id,lpar_name,mem_mode,curr_mem,phys_run_mem,run_mem_weight,mem_overage_cooperation --header
    lslparutil -m P6-520_#1 -r lpar --filter lpar_ids=4 -F lpar_id,lpar_name,mem_mode,curr_mem,phys_run_mem,run_mem_weight,mem_overage_cooperation --header
    lslparutil -m P6-520_#1 -r lpar --filter lpar_ids=5 -F lpar_id,lpar_name,mem_mode,curr_mem,phys_run_mem,run_mem_weight,mem_overage_cooperation --header
    echo
    sleep 5
done
mem_overage_cooperation
Tue Jul 24 10:05:44 EDT 2012
lpar_id,lpar_name,mem_mode,curr_mem,phys_run_mem,run_mem_weight,mem_overage_cooperation
3,lpar229_ams,shared,2048,1500,0,-381
4,lpar230_ams,shared,2048,1711,0,-333
5,lpar231_ams,shared,2048,1398,0,-517
Tue Jul 24 10:07:18 EDT 2012
lpar_id,lpar_name,mem_mode,curr_mem,phys_run_mem,run_mem_weight,mem_overage_cooperation
3,lpar229_ams,shared,2048,1461,0,-420
4,lpar230_ams,shared,2048,1729,0,0
5,lpar231_ams,shared,2048,1420,0,0
mem_overage_cooperation
NMON and AMS
va229 va230 va231
6.1 tl7 7.1 tl1 6.1 tl4
15:05 started nmon byminute
15:12 started FS cache job * yes yes yes
15:17 FS cache job ends
15:20 started memtest (660 sec) 15 (960 M) 9 (576 M) 10 (640 M)
15:32 memtest ends
15:39 started FS cache job * yes yes yes
15:44 FS cache job ends
15:46 started memtest #2 (660 sec) 25 (1600 M) 20 (1280 M) 20 (1280 M)
16:02 memtest #2 ends
16:10 started FS cache job * yes yes yes
16:14 FS cache job ends
16:16 started memtest va229 only 15 (960 M) No No
16:28 memtest va229 ends
* find /usr -xdev -type f -exec cat {} > /dev/null ';'
NMON test timeline:
NMON and AMS
Interactive NMON screen at 15:07:
┌─topas_nmon──o=Disk-Map─────────Host=lpar229─────Refresh=5 secs───15:07.18────┐
│ Memory ──────────────────────────────────────────────────────────────────────│
│ Physical  PageSpace          | pages/sec  In     Out  | FileSystemCache      │
│% Used      51.6%     0.3%    | to Paging Space  0.0  0.0 | (numperm)    4.8% │
│% Free      48.4%    99.7%    | to File System   0.0  6.4 | Process     31.9% │
│MB Used    1057.7MB   5.2MB   | Page Scans       0.0      | System      15.0% │
│MB Free     990.3MB 2042.8MB  | Page Cycles      0.0      | Free        48.4% │
│Total(MB)  2048.0MB 2048.0MB  | Page Steals      0.0      | ------            │
│                              | Page Faults   2861.4      | Total      100.0% │
│------------------------------------------------------------ | numclient 4.8% │
│Min/Maxperm     58MB(  3%)  1748MB( 90%) <--% of RAM  | maxclient       90.0% │
│Min/Maxfree     960   1088       Total Virtual 4.0GB  | User            16.6% │
│Min/Maxpgahead    2     16    Accessed Virtual 0.6GB 15.0%| Pinned     14.6%  │
NMON and AMS
Correlating that to the 15:07 data points on the MEM (Memory) tab shows the
990 MB free:
NMON and AMS
On the MEMNEW (Memory Use) tab at 15:07 we see that
System/Process/Cache percentages also match relative to the Total of 2048 MB
(Desired). We can also see where AMS takes pages from AIX FS cache:
NMON and AMS
Also, there is a MEMAMS (AMS) tab:
NMON and AMS
NMON does properly show Real and Physical memory values:
Active Memory Sharing
Operational Readiness
Two non-prod P6 595s
• 24 CPs and 768 GB memory
• 93 Active LPARs
• 59 AMS enabled LPARs
• LPAR Types enabled - DB2, Oracle and WebSphere
• LPAR Types not enabled – VIOS and ITM Management
Two non-prod P7 770s
• 32 CPs and 1 TB memory
• 137 Active LPARs
• 112 AMS enabled LPARs
• LPAR Types enabled – WPS
• LPAR Types not enabled – VIOS and ITM Management
Environment Overview
P6 595 Frame 1
• 585 GB Pool Size
• 652 GB lpar run memory
P6 595 Frame 2
• 598 GB Pool Size
• 662 GB lpar run memory
P7 770 Frame 1
• 760 GB Pool Size
• 874 GB lpar run memory
P7 770 Frame 2
• 832 GB Pool Size
• 872 GB lpar run memory
Pool Size and LPAR Run Memory
Mid July 2012 - lowest level dev environment on 1st P7 770
Late July 2012 – additional dev environments on same P7 770
Mid August 2012 – implementation continues with enablement on
2nd P7 770.
September 2012 – enablement effort begins on two P6 595s and
continued on P7 770s.
Mid October – targeted implementation complete.
Implementation Timeline – Three Months
Multi-week evaluation of initial implementation
Request and provision of paging devices
Coordination of lpar cycles around application testing (scheduling)
Operationalization of AMS
Implementation Time Consumers
Concern from support staff about:
• the decrease in File System Cache memory
• enabling AMS on LPARs that participate in load tests
Increase in max memory requiring a larger paging device than was
available.
Implementation Issues
Engage all support areas and document procedures of who does
what when
List of teams and corresponding responsibilities
Paging Space device
• Sizes
• When to request new and when to use spares
Memory Weight Standards
AMS performance metrics
After reaching a steady state, almost all available memory has been
allocated to the AMS pool by design; this helps ensure that all new
LPARs allocated on the frame are AMS enabled.
Operationalization of AMS
Spare Paging Devices
Commands to gather AMS metrics from HMC
• Get lpar data for AMS
o lslparutil -m ${f} -d 1 -r lpar | grep mem_mode=shared
• Get frame data for AMS
o lslparutil -m ${f} -d 1 -r mempool -F time,curr_pool_mem,lpar_curr_io_entitled_mem,lpar_mapped_io_entitled_mem,page_faults,lpar_run_mem,page_in_delay,sys_firmware_pool_mem
Gather Relevant Metrics
AMS Metrics
Compare two LPARs over one hour period
• One DEV1 LPAR with weight of 10
• One PRDR LPAR with weight of 80
• Both provide same functionality – WPS Complex Cluster
• Both had similar volume utilization during the compared hours
DEV1
• Hypervisor Page-ins= 162
• Hypervisor Time = 478 milliseconds
PRDR
• Hypervisor Page-ins= 2
• Hypervisor Time = 2 milliseconds
Effect of Mem Weight on Hypervisor Paging
In the past, we've used HMC available memory to do forecasts
With AMS implementation the available memory shown on the HMC
no longer provides a complete picture
The following simple formula allows the same forecasting when
AMS is enabled:
• = (((curr_pool_mem / lpar_run_mem) – predetermined_min) * curr_pool_mem) + hmc_avail_mem
• = (((500/650) – 0.6) * 500) + 10
• = ((0.77 – 0.6) * 500) + 10
• = (0.17 * 500) + 10
• = 85 + 10
• = 95 GB of available memory
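The same formula can be wrapped in a small helper for reuse. This is a sketch: forecast is our name, and the awk arithmetic mirrors the bullet steps above (the slide rounds the 500/650 ratio to 0.77 before multiplying, hence 95 GB).

```shell
# forecast POOL RUN MIN AVAIL: estimated available memory (GB) with AMS on.
# Implements: (((curr_pool_mem / lpar_run_mem) - predetermined_min)
#              * curr_pool_mem) + hmc_avail_mem
forecast() {
  awk -v pool="$1" -v run="$2" -v min="$3" -v avail="$4" \
    'BEGIN { printf "%.1f GB\n", ((pool / run - min) * pool) + avail }'
}

forecast 500 650 0.6 10
# -> 94.6 GB (rounded to 95 GB on the slide)
```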
Forecast Available Memory with AMS
Update loan policy for load tests.
• Change load test lpars to 0
• Change lowest level lpars to 2
Purchase of new hardware that supports memory dedup
Advantage of having LPARs of similar function (i.e. WPS or WAS) on the
same frame
Future Considerations
Thank You!
Q & A
Wrap Up
notes for Ops slides
Operationalization of Active Memory Sharing
The purpose of this document is to identify the teams and outline their processes, activities and tasks to optimize the
virtualization of physical memory while maintaining service level agreements. In short, this document will show who
needs to do what for the effective and efficient use of AMS.
Capacity Engineers
The Capacity Engineers own the memory pools and corresponding AMS configuration. As part of that ownership they
have the following responsibilities:
Collect, analyze and report on AMS metrics.
Provide guidance on AMS enabling additional LPARs.
Track spare paging devices for AMS enabled LPARs.
Manage the memory pool to allow for new AMS enabled LPARs and organic growth of existing LPARs.
Manage the AMS configuration to ensure optimal use.
Provisioning
Ensure paging devices are requested for new AMS enabled LPARs.
Integration Engineers
Provide paging device allocation information to Provisioning for AMS enabled LPARs (initially via comments in the ITSP
request). See section II in the appendix for more details.
Set the Memory Weight according to standard. See section III in the appendix for more details.
Escalate AMS suspected memory issues to a Capacity Engineer.
Core Support
Escalate AMS suspected memory issues to a Capacity Engineer.
notes for Ops slides
Updating
Adding new AMS Enabled LPARs or Modifying Existing Ones
The size of the paging device is based on the maximum memory setting in the LPAR's profile (not the desired). When
completing a TER for a new LPAR or growing an existing LPAR, please request a paging device of the appropriate size:
Max Mem <= 16 GB - 20 GB paging device
16 GB < Max Mem <= 32 GB - 40 GB paging device
32 GB < Max Mem <= 64 GB - 80 GB paging device
64 GB < Max Mem <= 128 GB - 160 GB paging device
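The size table above can be encoded so requests stay consistent. This is a sketch; the paging_size helper name is ours, with arguments and results in GB.

```shell
# paging_size MAX_MEM_GB: standard paging device size (GB) for an LPAR's
# maximum memory setting, per the thresholds above.
paging_size() {
  if   [ "$1" -le 16 ];  then echo 20
  elif [ "$1" -le 32 ];  then echo 40
  elif [ "$1" -le 64 ];  then echo 80
  elif [ "$1" -le 128 ]; then echo 160
  else echo "no standard size for max mem > 128 GB" >&2; return 1
  fi
}

paging_size 16   # -> 20
paging_size 48   # -> 80
```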
Adding a paging device for AMS is the same as adding any other LUN, with one exception: add it to the
second pair of HBAs on the VIO, and note this for the provisioning/SAN team on the request. Once the add is
complete, run engineering standards against it, then immediately add it to the shared pool as a paging device. This
will prevent it from getting picked up as a regular LUN and directly assigned to an LPAR.
If you update an LPAR's memory settings and cross one of the thresholds shown above, add a new paging device for the
LPAR and leave the old one behind for future use (rather than increasing the size of an existing paging LUN). The plan is
to keep a few spare paging LUNs on each frame, which provides some flexibility in the event of a short-term need to
add or update profiles. That said, please request a new LUN even if you use an existing one, to ensure
a few spares remain available.
notes for Ops slides
AMS Memory Weight Standards:
Unless otherwise directed, please use the first number in the standard. For example, set the memory weight of a
PRDR LPAR to 80 and an ITCA LPAR to 40. Following is a list of the memory weight standards by environment:
Environment Memory Weight Standards:
prod    150-255
train2  125-149
train1  125-149
prdr     80-124
qa       80-124
aplt     80-124
lplt     80-124
intg     40-79
itca     40-79
aitc     40-79
test     40-79
mnt2     20-39
dev2     20-39
adv2     20-39
dev4     20-39
cnv2     20-39
mnt1     10-19
dev1     10-19
adv1     10-19
dev3     10-19
cnv1     10-19
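The standards can likewise be encoded so the default (first) weight is applied consistently. This is a sketch; the mem_weight helper name is ours.

```shell
# mem_weight ENV: default (first) memory weight for an environment,
# per the standards listed above.
mem_weight() {
  case "$1" in
    prod)                      echo 150 ;;
    train1|train2)             echo 125 ;;
    prdr|qa|aplt|lplt)         echo 80  ;;
    intg|itca|aitc|test)       echo 40  ;;
    mnt2|dev2|adv2|dev4|cnv2)  echo 20  ;;
    mnt1|dev1|adv1|dev3|cnv1)  echo 10  ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

mem_weight prdr   # -> 80
mem_weight itca   # -> 40
```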
notes for Ops slides
Key AMS metrics
Memory Pool
curr_pool_mem
current amount of physical memory allocated to the shared memory pool by frame.
lpar_run_mem
total desired memory of all shared memory partitions on a frame.
mempool_maxutil_pct
(calc) the peak one minute interval of physical run memory for all AMS LPARs on a single frame represented as a
percentage of the current physical memory allocated from the shared memory pool.
page_faults_daily_tot
(calc) number of page faults by all shared memory partitions on a frame for a calendar day.
curr_pool_mem_lpar_run_mem_pct
(calc) ratio of curr_pool_mem to lpar_run_mem. For optimal performance, the value should be greater than 59%.
Virtual I/O Server
CPU utilization.
Disk I/O utilization for the AMS-related paging devices.
Available paging devices and size by frame.
AMS Enabled LPAR
server_max_pmem_pct
(calc) the peak one minute interval of physical run memory for a single AMS LPAR represented as a percentage of the
current logical memory allocated to an lpar.
mem_overage_cooperation
current amount of logical memory that is not backed by physical memory.
Additional AMS stats can be found in ITM under the AIX Premium agent of each LPAR under the System member.
notes for Ops slides
There is a script on each VIO server for each frame that participates in AMS.
The output of this script is saved on the VIO, and another script collects the output with a unique name.
The script and its output are below.
$ cat /tmp/VIO_AMS.ksh
#!/usr/bin/ksh93
#
# ver 2a
#
# pagingLUN,clientID,status,hdisk##,hdisk##,pvid,HIT_LUN,Size,reserve
#
cd /nmon/2012_nmon/scripts
hn=`hostname`
hostname > /tmp/VIO_INFO.OUT
> /tmp/VIO_INFO.OUT
date +"%H:%M.%S,%a %d %h,%Y" >> /tmp/VIO_INFO.OUT
cat /tmp/VIO_INFO.OUT

function lspv_size {
    for i in `lspv |awk '{print $1}'`
    do
        HD=`echo $i`
        PVID=`lsattr -El $i |grep pvid | awk '{print $2}' |cut -c 1-16`
        #HIT_LUN=`lscfg -vl $i|grep Z1 |awk '{print $2}'|cut -c 22-25`
        SIZE=`/usr/local/bin/sudo bootinfo -s $i`
        #RSVPOL=`lsattr -El $i |grep reserve_policy | awk '{print $2}'`
        printf "HD,PVID,SIZE\n"
        printf "$HD,$PVID,$SIZE\n"
    done
} ### lspv_size

function lsmap {
    for k in `sudo /usr/ios/cli/ioscli lsmap -all | grep vhost | awk '{print $1 }' `
    do
        SLOTID=`sudo /usr/ios/cli/ioscli lsmap -vadapter $k | egrep $k | awk '{print $2}' | cut -c 23-24`
        printf "$k,$SLOTID\n" >> /tmp/VIO_INFO.OUT
    done
} ### lsmap
notes for Ops slides
function lsmap_ams {
    /usr/local/bin/sudo /usr/ios/cli/ioscli lsmap -ams -all -field status paging backing clientid -fmt ","
} ### lsmap_ams

function combo {
    cd /tmp
    for l in `cat LSMAP_AMS.OUT ` ; do
        printf "${l},"
        l1=`printf "${l}" | awk -F',' '{print $1}' `
        l2=`printf "${l}" | awk -F',' '{print $2}' `
        l3=`printf "${l}" | awk -F',' '{print $3}' `
        l4=`printf "${l}" | awk -F',' '{print $4}' `
        for j in `grep ${l4} VIO_INFO.OUT ` ; do
            j1=`printf "${j}" | awk -F',' '{print $1}' `
            j2=`printf "${j}" | awk -F',' '{print $2}' `
            j3=`printf "${j}" | awk -F',' '{print $3}' `
            j4=`printf "${j}" | awk -F',' '{print $4}' `
            j5=`printf "${j}" | awk -F',' '{print $5}' `
            printf "${j}\n"
        done
    done
} ### combo

#####
lspv_size > /tmp/VIO_INFO.OUT
#
### lsmap
lsmap_ams > /tmp/LSMAP_AMS.OUT
> /tmp/COMBO_AMS.OUT
printf "pagingLUN,clientID,status,hdisk##,hdisk##,pvid,Size\n" >> /tmp/COMBO_AMS.OUT
combo >> /tmp/COMBO_AMS.OUT
notes for Ops slides
SAMPLE OUTPUT
$ cat /tmp/COMBO_AMS.OUT
pagingLUN,clientID,status,hdisk##,hdisk##,pvid,Size
vrmpage0,79,active,hdisk112,hdisk112,00f71e1e6d092c22,20480
vrmpage1,85,active,hdisk113,hdisk113,00f71e1e6d092dcb,20480
vrmpage2,63,active,hdisk114,hdisk114,00f71e1e6d092f62,20480
vrmpage3,67,active,hdisk115,hdisk115,00f71e1e6d0930ea,20480
vrmpage4,59,active,hdisk116,hdisk116,00f71e1e6d09327b,20480
vrmpage5,57,active,hdisk117,hdisk117,00f71e1e6d093407,20480
vrmpage6,64,active,hdisk118,hdisk118,00f71e1e6d093597,20480
vrmpage7,66,active,hdisk119,hdisk119,00f71e1e6d09373a,20480
vrmpage8,60,active,hdisk120,hdisk120,00f71e1e6d0938ea,20480
vrmpage9,56,active,hdisk121,hdisk121,00f71e1e6d093a7a,20480
vrmpage10,75,active,hdisk122,hdisk122,00f71e1e6d093bff,20480
vrmpage11,73,active,hdisk123,hdisk123,00f71e1e6d093db7,20480
vrmpage12,70,active,hdisk124,hdisk124,00f71e1e6d093f46,20480
vrmpage13,83,active,hdisk125,hdisk125,00f71e1e6d0940de,20480
vrmpage14,61,active,hdisk126,hdisk126,00f71e1e6d094273,20480
vrmpage15,65,active,hdisk127,hdisk127,00f71e1e6d0943f6,20480
vrmpage16,84,active,hdisk128,hdisk128,00f71e1e6d09459c,20480
vrmpage17,82,active,hdisk129,hdisk129,00f71e1e6d09471e,20480
vrmpage18,62,active,hdisk130,hdisk130,00f71e1e6d0948a6,20480
vrmpage19,68,active,hdisk131,hdisk131,00f71e1e6d094a3a,20480
BONUS
Paging Device Redundancy
If a shared memory LPAR is shut down (stopped) and re-activated
while both Paging VIO Servers are available, there is no impact.
If a Paging VIO Server becomes unavailable, the shared memory
LPARs that have been configured with redundant Paging VIO
Servers will automatically switch to using their active paging
devices on the remaining Paging VIO Server (no impact).
If a redundant Paging VIO Server is stopped, the LPAR paging
device paths fail over to the remaining Paging VIO Server.
HOWEVER, when a redundant Paging VIO Server comes back up from
a reboot or maintenance, the paging device paths DO NOT
automatically fall back. The current Paging VIOS for the shared
memory LPARs remains the Paging VIO Server that the LPARs
last switched to.
Paging VIOS and Paging Device Maintenance
First, determine the VIO Server names for Paging VIOS 1 and 2:
labroot:~> lshwres -m P6-520_#1 -r mempool -F paging_vios_names,paging_vios_ids
"lpar024_vio1,lpar025_vio2","1,2"
From the output above, Paging VIOS 1 is lpar024_vio1 and
Paging VIOS 2 is lpar025_vio2.
Let's now check which is the current Paging VIO Server for each
shared memory LPAR:
labroot:~> lshwres -m P6-520_#1 -r mem --level lpar -F lpar_id,lpar_name,curr_paging_vios_name | grep -v 'null'
5,lpar231_ams,lpar024_vio1
4,lpar230_ams,lpar024_vio1
3,lpar229_ams,lpar024_vio1
Hmm, based on recommendations, LPAR ID 4 should be using the even-numbered
Paging VIOS #2, which is lpar025_vio2 in this example.
Before fixing it, let's look a little closer and verify that LPAR ID 4's
profile memory settings are correct:
labroot:~> lshwres -m P6-520_#1 -r mem --level lpar -F lpar_id,lpar_name,primary_paging_vios_name,secondary_paging_vios_name,run_mem_weight,curr_paging_vios_name | grep -v 'null'
5,lpar231_ams,lpar024_vio1,lpar025_vio2,128,lpar024_vio1
4,lpar230_ams,lpar025_vio2,lpar024_vio1,128,lpar024_vio1
3,lpar229_ams,lpar024_vio1,lpar025_vio2,128,lpar024_vio1
From the output, the Primary Paging VIOS is set correctly in
LPAR 4's profile.
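A mismatch like this can be spotted mechanically. A minimal sketch, assuming the same six-field lshwres output shown above; a canned copy of the sample output stands in for the live HMC call so the logic can be tested offline:

```shell
# Flag shared-memory LPARs whose current Paging VIOS differs from the
# primary set in their profile. On a live HMC the data would come from:
#   lshwres -m 'P6-520_#1' -r mem --level lpar -F \
#     lpar_id,lpar_name,primary_paging_vios_name,secondary_paging_vios_name,run_mem_weight,curr_paging_vios_name
lshwres_mem() {
cat <<'EOF'
5,lpar231_ams,lpar024_vio1,lpar025_vio2,128,lpar024_vio1
4,lpar230_ams,lpar025_vio2,lpar024_vio1,128,lpar024_vio1
3,lpar229_ams,lpar024_vio1,lpar025_vio2,128,lpar024_vio1
EOF
}

# Field 3 = primary paging VIOS, field 6 = current paging VIOS.
lshwres_mem | awk -F, '$3 != $6 { print $2 ": current=" $6 ", primary=" $3 }'
```

Against the sample data this flags only lpar230_ams, matching the manual inspection above.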
The profile is correct, so let's toggle the Paging VIO Server for LPAR 4:
labroot:~> chhwres -m P6-520_#1 -r mem -p lpar230_ams -o so
Now the current Paging VIOS matches the Primary Paging VIOS setting:
labroot:~> lshwres -m P6-520_#1 -r mem --level lpar -F lpar_id,lpar_name,primary_paging_vios_name,secondary_paging_vios_name,run_mem_weight,curr_paging_vios_name | grep -v 'null'
5,lpar231_ams,lpar024_vio1,lpar025_vio2,128,lpar024_vio1
4,lpar230_ams,lpar025_vio2,lpar024_vio1,128,lpar025_vio2
3,lpar229_ams,lpar024_vio1,lpar025_vio2,128,lpar024_vio1
As listed under Constraints, if both Paging VIO Servers are not
available when a Shared Memory LPAR is activated, the LPAR will
fail to start:
HSCLA47C Partition lpar229_ams cannot be activated with the paging Virtual I/O Server (VIOS) partition
configuration specified in the profile because one of the paging VIOS partitions is not available, or a paging space device that can be used with that paging VIOS configuration is not available. However, this partition can be activated with a different paging VIOS partition configuration now. If this partition is configured to use redundant paging VIOS partitions, then this partition can be activated to use a non-redundant paging VIOS partition. If this partition is configured to use non-redundant paging VIOS partitions, then this partition can be activated to use a different paging VIOS partition than the one specified in the profile. If you want to activate this partition with the paging VIOS configuration that is available now, then run the chsysstate command with the --force option to activate this partition.
There are three options for the scenario where a Paging VIO Server
is unavailable and a Shared Memory LPAR is trying to Activate, in
order of preference:
• Wait until both Paging VIO Servers are available, then Activate the
LPAR. This is the ONLY option that does not require the LPAR to be
stopped and re-activated in order to restore redundancy of paging
devices (PREFERRED).
• If system memory is available, Activate the LPAR in Dedicated Memory
mode. After both Paging VIO Servers become available, stop the LPAR
and re-activate it in Shared Memory mode.
• Activate the LPAR without redundancy (not recommended). Two
options:
― chsysstate -r lpar -m P6-520_#1 -o on -n lpar230_ams -f normal_ams --force
― Change the LPAR profile and replace the unavailable Paging VIOS with
'none'.
With the 'no-redundancy' option, when the Paging VIOS comes back up
after the shared memory LPAR was activated, the paging device
previously associated with it will usually appear as 'failed' on the
Paging VIOS that just came back up:
In this case it was the Secondary Paging VIOS that was unavailable
(lpar229):
labroot:~> lshwres -m P6-520_#1 -r mem --level lpar -F lpar_id,lpar_name,primary_paging_vios_name,secondary_paging_vios_name,run_mem_weight,curr_paging_vios_name | grep -v null
5,lpar231_ams,lpar024_vio1,lpar025_vio2,64,lpar024_vio1
4,lpar230_ams,lpar024_vio1,lpar025_vio2,128,lpar025_vio2
3,lpar229_ams,lpar024_vio1,,192,lpar024_vio1
On the Secondary Paging VIOS (lpar025_vio2 in this case):
# lsmap -all -ams -field paging clientid status redundancy backing -fmt ,
vrmpage0,5,active,yes,hdisk27
vrmpage1,3,failed,yes,hdisk28
vrmpage2,4,active,yes,hdisk29
The only resolution for this condition: once both Paging VIOS are
available, shut down the shared memory LPAR, reinstate the profile
memory Paging VIOS entries (if changed), then on the affected Paging
VIOS remove and re-add the 'failed' paging disk device.
To remove the paging disk device:
labroot:~> chhwres -m P6-520_#1 -r mempool -o r -p lpar025_vio2 --device hdisk28 --rsubtype pgdev
labroot:~>
To add back the paging disk device:
labroot:~> chhwres -m P6-520_#1 -r mempool -o a -p lpar025_vio2 --device hdisk28 --rsubtype pgdev
labroot:~>
(This can also be performed via the HMC GUI, but it involves many more steps; CLI recommended.)
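The remove/add pair lends itself to a small wrapper. A sketch under stated assumptions: recycle_paging_device is a hypothetical helper, not an HMC command, and DRY_RUN=1 prints the chhwres commands instead of running them, which also lets the logic be exercised off-HMC:

```shell
# Hypothetical helper: remove and re-add a failed paging device on a
# Paging VIOS using the chhwres commands shown above.
# With DRY_RUN=1 the commands are printed rather than executed.
recycle_paging_device() {
    sys=$1; vios=$2; disk=$3
    for op in r a; do   # r = remove, a = add back
        cmd="chhwres -m $sys -r mempool -o $op -p $vios --device $disk --rsubtype pgdev"
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "$cmd"
        else
            $cmd
        fi
    done
}

DRY_RUN=1
recycle_paging_device 'P6-520_#1' lpar025_vio2 hdisk28
```

Remember the ordering constraint from the text: the shared memory LPAR must be shut down and both Paging VIOS available before recycling the device.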
Now, with that shared memory LPAR not yet re-activated:
vrmpage0,5,active,yes,hdisk27
vrmpage1,3,inactive,yes,hdisk28
vrmpage2,4,active,yes,hdisk29
After that shared memory LPAR is re-activated:
vrmpage0,5,active,yes,hdisk27
vrmpage1,3,active,yes,hdisk28
vrmpage2,4,active,yes,hdisk29
Well Done, grasshopper…
Supplemental / Unused Slides
If an attempt is made to set the Shared Memory Pool size greater than
Available Memory, the following message (HSCA41A) will be received:
HSCA41A There is an insufficient amount of memory available on the managed system to set the shared memory pool to the requested size. 92160 MB of memory was requested, but only 58880 MB of memory is available. If you want to set the shared memory pool to the requested size, you must free sufficient system memory first.
Logical Memory
• In a shared memory partition the Minimum, Desired, and
Maximum settings do not represent physical memory values.
• The real physical memory is part of the shared memory pool
that is virtualized by the hypervisor to allocate resources to the
shared memory partitions.
AMS Terminology
By design, AIX always tries to maximize its use of logical memory
by caching as much file data as possible.
AIX cooperates with the hypervisor in shared memory handling by
classifying the importance of logical memory pages and by
responding to hypervisor requests to loan them.
Loaning of Memory Pages
Memory Weight example
Three shared memory LPARs with equal memory weights of 128 (as shown by vmstat -h).
Note the pmem and loan columns:
Now change memory weights and view result of hypervisor memory adjustments (pmem,loan):
lpar030> chhwres -m P6-520_#1 -r mem -o s -p lpar229_ams -a "mem_weight=192"
lpar030> chhwres -m P6-520_#1 -r mem -o s -p lpar231_ams -a "mem_weight=64"
Change memory weights again and view result of hypervisor memory adjustments (pmem,loan):
lpar030> chhwres -m P6-520_#1 -r mem -o s -p lpar229_ams -a "mem_weight=0"
lpar030> chhwres -m P6-520_#1 -r mem -o s -p lpar230_ams -a "mem_weight=255"
lpar030> chhwres -m P6-520_#1 -r mem -o s -p lpar231_ams -a "mem_weight=0"
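As loose intuition only (not the hypervisor's actual placement algorithm, which also considers page usage and loaning), memory weight can be read as a relative share: weight divided by the sum of all weights. A sketch against the weights just set:

```shell
# Rough intuition: memory weight (0-255) is a relative priority, so an
# LPAR's share of contended pool memory trends toward weight / sum(weights).
# The real hypervisor algorithm is more involved; this is illustration only.
weights() {
cat <<'EOF'
lpar229_ams,0
lpar230_ams,255
lpar231_ams,0
EOF
}

weights | awk -F, '{ w[$1] = $2; tot += $2 }
    END { for (l in w) printf "%s %.0f%%\n", l, (tot ? 100 * w[l] / tot : 0) }' | sort
```

With the weights above, lpar230_ams holds the entire priority share, which is consistent with the pmem/loan shifts the vmstat output demonstrates.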
That’s all folks . . .
the last slide