
IBM Systems Solution for SAP HANA
Hardware, Operating System & GPFS Operations Guide

Author: Christoph Nelles, SAP on System x

IBM Cloud and IT Optimization

IBM Deutschland Research & Development, GmbH.

In cooperation with: SAP AG

Created on 11th September 2013 17:27 – Version 1.6.60-6

© Copyright IBM Corporation, 2013


Contents

1 Preface
1.1 About this Guide
1.2 Getting the latest version
1.3 Conventions
1.3.1 Icons Used
1.3.2 Code Snippets
1.4 Appliance Versions
1.4.1 Determining Appliance Version
1.4.2 Appliance Changelog
1.5 Acknowledgments
1.6 Feedback

2 System Health Check
2.1 Overall System status
2.2 GPFS
2.2.1 GPFS distributed shell command
2.2.2 GPFS Cluster status
2.2.3 GPFS Cluster configuration
2.2.4 Verify cluster settings
2.2.5 File system status
2.2.6 Disk status
2.2.7 Disk usage
2.2.8 Quotas
2.2.9 SAP HANA Log Files Storage
2.3 SAP HANA Application status
2.3.1 Command line
2.3.2 HDB Admin tool
2.3.3 SAP HANA Studio
2.4 Network
2.4.1 Switch Access
2.4.2 Privilege mode
2.4.3 System information
2.4.4 FW image / Boot image information
2.4.5 Display Switch configuration
2.4.6 Check interfaces
2.4.7 Check links
2.4.8 Check vlans
2.4.9 Check transceivers
2.4.10 Check vLAG / LACP information
2.5 Additional Tools for System Checks
2.5.1 IBM Advanced Settings Utility
2.5.2 IBM ServeRAID MegaCli Utility for Storage Management
2.5.3 IBM SSD Wear Gauge CLI utility
2.5.4 IBM Systems Solution for SAP HANA Health Checker

3 Single Node Operations
3.1 Notice
3.2 starting
3.3 stopping
3.4 gpfs failure
3.5 HANA failure
3.6 Disk failure
3.7 Network failure

4 Cluster Operations
4.1 After server node failure actions
4.1.1 Recovering the GPFS file system
4.1.2 Removing the SAP HANA node
4.1.3 Remove SSH Keys of failed node
4.1.4 Installing replacement node
4.2 Temporary Node Failure
4.2.1 GPFS
4.2.2 SAP HANA
4.3 Adding a cluster node
4.3.1 Server Source (Existing Cluster, Existing Single Node, New Server)
4.3.2 GPFS
4.3.3 SAP HANA
4.4 Removing a node
4.4.1 Reorganizing HANA cluster
4.4.2 Removing HANA Software
4.4.3 Remove host(s) from GPFS cluster
4.5 Reinstalling a SAP HANA cluster
4.5.1 Delete SAP HANA
4.5.2 Installing first SAP HANA node
4.5.3 Install other SAP HANA nodes

5 DR Cluster Operations
5.1 Notice
5.2 Terminology
5.3 Common operations deviating from the SAP HANA Operations Guide
5.3.1 System shutdown
5.3.2 System startup
5.4 Planned failover/failback procedures
5.4.1 Failover to secondary site
5.4.2 Failback to primary site
5.5 Site failover procedures with tiebreaker node
5.5.1 Stop non-productive systems
5.5.2 Check current cluster configuration
5.5.3 Change Configuration servers
5.5.4 Relax Cluster Node Quorum
5.5.5 Delete disks of failed sites
5.5.6 Restoring HA capabilities on failover Site
5.5.7 Mount global shared file system
5.5.8 Start SAP HANA on failover site
5.5.9 (Optional) Delete nodes from failed site
5.5.10 Restripe file system
5.5.11 Failback with tiebreaker node
5.6 Site failover procedures without tiebreaker node or loss of site A & tiebreaker node
5.6.1 Bring the file system on surviving site online
5.6.2 Delete disks of failed sites
5.6.3 Restoring HA capabilities on failover Site
5.6.4 Mount the file system
5.6.5 Start SAP (Host Agent & SAP HANA)
5.6.6 Failback to site A
5.7 Node failure
5.8 Disk failure
5.9 Network Failure
5.9.1 Inter-Site link failure
5.9.2 Single switch failure
5.10 Expansion Box (non-productive Systems on Backup Site) Operations
5.10.1 Description
5.10.2 Site Failure Operation
5.10.3 Site to Site Migration

6 Hard Drive Operations
6.1 MegaCli
6.2 Collecting Information
6.2.1 Read out controller configuration
6.3 Replacing failed hard drives
6.3.1 Remove failed disk from GPFS
6.3.2 Check physical drive status
6.3.3 Determine next steps
6.3.4 Remove old config
6.3.5 Clear controller cache
6.3.6 Add new disk and configure raid
6.3.7 Rescan SCSI bus to detect new drives
6.3.8 Add disk to GPFS file system
6.4 Replacing failed IBM High IOPS drives
6.4.1 General Information on IBM High IOPS drives (Fusion-io)
6.4.2 Failed High IOPS drive in a Single Node Server
6.4.3 Removing the failed High IOPS drive
6.4.4 Installing a replacement card
6.4.5 Driver & Firmware Upgrade
6.5 Add new disk to GPFS
6.5.1 Create disk descriptor file
6.5.2 Create new disk in GPFS
6.5.3 Add new disk to file system

7 Software Updates/Upgrades
7.1 Warning
7.2 Linux Kernel Update
7.2.1 Kernel Update Methods
7.2.2 Updating IBM Workload Optimized System x3950 X5
7.2.3 Updating IBM Workload Optimized System x3690 X5
7.3 Updating High IOPS Drivers
7.4 Updating GPFS
7.4.1 Disruptive Cluster Update
7.4.2 Full Cluster Rolling Update
7.5 SLES for SAP 11 SP1 Upgrade to SLES for SAP 11 SP2
7.5.1 Prerequisites
7.5.2 Upgrade Notes
7.5.3 IBM High IOPS Card Notes (x3950 only)
7.5.4 Rolling Upgrade
7.5.5 Upgrade Overview
7.5.6 Upgrade Steps
7.6 SAP HANA

8 TSM Backups for HANA

Appendices

A Support Script Troubleshooting
A.1 Check Script Usage
A.2 FAQ
A.2.1 FAQ #1: SAP HANA Memory Limits
A.2.2 FAQ #2: GPFS parameter readReplicaPolicy
A.2.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines
A.2.4 FAQ #4: Overlapping NSDs
A.2.5 FAQ #5: Missing RPMs
A.2.6 FAQ #6: CPU Governor set to ondemand

B GPFS Disk Descriptor Files
B.1 Old Disk Descriptor Format
B.2 New Disk Descriptor Format (Stanzas)
B.2.1 GPFS Single Node Stanza File Template for SAP HANA Data
B.2.2 GPFS Cluster Node Stanza Template for GPFS file system

C Topology Vectors (GPFS 3.5 failure groups)

D Quotas
D.1 New Quota Calculation
D.1.1 Quota Calculation Script
D.2 Pre-calculated Quotas

E IBM Machine Type Model Code (MTM) to SAP HANA T-Shirt Size Mapping

F Public References
F.1 IBM External References
F.2 IBM Corrective Service Tips (IBM ID required)
F.3 SAP Service Marketplace (SAP Service Marketplace ID required)
F.4 SAP Help Portal
F.5 SAP Notes (SAP Service Marketplace ID required)
F.6 Novell SUSE Linux Enterprise Server References

G Copyrights and Trademarks


List of Figures
1 Sample check script output
2 HDB Admin System Overview
3 HDB Admin Landscape tab for services
4 HDB Studio: Add System
5 HDB Studio: System Overview
6 HANA Data Volumes
7 SLES OS Upgrade Flow Chart


List of Tables
1 Appliance changelog
2 Three node example configuration
3 Additional fourth node example
4 SAP HANA Unified Installer Template Variables
5 IBM High IOPS Firmware / Driver dependencies
6 Update IBM High IOPS Driver Checklist
7 Upgrade GPFS Portability Layer Checklist
8 Upgrade IBM High IOPS Driver Checklist
9 Upgrade GPFS Portability Layer Checklist
10 IBM High IOPS Non-destructive Upgrade Paths
11 GPFS Single Node Disk Descriptor Template for SAP HANA Data
12 GPFS Cluster Node Disk Descriptor Template for GPFS file system
13 Topology Vectors in an 8 node DR-cluster
14 Calculated quotas for Single Node Servers
15 Calculated quotas for HA-clusters
16 SAP HANA T-Shirt Size to IBM MTM Mapping


List of Abbreviations

ASU IBM Advanced Settings Utility

DR Disaster Recovery (previously SAP Disaster Tolerance)

GPFS IBM General Parallel File System™

IMM IBM System x Management Module

ISICC IBM SAP International Competence Center

NIC Network Interface Controller

OLAP On Line Analytical Processing

OLTP On Line Transaction Processing

SAP HANA SAP HANA Platform Edition appliance software

SLES SUSE Linux Enterprise Server

SLES for SAP SUSE Linux Enterprise Server for SAP Applications

VLAG Virtual Link Aggregation Group

VLAN Virtual Local Area Network


1 Preface

Neither this documentation nor any part of it may be copied or reproduced in any form or by any means or translated into another language, without the prior consent of the IBM Corporation.

IBM makes no warranties or representations with respect to the content hereof and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. IBM assumes no responsibility for any errors that may appear in this document. The information contained in this document is subject to change without any notice. IBM reserves the right to make any such changes without obligation to notify any person of such revision or changes. IBM makes no commitment to keep the information contained herein up to date.

Edition Notice: 11th September 2013

1.1 About this Guide

This guide is a product of a collective effort of the SAP HANA on IBM System x Development Team. As a "living document" it is under constant rework to improve existing documentation based on experience and feedback and to document new features and changes to the appliance.

It is a handbook for system administrators that gives advice and guidelines on how to operate the IBM Systems Solution for SAP HANA appliance.

This guide requires knowledge in the operation of Linux systems. Knowledge of GPFS (IBM General Parallel File System) is recommended to understand the principles behind the cluster operations with the IBM Systems Solution for SAP HANA appliance.

1.2 Getting the latest version

The latest version of this document can be downloaded from SAP Note 1650046 - IBM SAP HANA Appliance Operations Guide (https://service.sap.com/sap/support/notes/1650046) or from IBM w3 Connections (https://ibm.biz/BdxAra). New versions will be published without prior notice.

1.3 Conventions

This guide uses several conventions to improve the reader’s experience and the ease of understanding.

1.3.1 Icons Used

The following information boxes indicate important information you should follow, graded by level of importance.

Attention: pay close attention to the instructions given.

Warning: this is something to take into consideration.



Note: extra information describing a topic in more detail.

1.3.2 Code Snippets

When reading code snippets you have to note the following: Lines of code that are too long to be shown in one line will be automatically broken. This linebreak is indicated by an arrow at the end of the first and an arrow at the start of the second line:

1 This is a code snippet that is too long to be printed in one single line, therefore ←↩
↪→you will see an automatic linebreak.

There are also line numbers at the left side of each code snippet to improve the readability.

Code examples that contain commands that have to be executed on a command line follow these rules:

• Lines beginning with a # indicate commands to be executed by the root user.

• Lines beginning with a $ indicate commands to be executed by an arbitrary user.
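For illustration only, a hypothetical snippet following these conventions: the first command would be executed by the root user, the second by an arbitrary user (the commands themselves are arbitrary examples, not taken from any procedure in this guide):

1 # zypper refresh
2 $ id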

1.4 Appliance Versions

During the lifetime of the appliance changes were made to the delivered software or the appliance setup which necessitate different handling in some appliance operations.

If parts of this guide are only valid for certain appliance versions, these sections, paragraphs and chapters will be marked as follows:

• [1.3]+ denotes appliance version 1.3.x and later

• -[1.3] denotes appliance version 1.3.x and earlier

• [1.3] applies only to version 1.3.x

• [1.2]-[1.4] applies to all versions from 1.2.x to 1.4.x

• [DR] applies only to Disaster Resistance enabled clusters

• [HA] applies only to standard High-Availability clusters

• [Single] is only valid for single node installations

In general the information given here is valid for all appliances 1.3.x and later.

1.4.1 Determining Appliance Version

In appliances installed with a release 1.3.x or later, the appliance software version used to install a node can be read from the file /etc/opt/ibm/appliance-version.

Appliance version 1.5.53-5 and subsequent will have a version number formatted like 1.5.53-5.690. The first 4 numbers are the appliance version, the last number is an internal build number.
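For example, the version file can simply be printed; the value shown here is just the illustrative version number from the paragraph above, the file on your node will differ:

1 # cat /etc/opt/ibm/appliance-version
2 1.5.53-5.690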

In version 1.3.x the string "-3" was written as appliance version, appliances with "4-4" are version 1.4.x.

If this file does not exist, you can obtain the appliance version by executing

1 # rpm -qi ibm-saphana-ipla

Different components like SAP HANA or drivers may have newer versions installed due to updating, but this does not change the appliance version.


1.4.2 Appliance Changelog

Only major changes necessitating different operations are listed.

Appliance Version / Changes

1.6.60-6
• Quota calculation changed

1.5.53-5
• SLES4SAP 11 SP2 support
• SLES4SAP 11 SP1 support dropped
• GPFS stanza files instead of deprecated disk descriptors
• VMWare virtualization supported

1.5.46-4
• GPFS 3.5
• GPFS FPO License
• Host-based Routing as workaround for SAP OSS Note 1780950
• Support for DR-enabled clusters

1.4.28-3
• Support for generation 2 models (7147-HAX, 7147-HBX, 7143-HAX, 7143-HBX, 7143-HCX)

1.4.28-2
• Support for clusters based on updated SSD Models (7147-H3X with 16 SSD drives)

1.4.28-1
• First official support for cluster installations
• New File System layout according to SAP's most recent design

1.3.23-1
• First official support for single node installations

Table 1: Appliance changelog

1.5 Acknowledgments

The authors of this document are:

• Trick Hartman, IBM Development @ SAP LinuxLab, Germany

• Christoph Nelles, IBM Development @ SAP LinuxLab, Germany

• Thorsten Nitsch, IBM GTS @ SAP LinuxLab, Germany

• Florian Bausch, IBM Development @ SAP LinuxLab, Germany

• Volker Fischer, IBM Development @ SAP LinuxLab, Germany

• Martin Bachmaier, IBM Development @ SAP LinuxLab, Germany

• Richard Ott, IBM Systems Lab Services Germany


• Wolfgang Kamradek, IBM Development @ SAP LinuxLab, Germany

• Volker Pense, IBM Development @ SAP LinuxLab, Germany

• Christoph Schabert, IBM University Programs, Mainz/Mannheim, Germany

The authors would like to thank the following IBM Colleagues:

• Herbert Diether, IBM Development @ SAP LinuxLab, Germany

• Gereon Vey, IBM SAP International Competence Center, Germany

• Oliver Rettig, IBM Development @ SAP LinuxLab, Germany

• Keith Frisby, IBM Systems Lab Services, US

• Alexander Trefs, IBM Global Technical Services, Germany

And many people at SAP Development, Walldorf, Germany; specifically:

• Michael Becker, SAP HANA Support Development, Walldorf, Germany

• Oliver Rebholz, SAP HANA Development, Walldorf, Germany

• Helmut Cossmann, SAP HANA Development, Walldorf, Germany

• Stephan Kreitz, SAP HANA Support, Walldorf, Germany

• Abdel Sellami, SAP HANA Support, Walldorf, Germany

• Henning Sackewitz, SAP Development @ SAP LinuxLab

1.6 Feedback

We are interested in your comments and feedback. Please send it to [email protected].


2 System Health Check

This chapter describes different steps to check the appliance's health status. Section 2.1: Overall System status is very important; at least the script described there should be updated and executed at regular intervals by a system administrator. The other sections present additional information and give deeper insight into the system.

2.1 Overall System status

IBM provides a script to get an overview of the current system status and the configuration of the running system.

Note: We recommend updating and executing this script regularly to prevent system downtimes and other unwanted behaviour of the IBM Systems Solution for SAP HANA appliance.

Note: Please also consider installing the tools described in Section 2.5: Additional Tools for System Checks to improve the results of the analysis.

-[1.4] Older installations provide a script called saphana-check-ibm.sh which is preinstalled in /opt/ibm/saphana/bin. To use this script, simply execute

1 # /opt/ibm/saphana/bin/saphana-check-ibm.sh

As this script is very outdated it is recommended to fetch the most recent version from SAP Note 1661146 (https://service.sap.com/sap/support/notes/1661146). You will then have a file called saphana-support-ibm.sh. Execute

1 # /opt/ibm/saphana/bin/saphana-support-ibm.sh -c

The command line parameters of the script are the same as described for [1.5]+.

[1.5]+ Newer installations have a combined support & check script called saphana-support-ibm.sh. The two modes "collect support data" and "check system" are selected by the parameters -s and -c, respectively. To create a support data dump when requested by the support team in an SAP OSS ticket, run

1 # saphana-support-ibm.sh -s

You can check your system health with the following command:

1 # saphana-support-ibm.sh -c

Some longer running and performance impacting checks can be run with the command:

1 # saphana-support-ibm.sh -c -e

For additional parameters and help use:

1 # saphana-support-ibm.sh -h

It is also recommended to download a more recent version from SAP Note 1661146. The script may be updated without prior notice.
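As a sketch of how the recommended regular execution could be automated, the check mode can be scheduled via cron, e.g. with the following line in a hypothetical file /etc/cron.d/saphana-healthcheck. The schedule and the log file location are assumptions and should be adapted to your environment; the full script path is used because cron does not rely on the interactive search path:

1 0 2 * * * root /opt/ibm/saphana/bin/saphana-support-ibm.sh -c > /var/log/saphana-healthcheck.log 2>&1

This runs the check every night at 02:00 as root and writes the output to a log file for later review.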

Figure 1 shows an example output of an IBM System x3690. (The output may vary on your system.)

The various checks should return "OK", and the GPFS status should be active.


Figure 1: Sample check script output

2.2 GPFS

The GPFS cluster file system must be up and running for SAP HANA. If there are problems with the SAP HANA application, it is therefore recommended to check the GPFS status first.

2.2.1 GPFS distributed shell command

GPFS comes with a distributed shell command mmdsh, which allows commands to be run on several nodes (in sequence, not in parallel). It requires a file with all the nodes where a command should be run. For convenience, the environment variable WCOLL can be set so the list does not need to be specified on every call of mmdsh. (With [1.5] this is configured automatically during installation.)

Example: The file /var/mmfs/config/nodes.list contains these entries:

1 # cat /var/mmfs/config/nodes.list
2 gpfsnode01
3 gpfsnode02
4 gpfsnode03
5 gpfsnode04
6 gpfsnode05
7 gpfsnode06
8

9 # export WCOLL=/var/mmfs/config/nodes.list
10

11 # mmdsh date
12 gpfsnode01: Wed Jun 13 14:44:16 CEST 2012
13 gpfsnode02: Wed Jun 13 14:44:16 CEST 2012



14 gpfsnode05: Wed Jun 13 14:44:16 CEST 2012
15 gpfsnode06: Wed Jun 13 14:44:16 CEST 2012
16 gpfsnode03: Wed Jun 13 14:44:16 CEST 2012
17 mmdsh: gpfsnode04 remote shell process had return code 255.
18 gpfsnode04: ssh: connect to host gpfsnode04 port 22: No route to host

In this case gpfsnode04 is not running (or at least not reachable on the network).

2.2.2 GPFS Cluster status

The current cluster status is checked with the command "mmgetstate".

1 # mmgetstate -aLs
2

3 Node number Node name Quorum Nodes up Total nodes GPFS state Remarks
4 ------------------------------------------------------------------------------------
5 1 gpfsnode01 1 1 1 active quorum node
6

7 Summary information
8 ---------------------
9 Number of nodes defined in the cluster: 1

10 Number of local nodes active in the cluster: 1
11 Number of remote nodes joined in this cluster: 0
12 Number of quorum nodes defined in the cluster: 1
13 Number of quorum nodes active in the cluster: 1
14 Quorum = 1, Quorum achieved

The output shows that the command was run on a single node system. The GPFS state is active, and the quorum is achieved. This shows a healthy GPFS cluster on a single node.

In a scaleout configuration it looks like this:

1 # mmgetstate -aLs
2
3 Node number Node name Quorum Nodes up Total nodes GPFS state Remarks
4 ------------------------------------------------------------------------------------
5 1 gpfsnode01 2 3 6 active quorum node
6 2 gpfsnode02 2 3 6 active quorum node
7 3 gpfsnode03 2 3 6 active quorum node
8 4 gpfsnode04 0 0 6 unknown
9 5 gpfsnode05 2 3 6 active

10 6 gpfsnode06 2 3 6 active
11
12 Summary information
13 ---------------------
14 Number of nodes defined in the cluster: 6
15 Number of local nodes active in the cluster: 5
16 Number of remote nodes joined in this cluster: 0
17 Number of quorum nodes defined in the cluster: 3
18 Number of quorum nodes active in the cluster: 3
19 Quorum = 2, Quorum achieved

In this 6-node cluster example node 4 is down, therefore the GPFS state is unknown.
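If the node itself is reachable again (for example after a reboot), a first step is usually to start GPFS on it and re-check the state. This is only a sketch; the full recovery procedures are described in the cluster operations chapters:

1 # mmstartup -N gpfsnode04
2 # mmgetstate -N gpfsnode04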

2.2.3 GPFS Cluster configuration

General information about the configured GPFS cluster can be obtained with the following command.

1 # mmlscluster

The output will be similar to this (single node):


1 GPFS cluster information
2 ========================
3 GPFS cluster name: HANAcluster.gpfsnode01
4 GPFS cluster id: 13882347386011360211
5 GPFS UID domain: HANAcluster.gpfsnode01
6 Remote shell command: /usr/bin/ssh
7 Remote file copy command: /usr/bin/scp
8
9 GPFS cluster configuration servers:
10 -----------------------------------
11 Primary server: gpfsnode01
12 Secondary server: (none)
13
14 Node Daemon node name IP address Admin node name Designation
15 -----------------------------------------------------------------------------------------------
16 1 gpfsnode01 192.168.10.101 gpfsnode01 quorum
17
18 or in a scaleout configuration:
19
20 GPFS cluster information
21 ========================
22 GPFS cluster name: HANAcluster.gpfsnode01
23 GPFS cluster id: 13882476028871870490
24 GPFS UID domain: HANAcluster.gpfsnode01
25 Remote shell command: /usr/bin/ssh
26 Remote file copy command: /usr/bin/scp
27
28 GPFS cluster configuration servers:
29 -----------------------------------
30 Primary server: gpfsnode01
31 Secondary server: gpfsnode02
32
33 Node Daemon node name IP address Admin node name Designation
34 -----------------------------------------------------------------------------------------------
35 1 gpfsnode01 192.168.118.101 gpfsnode01 quorum
36 2 gpfsnode02 192.168.118.102 gpfsnode02 quorum
37 3 gpfsnode03 192.168.118.103 gpfsnode03 quorum
38 4 gpfsnode04 192.168.118.104 gpfsnode04
39 5 gpfsnode05 192.168.118.105 gpfsnode05
40 6 gpfsnode06 192.168.118.106 gpfsnode06

2.2.4 Verify cluster settings

mmlsconfig shows the GPFS cluster settings and the configured file systems.

1 # mmlsconfig
2 Configuration data for cluster HANAcluster.gpfsnode01:
3 ------------------------------------------------------
4 myNodeConfigNumber 1
5 clusterName HANAcluster.gpfsnode01
6 clusterId 13882347386011360211
7 autoload yes
8 minReleaseLevel 3.4.0.7
9 dmapiFileHandleSize 32
10 dataStructureDump /tmp/GPFSdump
11 pagepool 16G
12 maxMBpS 2048
13 maxFilesToCache 4000
14 adminMode central
15
16 File systems in cluster HANAcluster.gpfsnode01:
17 -----------------------------------------------
18 /dev/sapmntdata

This is for information only; in normal cases it should not be necessary to change any of these settings. Only modify these settings if instructed by IBM support!


2.2.5 File system status

The file system parameters can be listed with the mmlsfs command.

1 # mmlsfs sapmntdata
2
3 flag value description
4 ------------------- ------------------------ -----------------------------------
5 -f 32768 Minimum fragment size in bytes
6 -i 512 Inode size in bytes
7 -I 32768 Indirect block size in bytes
8 -m 1 Default number of metadata replicas
9 -M 2 Maximum number of metadata replicas
10 -r 1 Default number of data replicas
11 -R 2 Maximum number of data replicas
12 -j cluster Block allocation type
13 -D nfs4 File locking semantics in effect
14 -k all ACL semantics in effect
15 -n 32 Estimated number of nodes that will mount file system
16 -B 1048576 Block size
17 -Q user;group;fileset Quotas enforced
18 none Default quotas enabled
19 --filesetdf No Fileset df enabled?
20 -V 12.10 (3.4.0.7) File system version
21 --create-time Thu May 31 18:27:25 2012 File system creation time
22 -u Yes Support for large LUNs?
23 -z No Is DMAPI enabled?
24 -L 4194304 Logfile size
25 -E Yes Exact mtime mount option
26 -S No Suppress atime mount option
27 -K whenpossible Strict replica allocation option
28 --fastea Yes Fast external attributes enabled?
29 --inode-limit 3000320 Maximum number of inodes
30 -P system;hddpool Disk storage pools in file system
31 -d data01node01;data02node01;data03node01 Disks in file system
32 -A yes Automatic mount option
33 -o none Additional mount options
34 -T /sapmnt Default mount point
35 --mount-priority 0 Mount priority

Again these settings are for your information only. The settings in bold should be the same as in this example; in normal circumstances they should not have been modified. If in doubt, contact SAP / IBM support.

2.2.6 Disk status

The disk status of the NSDs of the GPFS file system can be verified with mmlsdisk. The modifier "-e" will list any disks in error.

1 # mmlsdisk sapmntdata -e

The output should be:

1 All disks up and ready

If e.g. a GPFS node is down, the output in a HANA cluster might look like this:

1 disk driver sector failure holds holds storage
2 name type size group metadata data status availability pool
3 ------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
4 MDdata01node04 nsd 512 1004 Yes Yes ready down system
5 MDdata02node04 nsd 512 1004 Yes Yes ready down system
6 data04node04 nsd 512 1004 No Yes ready down hddpool
7 data05node04 nsd 512 1004 No Yes ready down hddpool
8 data06node04 nsd 512 1004 No Yes ready down hddpool
9 data07node04 nsd 512 1004 No Yes ready down hddpool
10 data08node04 nsd 512 1004 No Yes ready down hddpool
11 data09node04 nsd 512 1004 No Yes ready down hddpool

A single node with all disks running will look like this:


1 # mmlsdisk sapmntdata
2 disk driver sector failure holds holds storage
3 name type size group metadata data status availability pool
4 ------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
5 data01node01 nsd 512 1001 No Yes ready up hddpool
6 data02node01 nsd 512 1001 Yes Yes ready up system
7 data03node01 nsd 512 1001 Yes Yes ready up system

It is important that the status of the disks is "ready" and the availability is "up". If that is not the case, the cause must be investigated.
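As a sketch only: once the underlying cause has been resolved (for example a temporarily failed node is back online), disks still marked as down can usually be started again with mmchdisk, which also resynchronizes stale replicas. The detailed procedures are covered in the hard drive and cluster operations chapters:

1 # mmchdisk sapmntdata start -a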

2.2.7 Disk usage

The fill level of the disks is shown when using the GPFS implementation of the df command, mmdf.

1 # mmdf sapmntdata
2 disk disk size failure holds holds free KB free KB
3 name in KB group metadata data in full blocks in fragments
4 --------------- ------------- -------- -------- ----- -------------------- -------------------
5 Disks in storage pool: system (Maximum disk size allowed is 1.7 TB)
6 data02node01 191406080 1001 Yes Yes 189366272 ( 99%) 4384 ( 0%)
7 data03node01 191406080 1001 Yes Yes 189364224 ( 99%) 7680 ( 0%)
8 ------------- -------------------- -------------------
9 (pool total) 382812160 378730496 ( 99%) 12064 ( 0%)
10
11 Disks in storage pool: hddpool (Maximum disk size allowed is 18 TB)
12 data01node01 1943262560 1001 No Yes 1934685184 (100%) 42272 ( 0%)
13 ------------- -------------------- -------------------
14 (pool total) 1943262560 1934685184 (100%) 42272 ( 0%)
15
16 ============= ==================== ===================
17 (data) 2326074720 2313415680 ( 99%) 54336 ( 0%)
18 (metadata) 382812160 378730496 ( 99%) 12064 ( 0%)
19 ============= ==================== ===================
20 (total) 2326074720 2313415680 ( 99%) 54336 ( 0%)
21
22 Inode Information
23 -----------------
24 Number of used inodes: 9410
25 Number of free inodes: 494398
26 Number of allocated inodes: 503808
27 Maximum number of inodes: 3000320

2.2.8 Quotas

The next important check that should be done is to verify how far the data and log paths of SAP HANA are filled. Since these are in the same file system, the applied quotas need to be checked.

1 # mmrepquota -j -v -a -q --block-size=G
2
3 sapmntdata USR quota is on; default quota is off
4 sapmntdata GRP quota is on; default quota is off
5 sapmntdata FILESET quota is on; default quota is off
6 *** Report for FILESET quotas on sapmntdata
7 Block Limits | File Limits
8 Name type GB quota limit in_doubt grace | files quota limit in_doubt grace ←↩
↪→entryType
9 root FILESET 8 0 0 1 none | 5333 0 0 20 none i
10 hanadata FILESET 1 768 768 1 none | 12 0 0 19 none e
11 hanalog FILESET 4 256 256 1 none | 24 0 0 19 none e

hanadata and hanalog are implemented as GPFS filesets to allow quotas to be used.
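The filesets and their junction paths can be listed with the standard GPFS fileset listing; shown as a sketch only, the exact output depends on the installation:

1 # mmlsfileset sapmntdata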


2.2.9 SAP HANA Log Files Storage

Note: This section does not apply to SSD or Gen 2 XS/S models (see Appendix E: IBM Machine Type Model Code (MTM) to SAP HANA T-Shirt Size Mapping) as SAP HANA data and log files are written to the same GPFS storage pool.

Note: Starting with appliance version [1.6] the problem described in this chapter will no longer occur if the system is set up and maintained correctly.

2.2.9.1 Check for misplaced SAP HANA log files

-[1.5.46] Appliance versions up to and including 1.5.46 need to update the preinstalled IBM support script in order to get a version capable of detecting misplaced HANA log files. You can fetch the most recent version from SAP Note 1661146 (https://service.sap.com/sap/support/notes/1661146). After updating the script follow the instructions below.

[1.5.53]+ Starting with this appliance version the saphana-support-ibm.sh script can detect misplaced log files if called with the options check (-c) and exhaustive (-e).

Execute

1 # saphana-support-ibm.sh -c -e

and watch for warnings at the end of the output. If the warning message "SAP HANA log files in wrong GPFS storage pool" appears, please read sections "Update GPFS Policy File" and "Check manually for misplaced SAP HANA log files".

2.2.9.2 Check manually for misplaced SAP HANA log files

1. [1.3.x]-[1.4.x] Please update the policy file. Follow the instructions in Section 2.2.9.3: Update GPFS Policy File.

2. To get the number of misplaced log segments issue this line

1 # find /sapmnt/log/ -type f |xargs mmlsattr -L |grep "hddpool" |wc -l

The number returned is the number of wrongly placed SAP HANA log segments and should be zero. If this is the case, no further actions are required.

3. If there are misplaced files you have to free up space in /sapmnt/log by deleting no longer needed log files. How this can be achieved is described in SAP's SAP HANA Administration Guide. To get the size of the system pool use

1 # mmlspool --block-size G sapmntdata

and check the value "Total Data" for the system pool. Note that 5% are reserved and not usable. You need to decrease the space occupied by files in /sapmnt/log below this number to be able to migrate all log segments back into the system pool, but it is a good idea to free up as much space as possible.

4. You can test if enough space is available with the command

1 # mmapplypolicy /sapmnt/log -I test

Search for the lines



1 [I] GPFS Policy Decisions and File Choice Totals:
2 Chose to migrate 52428800KB: 50 of 50 candidates;

If all of the candidates are chosen, all files will be moved back into the system pool; if not, delete more unneeded log segments and retry.

5. After freeing up enough space migrate the files back:

1 # mmapplypolicy /sapmnt/log

2.2.9.3 Update GPFS Policy File

[1.3.x]-[1.4.x] The preinstalled GPFS file system policy needs to be updated. Open /var/mmfs/config/policy.txt with your favorite text editor and add

1 RULE 'migrate logs back' MIGRATE FROM POOL 'hddpool' TO POOL 'system' LIMIT(95) FOR FILESET(hanalog)

as one line.

Update the policy

1 # mmchpolicy sapmntdata /var/mmfs/config/policy.txt

This change is permanent and needs to be done only once on one active node.

2.2.9.4 Background

[1.3]+ Although only one GPFS file system is created, SAP HANA data files and log files are stored in different places. Data is written to the HDDs, log files are stored on either SSDs or IBM High IOPS drives. This separation is done by using GPFS storage pools, filesets and a GPFS policy installed into the file system which routes data to the correct pool.

In some combinations of model type, size of the High IOPS drive and the server usage (clustered or single) there is a gap between the capacity of the GPFS pool containing the flash-memory-based drives and the allowed quota for SAP HANA logs. If more log files are stored into this pool than the drives can store, the excess files will be stored on the HDDs as long as the quota is not exceeded. When the quota is exceeded SAP HANA will stop.

During normal operation, SAP HANA should never fill up this dedicated space as long as the SAP HANA "log_mode" is set up correctly to either reuse old unneeded log files or to back up these log files and subsequently delete them. More information can be found in SAP Note 1642148 - "FAQ: SAP HANA Database Backup & Recovery" in question "What general configuration options do I have for saving the log area?"
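As a sketch of how to verify this setting from the command line: if log_mode has been customized it will appear in the customer-specific global.ini (the SID TRK is taken from the examples above, the path is the usual location of the customized configuration; adjust both to your system). If nothing is returned, the installation default applies; the parameter can also be checked in SAP HANA Studio.

1 # su - trkadm -c "grep log_mode /usr/sap/TRK/SYS/global/hdb/custom/config/global.ini"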

Files written to the HDD pool won't be automatically moved back into the system pool when space gets freed there. This must be done manually.

2.3 SAP HANA Application status

In most cases the persons performing OS specific tasks will not be able to determine the status of the SAP HANA application in detail. However, some checks can be performed to determine at least if the application is running from a system point of view.


2.3.1 Command line

Check if the SAP Hostagent is running:

1 # /etc/init.d/sapinit status
2 saphostexec running (pid = 8305)
3 sapstartsrv running (pid = 8322)
4 13:59:31 13.06.2012 LOG: Using PerfDir (DIR_PERF) = /usr/sap/tmp
5 saposcol running (pid = 8495)
6 pid's (8322 8402)
7 running

saphostexec and saposcol should be running. If the output is only "running", an old version of the sapinit script is in use!

Check if the SAP HANA processes are started.

1. User: <SID>adm (this example uses SID TRK and instance number 69)

2. Command: HDB proc

Output:

1 <hostname>:HDB:trkadm ~ 13> HDB proc
2
3 USER PID PPID %CPU VSZ RSS COMMAND
4 trkadm 2510 2132 25.0 12736 1676 \_ /bin/sh /usr/sap/TRK/HDB69/HDB proc
5 trkadm 2538 2510 0.0 12736 664 \_ /bin/sh /usr/sap/TRK/HDB69/HDB proc
6 trkadm 8475 1 0.0 43640 1856 sapstart pf=/usr/sap/TRK/SYS/profile/TRK_HDB69_hananode01
7 trkadm 8489 8475 0.0 484960 152560 \_ /usr/sap/TRK/HDB69/hananode01/trace/hdb.sapTRK_HDB69 -d -nw -f /usr/sap/TRK/←↩
↪→HDB69/hananode01/daemon.ini pf=/usr/sap/TRK/SYS/profile/TRK_HDB69_hananode01
8 trkadm 8521 8489 0.2 9737268 1611184 \_ hdbnameserver
9 trkadm 8645 8489 0.0 8535164 470412 \_ hdbpreprocessor
10 trkadm 8669 8489 1.0 11036400 3541240 \_ hdbindexserver
11 trkadm 8676 8489 0.8 10678476 2768400 \_ hdbstatisticsserver
12 trkadm 8691 8489 0.3 10972796 2207432 \_ hdbxsengine
13 trkadm 9082 8489 0.0 418548 43136 \_ sapwebdisp_hdb pf=/usr/sap/TRK/HDB69/hananode01/wdisp/sapwebdisp.pfl -f /←↩
↪→usr/sap/TRK/HDB69/hananode01/trace/dev_webdisp
14 trkadm 8402 1 0.0 287668 106876 /usr/sap/TRK/HDB69/exe/sapstartsrv pf=/usr/sap/TRK/SYS/profile/←↩

↪→TRK_HDB69_hananode01 -D -u trkadm

The hdb processes (hdbnameserver, hdbpreprocessor, hdbindexserver, hdbstatisticsserver and hdbxsengine) should be running in a single node environment. In a scaleout configuration this command should be run on every node of the SAP HANA cluster. This can be done e.g. with mmdsh (GPFS distributed shell command):

1 # mmdsh "su - flyadm -c 'HDB info'"

This will return an output similar to the above for every node. The difference is that the hdbstatisticsserver and the hdbxsengine are only running on the master node of the SAP HANA cluster.

1 <hostname>:HDB:flyadm /usr/sap/FLY/HDB59 3> HDB proc
2 USER PID PPID %CPU VSZ RSS COMMAND
3 flyadm 47047 46972 14.2 12736 1676 \_ /bin/sh /usr/sap/FLY/HDB59/HDB proc
4 flyadm 47075 47047 0.0 12736 664 \_ /bin/sh /usr/sap/FLY/HDB59/HDB proc
5 flyadm 32579 1 0.0 43632 1828 sapstart pf=/sapmnt/FLY/profile/FLY_HDB59_hananode02
6 flyadm 32654 32579 0.0 483716 152516 \_ /usr/sap/FLY/HDB59/hananode02/trace/hdb.sapFLY_HDB59 -d -nw -f /usr/sap/FLY/←↩
↪→HDB59/hananode02/daemon.ini pf=/usr/sap/FLY/SYS/profile/FLY_HDB59_hananode02
7 flyadm 32674 32654 0.5 12628040 588640 \_ hdbnameserver
8 flyadm 32701 32654 0.0 10940272 528308 \_ hdbpreprocessor
9 flyadm 32726 32654 0.6 16411080 2579248 \_ hdbindexserver
10 flyadm 32500 1 0.0 287880 107056 /sapmnt/FLY/HDB59/exe/sapstartsrv pf=/sapmnt/FLY/profile/FLY_HDB59_hananode02 -D←↩
↪→ -u flyadm


2.3.2 HDB Admin tool

The HDB admin tool is a Python based X-application installed on the SAP HANA server. Its look-and-feel is similar to the "TREX Admin tool (Stand-alone)" which is used in SAP TREX and SAP BW accelerator. It is run as <SID>adm and can be used to get a quick overview of the current SAP HANA landscape.

1 <hostname>:HDB:trkadm ~ 13> HDB admin

Figure 2: HDB Admin System Overview

The overview section shows the overall status at a glance. Services, memory, CPU and disk should be green. It is very likely that there are some messages from the alert server; in this case there might be a yellow or red condition in the alert section. They can be investigated in the alert section of the tool.

In the services section the roles of the nodes can be checked and whether the nodes are active or not.


Figure 3: HDB Admin Landscape tab for services


2.3.3 SAP HANA Studio

SAP HANA Studio is the tool for the administration of the SAP HANA database. Since SPS 04 it is installed on the SAP HANA server as well, so it can be used even by the system administrator to check the status of the system if necessary. SAP HANA Studio is a complex, Eclipse(c) based application; documentation can be found at http://help.sap.com/hana → SAP HANA Technical Operations Manual or SAP HANA Database Administration Guides.

The program is started e.g. as user root:

1 # /usr/sap/hdbstudio/hdbstudio

The SAP HANA system needs to be added to the navigation bar. Afterwards, a couple of actions can be performed. These include administration, life cycle management and backup & restore.

Figure 4: HDB Studio: Add System

The administration task offers an overview section similar to the one seen in the HDB Admin tool:

Again this gives a general overview of the health status of the SAP HANA system. If all buttons are green it indicates that from an infrastructure point of view the system is fine.

2.4 Network

2.4.1 Switch Access

Log on to the G8264R switch with the admin userid via ssh (ssh admin@<switch ip>) or telnet and enter the admin password:

1 login as: admin
2 Using keyboard-interactive authentication.
3 Enter password:
4

5 IBM Networking Operating System RackSwitch G8264.


Figure 5: HDB Studio: System Overview

6

7

8 G8264>
9 Oct 24 9:54:51 G8264 NOTICE mgmt: admin(admin) login from host a.b.c.d

2.4.2 Privilege mode

Enter privilege mode to access advanced management and troubleshooting commands.

1 G8264>enable
2

3 Enable privilege granted.

2.4.3 System information

Use show system to get basic system and health information.

1 G8264#show system
2 System Information at 14:28:18 Fri Oct 26, 2012
3 Time zone: Europe/Germany
4 Daylight Savings Time Status: Enabled
5

6 IBM Networking Operating System RackSwitch G8264
7

8 Switch has been up for 14 days, 22 hours, 30 minutes and 28 seconds.
9 Last boot: 7:22:03 Fri Feb 3, 2000 (reset from Telnet/SSH)

10

11 MAC address: xx:xx:xx:xx:xx:xx IP (If 1) address: 0.0.0.0
12 Management Port MAC Address: xx:xx:xx:xx:xx:xx
13 Management Port IP Address (if 128): a.b.c.d
14 Hardware Revision: 0


15 Hardware Part No: BAC-00065-00
16 Switch Serial No: xxxxxxxxxxxx
17 Manufacturing date: 12/30
18

19 MTM Value: 7309-HC3
20 ESN: xxxxxxx
21 Software Version 7.4.1.0 (FLASH image1), active configuration.
22

23

24

25 Temperature Mother Top: 47 C
26 Temperature Mother Bottom: 37 C
27 Temperature Daughter Top: 44 C
28 Temperature Daughter Bottom: 33 C
29

30 Warning at 75 C and Recover at 90 C
31

32 Fan 1 in Module 1: RPM= 9407 PWM= 15( 5%) Back-To-Front
33 Fan 2 in Module 1: RPM= 4787 PWM= 15( 5%) Back-To-Front
34 Fan 3 in Module 2: RPM= 9764 PWM= 15( 5%) Back-To-Front
35 Fan 4 in Module 2: RPM= 4109 PWM= 15( 5%) Back-To-Front
36 Fan 5 in Module 3: RPM= 9694 PWM= 15( 5%) Back-To-Front
37 Fan 6 in Module 3: RPM= 4576 PWM= 15( 5%) Back-To-Front
38 Fan 7 in Module 4: RPM= 9782 PWM= 15( 5%) Back-To-Front
39 Fan 8 in Module 4: RPM= 3837 PWM= 15( 5%) Back-To-Front
40

41 System Fan Airflow: Back-To-Front
42

43

44 Power Supply 1: OK
45 Power Supply 2: OK
46

47 Power Faults: ()
48 Fan Faults: ()
49 Service Faults: ()
50

51 ...

2.4.4 FW image / Boot image information

Use show boot to display active FW version and boot options.

1 G8264#show boot
2 Currently set to boot software image1, active config block.
3 NetBoot: disabled, NetBoot tftp server: , NetBoot cfgfile:
4 USB Boot: disabled
5 Currently profile is default, set to boot with default profile next time.
6 Current CLI mode set to ISCLI with selectable prompt disabled.
7 Current FLASH software:
8 image1: version 7.4.1
9 NormalPanel

10 image2: version 7.2.4
11 NormalPanel


12 boot kernel: version 7.4.113 Currently scheduled reboot time: none

2.4.5 Display Switch configuration

show running-config displays the switch configuration.

G8264#show running
Current configuration:
!
version "7.4.1"
switch-type "IBM Networking Operating System RackSwitch G8264"
!
system timezone 299
! Europe/Germany
system daylight
!
ssh enable
!

!
!
line vty length 0
line console length 0
no system dhcp
no system default-ip
hostname "G8264"
system idle 60

...

2.4.6 Check interfaces

This command lists all ports with their basic configuration info. Important are the columns 'Tag', 'PVID' and 'VLAN(s)', which show all VLANs a port belongs to. The ISL ports need to be configured for the ISL VLAN as well as for the GPFS and SAP HANA VLANs in order to support the vLAG configuration for high availability.

G8264>show interface information
Alias   Port Tag Type       RMON Lrn Fld PVID   NAME           VLAN(s)
------- ---- --- ---------- ---- --- --- ------ -------------- -------------------
1       1    n   External   d    e   e   1                     1
5       5    n   External   d    e   e   1                     1
9       9    n   External   d    e   e   1                     1
13      13   n   External   d    e   e   1                     1
17      17   n   External   d    e   e   100    GPFS           100
18      18   n   External   d    e   e   101    HANA           101
19      19   n   External   d    e   e   100    GPFS           100
20      20   n   External   d    e   e   101    HANA           101
...
62      62   n   External   d    e   e   1                     1
63      63   y   External   d    d   e   100    ISL            100 101 4094
64      64   y   External   d    d   e   100    ISL            100 101 4094
MGT     65   n   Mgmt       d    e   e   4095                  4095

* = PVID is tagged.
# = PVID is ingress tagged.


2.4.7 Check links

The link status listing gives a quick overview of each link and shows if it is up or down and at which speed it operates. In the example below the ports 1-16 are empty, ports 17-48 are occupied with a 10GigE transceiver serving a vLAG configuration. The ports 63 and 64 are configured as ISLs with a 10GigE transceiver as well. Port MGT/65 is the dedicated management port with an RJ45 connector.

G8264>show interface link
------------------------------------------------------------------
Alias   Port Speed  Duplex   Flow Ctrl     Link   Name
------- ---- -----  -------- --TX-----RX-- ------ ------
1       1    40000  full     no     no     down   1
5       5    40000  full     no     no     down   5
9       9    40000  full     no     no     down   9
13      13   40000  full     no     no     down   13
17      17   10000  full     no     no     up     GPFS
18      18   10000  full     no     no     up     HANA
19      19   10000  full     no     no     up     GPFS
20      20   10000  full     no     no     up     HANA
21      21   10000  full     no     no     up     GPFS
...
44      44   10000  full     no     no     up     HANA
45      45   10000  full     no     no     up     GPFS
46      46   10000  full     no     no     up     HANA
47      47   10000  full     no     no     up     GPFS
48      48   10000  full     no     no     up     HANA
49      49   1G/10G full     no     no     down   49
50      50   1G/10G full     no     no     down   50
...
61      61   1G/10G full     no     no     down   61
62      62   1G/10G full     no     no     down   62
63      63   10000  full     no     no     up     ISL
64      64   10000  full     no     no     up     ISL
MGT     65   100    full     yes    yes    up     MGT

2.4.8 Check vlans

With the 'show vlan' command you get a quick overview of the configured VLANs on the switch.

The following VLANs are defined in the example below:

• VLAN 100 for internal GPFS connections

• VLAN 101 for internal SAP HANA connections

• VLAN 4094 for ISL (Inter Switch Link)

• VLAN 4095 used for the Management

G8264>show vlan
VLAN Name                           Status Ports
---- ------------------------------ ------ -------------------------
1    Default VLAN                   ena    1 5 9 13 49-62
100  GPFS                           ena    17 19 21 23 25 27 29 31 33 35
                                           37 39 41 43 45 47 63 64
101  HANA                           ena    18 20 22 24 26 28 30 32 34 36
                                           38 40 42 44 46 48 63 64
4094 ISL                            ena    63 64
4095 Mgmt VLAN                      ena    MGT

2.4.9 Check transceivers

The transceiver status and the type of transceiver can be important for problem analysis. For example, unsupported SFPs will be indicated.

G8264>show transceiver

Name             TX  RXLos TXFlt Volts DegsC TXuW  RXuW  Media   WavLen Approval
---------------- --- ----- ----- ----- ----- ----- ----- ------- ------ --------
1  QSFP+ 1            < NO Device Installed >
5  QSFP+ 2            < NO Device Installed >
9  QSFP+ 3            < NO Device Installed >
13 QSFP+ 4            < NO Device Installed >
17 SFP+ 1        Ena  LINK  no    3.24  42.0  558.4 495.6 SR SFP+ 850nm  Approved
     Blade Network  Part:BN-CKM-SP-SR  Date:120620  S/N:AD1224A09R2
18 SFP+ 2        Ena  LINK  no    3.29  39.5  561.2 612.1 SR SFP+ 850nm  Approved
     Blade Network  Part:BN-CKM-SP-SR  Date:120403  S/N:AA1212AM8G0
...

2.4.10 Check vLAG / LACP information

Check whether the LACP status of all ports is 'up' and 'selected'. The LACP key is only of local significance (the other side of the link can use a different key), but in order to keep the configuration clear and simple the same key value is used on both switches.

G8264>show lacp information
port  mode    adminkey operkey selected prio  aggr trunk status minlinks
---------------------------------------------------------------------------------
1     off     1        1       no       32768 --   --    --     1
5     off     5        5       no       32768 --   --    --     1
9     off     9        9       no       32768 --   --    --     1
13    off     13       13      no       32768 --   --    --     1
17    active  1017     1017    yes      32768 17   69    up     1
18    active  1018     1018    yes      32768 18   67    up     1
19    active  1019     1019    yes      32768 19   68    up     1
20    active  1020     1020    yes      32768 20   97    up     1
...
61    off     61       61      no       32768 --   --    --     1
62    off     62       62      no       32768 --   --    --     1
63    active  200      200     yes      32768 63   65    up     1
64    active  200      200     yes      32768 63   65    up     1

The vLAG state should be 'formed'. The states 'local up' or 'remote up' indicate that the partner switch or the own side has problems forming the link aggregation.

G8264>show vlag information
vLAG Tier ID: 1
vLAG system MAC: 08:17:f4:c3:dd:00
Local MAC 74:99:75:12:50:00 Priority 0 Admin Role SECONDARY (Operational Role PRIMARY)
Peer  MAC 74:99:75:0a:8d:00 Priority 0
Health local 10.21.199.123 peer 10.21.199.124 State UP
ISL trunk id 65
ISL state Up
Startup Delay Interval: 120s (Finished)

vLAG 65: config with admin key 1017, associated trunk 69, state formed

vLAG 66: config with admin key 1018, associated trunk 67, state formed

...

2.5 Additional Tools for System Checks

2.5.1 IBM Advanced Settings Utility

In some cases it might be useful to check the UEFI settings of the HANA servers. Therefore, the saphana-support-ibm.sh script uses the IBM Advanced Settings Utility (ASU), if it is installed, and prints out warnings if there is a misconfiguration.

Download the latest Linux 64-bit RPM from http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-ASU and install the RPM.


2.5.2 IBM ServeRAID MegaCli Utility for Storage Management

The saphana-support-ibm.sh script also analyzes the status of the IBM ServeRAID controllers and the controller-internal batteries to check whether the controllers are in a working and performing state.

To activate this feature the IBM ServeRAID MegaCLI (Command Line) Utility for Storage Management software must be installed. Go to https://ibm.com/support/entry/myportal/docdisplay?Indocid=migr=5087144, download the file locally and install the RPMs.

2.5.3 IBM SSD Wear Gauge CLI utility

For models of the IBM Systems Solution for SAP HANA appliance that come with SSDs (this means first generation SSD models and second generation XS and S models), it might be useful to check the state of the SSDs.

Go to http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5090923 and download the latest binary of the IBM SSD Wear Gauge CLI utility (ibm_utl_ssd_cli-<version>_linux_32-64.bin). Copy it to the machine to be checked.

Copy the bin file into /opt/ibm/ssd_cli/:

# mkdir -p /opt/ibm/ssd_cli/
# cp ibm_utl_ssd_cli-*_linux_32-64.bin /opt/ibm/ssd_cli/
# chmod u+x /opt/ibm/ssd_cli/ibm_utl_ssd_cli-*_linux_32-64.bin

Execute the binary:

# /opt/ibm/ssd_cli/ibm_utl_ssd_cli-*_linux_32-64.bin -u

Sample output:

1 PN:68Y7719-40K6897 SN:50301DEW FW:SA03SB6C
  Percentage of cell erase cycles remaining: 100%
  Percentage of remaining spare cells: 100%
  Life Remaining Gauge: 100%

2.5.4 IBM Systems Solution for SAP HANA Health Checker

The IBM Systems Solution for SAP HANA Health Checker, short Health Checker, is a command line tool to check the state of your IBM Systems Solution for SAP HANA appliance. It reviews all parts and aspects of your system, from the hardware through GPFS up to the SAP HANA appliance. The Health Checker is built into the Linux Health Checker framework (lnxhc, http://lnxhc.sourceforge.net).

The Health Checker is developed to be executed regularly by system administrators. For every detected misconfiguration or outdated software version it will display an explanation and steps to be taken to resolve the issues.

The IBM Systems Solution for SAP HANA Health Checker may only be installed on IBM Systems Solution for SAP HANA appliances.

Optionally you can decide to install three other tools: IBM ASU (see Section 2.5.1), LSI MegaCLI (see Section 2.5.2), and IBM SSD Wear Gauge CLI Utility for Linux (see Section 2.5.3). They enable three additional checks for your system.

The Linux Health Checker framework comes with a lot of checks optimized for System z. These checks get disabled by the IBM Systems Solution for SAP HANA Health Checker and only IBM and SAP HANA specific checks are activated. But you can still decide which checks you want to run on your system. Every check contains a number of different tests. Every test checks one part of the system. If a test fails it throws an exception with a certain severity. The severity can be:

• [high] a critical problem that should be solved as soon as possible

• [medium] a problem that can become critical or cost a lot of performance

• [low] a problem that only slightly affects the system

Every exception comes with four different sections:

• [summary] a short summary about the problem’s topic

• [explanation] a short explanation of the problem

• [solution] the next step to solve the problem

• [reference] a guide, a link or contact for further information

There are two different kinds of checks:

• [Short checks] which do not need a lot of time and focus on system configurations and easily accessible data.

• [Long checks] which consume a lot of time and can slow down your system. They focus on large amounts of data and their consistency and integrity.

2.5.4.1 Installation Download the Linux Health Checker RPM from Sourceforge and execute:

rpm -ivh <the lnxhc RPM>

Download the ZIP file from SAP Note 1898103 (https://service.sap.com/sap/support/notes/1898103) and execute:

unzip ibm-saphana-health_checker.zip
rpm -ivh ibm-saphana-health_checker-*.rpm

2.5.4.2 Update The Health Checker RPM will be updated at irregular intervals without prior notice. The updated RPM will be available in SAP Note 1898103.

Warning: Do not upgrade the RPM using the rpm -U command.

Download the ZIP file from SAP Note 1898103 and execute:

rpm -e ibm-saphana-health_checker
unzip ibm-saphana-health_checker.zip
rpm -ivh ibm-saphana-health_checker-*.rpm

2.5.4.3 Usage The main function of the Health Checker is to run checks. Before you run a health check you may choose one of two possible detail levels:

• level 1: This detail level runs all activated checks and prints out the result of each check. The result can either be SUCCESS for no problems with your system or EXCEPTION if a problem in your system was detected. Every exception comes with a severity level: low, medium or high. Every exception is followed by a short SUMMARY of the detected problem. You start this detail level check by entering:


lnxhc run

• level 2: This level of detail gives you much more information about check results and an overview of the entire health check. The exceptions in level 2 are much more detailed than exceptions in level 1. In addition to the SUMMARY it shows an EXPLANATION, a SOLUTION and a REFERENCE area for every exception.

There are three sections at the end of a health check:

– check results: A short overview of the number of successful checks, exceptions and total checks.

– Exceptions: A summary of the number of exceptions per severity level and exceptions in total.

– Run-time: A short roundup over the time used to execute the checks.

Start the Health Checker with the following command. (-V stands for verbose.)

lnxhc run -V

If you want to review some test results again, replay the health check by adding the -r parameter to the command:

lnxhc run -r -V

2.5.4.4 Schedule With the Linux scheduling mechanism crontab it is possible to schedule health checks on your system and send the results by mail. Your system must be able to send mails. Then follow these steps:

1. Open the crontab file:

crontab -e

2. Add two new lines with the following content:

MAILTO="[email protected]"
[minute] [hour] [day] [month] [weekday] path run -V

Replace the following parts:

(a) [email protected] → your actual e-mail address

(b) path → the installation path of the Linux Health Checker, most likely /usr/bin/lnxhc
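For example, assuming the Linux Health Checker is installed at /usr/bin/lnxhc and a verbose check should run every Monday at 06:00, the two lines could look like this (the mail address is a placeholder):

MAILTO="hana-admin@example.com"
0 6 * * 1 /usr/bin/lnxhc run -V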

2.5.4.5 Customize Sometimes it is necessary to deactivate or activate certain checks. Simply replace hana_* with the check you want to deactivate or activate. To deactivate a check type in:

lnxhc check -S inactive hana_*

To activate a check type in:

lnxhc check -S active hana_*

To get a full list of all installed checks and the activation status execute:

lnxhc check -l


3 Single Node Operations

3.1 Notice

A single node installation can be a single server or a VMware install.

3.2 Starting

GPFS and HANA start automatically when you boot the system. HANA has to start after GPFS. In case you have to start them manually, start GPFS first:

1. Start GPFS: call /usr/lpp/mmfs/bin/mmstartup -a (as root)

2. Start HANA: call /etc/rc.d/sapinit start (as root)

3.3 Stopping

The best way is to shut down the server. If you have to shut down HANA or GPFS, for example because you need to perform an update, you have to stop HANA first and then GPFS:

1. Stop HANA: call /etc/rc.d/sapinit stop (as root)

2. Stop GPFS: call /usr/lpp/mmfs/bin/mmshutdown -a (as root)

3.4 GPFS failure

Check everything as described in System Health Check -> GPFS Cluster Status. If needed, check the GPFS logs in /var/adm/ras/mmfs.log.latest.
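For a quick look at the most recent GPFS messages you can, for example, view the end of the log (a simple sketch):

# tail -n 50 /var/adm/ras/mmfs.log.latest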

3.5 HANA failure

Check everything as described in System Health Check -> SAP HANA Application Status.

3.6 Disk failure

Please see Chapter 6.3: Replacing failed hard drives on page 60. When replacing the disk, use the correct topology information as described in Appendix C: Topology Vectors (GPFS 3.5 failure groups) on page 102.

3.7 Network failure

Follow the usual routine you use for network problems on your other servers: check cables, switches, network cards, IP addresses, routing, name resolution and whether the correct drivers are loaded in the kernel.
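A few standard Linux commands cover most of these checks (a sketch; the interface name eth4 and the host name gpfsnode01 are examples and may differ on your system):

# ip addr show             # interfaces and IP addresses
# ip route show            # routing table
# ping -c 3 gpfsnode01     # name resolution and reachability of another node
# ethtool eth4             # link state and speed of a network card
# lsmod                    # check that the expected network drivers are loaded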


4 Cluster Operations

4.1 After server node failure actions

This section is only valid if a server is unavailable for a longer period, permanently unavailable or needs a reinstallation. For shorter outages (server reboots, network outages) see section 4.2: Temporary Node Failure on page 30.

In the case of a server failure SAP HANA will switch over to one of the SAP HANA standby nodes, so normal operation will continue. But as the normal GPFS setup stores only two replicas of data, a second failing node will cause data loss and service interruption. This makes it important to restore the data replication by either adding a new node to the GPFS cluster or by shrinking the size of the file system to the remaining nodes. Shrinking the file system should always be the first step after a node failure. After completing this step a replacement node can be provided. In both cases you have to perform a restripe operation in order to rebuild the replicas so you are protected against future server node failures again.

4.1.1 Recovering the GPFS file system

First you have to make sure that the file system is in a clean state, and then you have to rebuild the data replicas so that the cluster is protected against additional failures. You can issue the given commands from any of the remaining servers within the cluster.

1. (on any active node) Remove the missing devices of the failed system from the GPFS configuration. GPFS will refuse to repair the file system as long as hard drives belonging to the file system are down, so you have to remove these drives. First find the missing hard drives with the command mmlsdisk:

# mmlsdisk sapmntdata -e

When using the parameter -e only missing drives will be shown. A missing drive entry will look like this:

nsddata01gpfs1 nsd 512 1003 no yes ready down system

Remove the missing hard drives with the command:

# mmdeldisk sapmntdata "hdd1;hdd2;...;hddX"

replacing hdd1..hddX with the actual drive names as shown by mmlsdisk (a sketch for generating this list automatically follows after the sample output). mmdeldisk will implicitly perform a restripe operation and repair the replication.

Output will look like this:

Deleting disks ...
Scanning system storage pool
Scanning file system metadata, phase 1 ...
Scan completed successfully.
Scanning file system metadata, phase 2 ...
Scan completed successfully.
Scanning file system metadata, phase 3 ...
Scan completed successfully.
Scanning file system metadata, phase 4 ...
Scan completed successfully.
Scanning user file metadata ...
   0,07 % complete on Thu Dec 1 10:42:41 2011 (  86740 inodes    998 MB)
   0,41 % complete on Thu Dec 1 10:43:02 2011 ( 122434 inodes   5965 MB)
  37,49 % complete on Thu Dec 1 10:53:08 2011 ( 494189 inodes 542234 MB)
 100.00 % complete on Thu Dec 1 10:53:18 2011
Scan completed successfully.
Could not invalidate disk(s).
Checking Allocation Map for storage pool 'system'
tsdeldisk completed.
mmdeldisk: Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.
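The semicolon-separated list of down disks can also be generated automatically, following the same mmlsdisk/awk pattern used later in this guide (a sketch; verify the resulting list before passing it to mmdeldisk):

# mmlsdisk sapmntdata -e | grep " down " | awk '{print $1}' | tr '\n' ';'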


At this point the data is fully replicated and protected against future server failures.

2. (on any active node) Check the file system for consistency and correct any errors

Issue the command

# mmfsck sapmntdata -o

If lost blocks are found answer y to the question "Correct the allocation map?". Output of mmfsck may look like:

Checking "sapmntdata"
Checking inodes

Lost blocks were found.
Correct the allocation map? yes

  1071957376 subblocks
   373914484 allocated
          63 unreferenced
           0 deallocated

     4036662 addresses
           0 suspended

File system contains lost blocks.
Exit status 0:10:8.

3. (on any active node) Remove the failed disks from the cluster

The GPFS cluster still tries to use the missing drives. Remove them with the command

# mmdelnsd "hdd1;hdd2;..;hddX"

and replace hdd1..hddX with the actual NSD names. Check afterwards with the command

# mmlsnsd

that the disks have been removed from the cluster.

4. (on any active node) Transfer server license

In a cluster with GPFS 3.5.0-7 and later there are server licenses and FPO licenses. Each GPFS cluster needs at least one valid server license on one of the (running) servers.

The FPO license is incompatible with the server roles "quorum node" and "configuration server". If you want to put these roles on an active server with an FPO license, you need to move the server license to this node. Check the licenses with the command

# mmlslicense -L

Give a "server license" to the designated node. Do this only when the failed node was either a quorum node or a configuration server (see next step).

# mmchlicense server --accept -N <node>

5. (on any active node) Demote GPFS node

This step is only needed if the server was the primary or secondary server of the GPFS cluster. You can skip this step if this is not the case. You can obtain the names of the configuration servers with the command


# mmlscluster

If the server is one of the configuration servers, you have to promote a different server within the cluster to the given role. You can set the primary server with the command

# mmchcluster -p <node>

and the command for the secondary server is

# mmchcluster -s <node>

with <node> being the name of the GPFS node to take over that role. Sample output:

mmchcluster: GPFS cluster configuration servers:
mmchcluster:   Primary server:   gpfsnode01
mmchcluster:   Secondary server: gpfsnode03
mmchcluster: Propagating the new server information to the rest of the nodes.
mmchcluster: Command successfully completed

6. (on any active node) Remove the failed node from the cluster

Remove the node from the GPFS cluster with

# mmdelnode -N gpfsnodeXX

where gpfsnodeXX is the name of the failed cluster node. Make sure that the failed server is down (the cluster cannot ping it) or the command will fail.

7. (on any active node) Recalculate Quota

Please see Appendix D.2: Pre-calculated Quotas on page 104 to find the correct values. Since appliance version 1.6, there is a helper script called saphana-quota-calculator.sh. In a standard single node or cluster install you can simply execute it and use the calculated values. When in doubt or in any other case, please read Appendix D: Quotas on page 103.

Set the quotas with the commands

mmsetquota -j hanalog -h (logsize)G -s (logsize)G sapmntdata
mmsetquota -j hanadata -h (datasize)G -s (datasize)G sapmntdata

[DR] In a DR-enabled cluster, please set only the quota for the HANA logs (hanalog). For more information please refer to Appendix D: Quotas on page 103.
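Purely as an illustration, the filled-in commands could look like this (the sizes below are made-up example values; always use the values reported by saphana-quota-calculator.sh or Appendix D for your configuration):

mmsetquota -j hanalog -h 512G -s 512G sapmntdata
mmsetquota -j hanadata -h 2048G -s 2048G sapmntdata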

4.1.2 Removing the SAP HANA node

After protecting the GPFS file system against additional failures you can remove the failed SAP HANA node in a clean fashion.

Warning: The method described here is only valid for SPS05 and later.

1. Verify that all SAP HANA volumes are assigned to active servers.

Start the HDBAdmin tool and go to Landscape. The columns on the right side of the table show the SAP HANA data volumes and the servers assigned to them. Verify that none of them has a red icon indicating a lost volume (volume without active server) and that the failed SAP HANA node is no longer an active server. (See Figure 6: HANA Data Volumes on page 29.)


Figure 6: HANA Data Volumes

2. Remove the server from the SAP HANA landscape.

In the HDBAdmin tool go to Configuration → Topology and expand the tree entry "Host". Right-click on the failed host and select Delete Node.

4.1.3 Remove SSH Keys of failed node

If a node that is no longer accessible is to be removed from the cluster, you only need to delete the known_hosts entries on all remaining cluster nodes:

Execute on all nodes

# sed -i '/^gpfsnodeXX/d;/^hananodeXX/d' /root/.ssh/known_hosts

with XX being the node number of the removed node. If gpfsnode and hananode are not used as hostnames, delete the corresponding lines manually in /root/.ssh/known_hosts on all servers of the cluster.
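If mmdsh is configured, the same cleanup can be executed on all remaining nodes at once (a sketch, assuming the default gpfsnodeXX/hananodeXX naming; replace XX with the node number of the removed node):

# mmdsh "sed -i '/^gpfsnodeXX/d;/^hananodeXX/d' /root/.ssh/known_hosts"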

4.1.4 Installing replacement node

Refer to section 4.3: Adding a cluster node on page 31 and add the new node as a standby node regardless of which configured type the failed node was.


4.2 Temporary Node Failure

If there was only a temporary server failure like a server reboot or other maintenance downtime and the node was recovered without reinstallation, you can enable the server node again in the GPFS and SAP HANA cluster without going through a remove and add node cycle.

4.2.1 GPFS

The server will stay in the GPFS cluster until removed by the user. After the error condition is solved the server should automatically resume operation in the cluster. However, disks shared by this server will stay in the state "down" and must be started manually.

1. (on any active node) Verify Cluster status

# mmgetstate -a

should show that the failed node is either in the active or arbitrating state. Wait until the node is active before continuing.

2. (on any active node) Activate Disks

If a disk is not accessible for some time, GPFS will set the availability of the disk to "down". Use the command

# mmlsdisk sapmntdata -e

to determine if disks are unavailable to the cluster. Any disk shown is unavailable.

If disks are marked as down, start them with the command

# mmchdisk sapmntdata start -a

If GPFS cannot start all disks at once, try to start as many NSDs as possible with the command

# mmchdisk sapmntdata start -d "nsdname"

3. (on any active node) Check file system integrity

As soon as the disks are available, run the file system test and correct all errors

# mmfsck sapmntdata -o

When asked whether to correct file system errors, answer y.

4. (on any active node) Repair data replication

Repair the data replication using

# mmrestripefs sapmntdata -r

This will ensure that the data is protected again against future disk or node failure.

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

5. (on any active node) Mount sapmntdata file system

If the sapmntdata file system is not mounted in /sapmnt/, mount it manually with

# mmmount sapmntdata -a


or

# mmmount sapmntdata -N gpfsnodeXX

At this point the GPFS file system is protected against future failures.

4.2.2 SAP HANA

When the node failed, a standby SAP HANA node took over. The failed node is still part of the cluster and will be available again when the SAP HANA services are started, so no further actions are necessary except starting SAP HANA.

1. (on the failed node) Start SAP HANA

Log on as <SID>adm and start SAP HANA on the server with

# HDB start

Note that this SAP HANA node may become a standby node even if it was configured as a worker node; this is normal SAP HANA behavior.

4.3 Adding a cluster node

4.3.1 Server Source (Existing Cluster, Existing Single Node, New Server)

The following steps describe how to add either a freshly installed server or an existing server previously used in a different cluster. For a new server, a certified HANA installation technician needs to install the server without starting the GPFS configuration and HANA installation. This means that all necessary drivers and GPFS are installed and that the firmware of the different components is on the correct level.

Alternatively you can add a server previously used in another cluster, as long as the server was removed cleanly from the old cluster.

Please make sure that the appliance version of the new server matches the appliance version of the existing cluster nodes and that the installed software versions (SLES for SAP, i.e. SUSE Linux Enterprise Server for SAP Applications; firmware; RAID driver; High IOPS; GPFS and HANA) are the same.

Execute the following command on a server that is already in the HANA cluster to check your appliance version:

rpm -q ibm-saphana-ipla | cut -d'-' -f4-

Adding a new server to the cluster is a two-phase process: first you have to add the server to the GPFS cluster, and then you can add it to the SAP HANA cluster.

4.3.2 GPFS

When not noted otherwise, all commands can be run from any active node within the cluster.

1. (on any active node) Distribute host names

On the freshly installed server edit the file /etc/hosts and add entries for all nodes in the cluster. Each line has the following format:

192.168.10.101 gpfsnode01 gpfsnode01


The first column is the IP address followed by the plain host name twice. Note that there is no domain set for these special names as these IP addresses and names are used only in the internal networks of the appliance. For each host in the cluster create two entries

192.168.10.1XX gpfsnodeXX gpfsnodeXX
192.168.20.1XX hananodeXX hananodeXX

and replace XX with the two-digit number of the node within the cluster.
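For example, for a fourth node with the default internal addresses used elsewhere in this guide, the two entries would be:

192.168.10.104 gpfsnode04 gpfsnode04
192.168.20.104 hananode04 hananode04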

2. Verify that there are entries for the new server on all other nodes within the cluster and entries for all existing nodes on the new server.

3. SSH-Key Exchange

The following commands use the default data for the fourth node as an example; adapt the addresses to the real addresses. A new node with the designated internal host names gpfsnode04 & hananode04 and the corresponding IPv4 addresses 192.168.10.104 and 192.168.20.104 will be added to the cluster.

On all active nodes in the cluster execute

# ssh-keygen -R gpfsnode04
# ssh-keygen -R 192.168.10.104
# ssh-keygen -R hananode04
# ssh-keygen -R 192.168.20.104
# ssh-keyscan gpfsnode04,192.168.10.104 >> /root/.ssh/known_hosts
# ssh-keyscan hananode04,192.168.20.104 >> /root/.ssh/known_hosts

If mmdsh is configured and working, you only need to execute the commands once on one node.

# mmdsh 'ssh-keygen -R gpfsnode04'
# mmdsh 'ssh-keygen -R 192.168.10.104'
# mmdsh 'ssh-keygen -R hananode04'
# mmdsh 'ssh-keygen -R 192.168.20.104'
# mmdsh 'ssh-keyscan gpfsnode04,192.168.10.104 >> /root/.ssh/known_hosts'
# mmdsh 'ssh-keyscan hananode04,192.168.20.104 >> /root/.ssh/known_hosts'

On one active node, copy the authorized_keys, id_rsa and id_rsa.pub files to the new node:

# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa /root/.ssh/id_rsa.pub root@gpfsnode04:/root/.ssh/

You will be asked once for the root password. Test if passwordless login works:

# ssh root@gpfsnode04 date

If it works without a password question, passwordless root login within the cluster is possible. Make all cluster nodes known to the new node:

# ssh gpfsnode04 "cat >> /root/.ssh/known_hosts" < /root/.ssh/known_hosts

4. (on any active node) Add GPFS cluster node

Run

# mmaddnode -N <node>

5. (on any active node) Add license

The new cluster node needs either a GPFS server license or a GPFS FPO license. The limited FPO license is only available in GPFS 3.5.0-7 and later and is only valid for non-quorum non-configuration servers.

This means that, if you want to add a new quorum node or a configuration node, you have to set the server license, otherwise you have to set the FPO license.


To mark the server as possessing a server license execute

# mmchlicense server -N <node>

For an FPO license execute

# mmchlicense fpo -N <node>

and confirm that you are in possession of a valid GPFS license.

6. (on any active node) Set configuration server

Check if a primary and a secondary configuration server exist in the GPFS cluster. Execute

# mmlscluster | grep "[Primary|Secondary] server"

If one of the configuration servers is not set up, it is likely that you want to designate the new server as that configuration server.

• Primary configuration server:

# mmchcluster -p <node>

• Secondary configuration server:

# mmchcluster -s <node>

7. (on the new server) Start GPFS

# mmstartup

8. (on the new node) Create the Disk descriptor file

[HA] If the server is a new node then create the disk descriptor file /var/mmfs/config/disk.list.data.gpfsnodeXX. You can find the correct file content in Appendix B: GPFS Disk Descriptor Files on page 89. Replace the XX in the file name and in the file content with the node number.

Delete all other /var/mmfs/config/disk.list.data.gpfsnodeXX files on the new node if there are any.

[DR] Follow the instructions for HA, but replace the failure group number in the fifth column (10XX) with the correct topology vector as described in Appendix C: Topology Vectors (GPFS 3.5 failure groups) on page 102. A short explanation of these topology vectors can also be found there.

9. (on the new node) Now create the NSDs with the command

# mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnodeXX

The command may fail if the disks were previously used in a GPFS cluster and were not removed in a clean fashion. In this case add the parameter -v no to force using these disks. Notice that the descriptor file is modified by this command and that the modified file will be needed in the future. If you have to repeat this step because the command reports reuse of disks, also repeat step 7.

10. (on the new node) Add disks to file system

The freshly created NSDs need to be added to the shared sapmntdata file system. The easiest way is to use the modified disk description file from the last step:

# mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnodeXX

As in the previous step add the parameter "-v no" if the command fails because GPFS finds old configuration data on the drives.


11. (on the new node) Mount the file system

If the file system is not set to automount, you have to mount it yourself.

# mmmount sapmntdata

12. (on any active node) Recalculate Quotas

[DR] For DR-enabled clusters, currently only a quota for the HANA log area (hanalog, /sapmnt/log) is recommended. Ignore any quota for the HANA data area (hanadata, /sapmnt/data). For the calculation see Appendix D: Quotas on page 103.

Please see Appendix D.2: Pre-calculated Quotas on page 104 to find the correct values. Since appliance version 1.6, there is a helper script called saphana-quota-calculator.sh. In a standard single node or cluster install you can simply execute it and use the calculated values. When in doubt or in any other case, please read Appendix D: Quotas on page 103.

13. Set the quotas with the commands

# mmsetquota -j hanalog -h (logsize)G -s (logsize)G sapmntdata
# mmsetquota -j hanadata -h (datasize)G -s (datasize)G sapmntdata

4.3.3 SAP HANA

Depending on the current condition of the SAP HANA cluster and the desired node type you have to decide between a new standby node and a new worker node (see https://cookbook.experiencesaphana.com/bw/operating-bw-on-hana/performance/assuring-high-availability/scale-out-standby-server-configuration/).

• Standby node:

Choose standby node if you want to add a new standby node or replace a failed worker node. In the latter case a standby node became an active worker node during failover and will keep this role. The new node will be the new standby node to regain the HA status.

• Worker node:

Choose worker node if you want to expand the cluster by one additional active worker node. Before adding a new worker node you should always reassess your HA strategy and make sure you have at least one standby node in the cluster. In any case the new host will be a worker node with a new additional data volume.

4.3.3.1 Network configuration

-[1.4] This chapter is only valid for Appliance 1.5 and later. Skip this chapter.

[1.5]+ As a temporary fix for the problem described in SAP Note 1780950 (Connection problems due to host name resolution, https://service.sap.com/sap/support/notes/1780950) the IBM appliance uses so-called "host based routing" to allow installation on a public external network interface. This allows using an external address reachable from clients while the traffic is routed through the high-speed internal 10Gbps network dedicated for SAP HANA.

[DR] This solution is not valid for DR-enabled clusters. Skip this chapter.

Example:

A fourth node is to be added or replaced:

Actions:


#  External network, eth4     GPFS network, bond0          SAP HANA network, bond1
1  hanatest1.domain 10.1.2.1  gpfsnode01, 192.168.10.101   hananode01, 192.168.20.101
2  hanatest2.domain 10.1.2.2  gpfsnode02, 192.168.10.102   hananode02, 192.168.20.102
3  hanatest3.domain 10.1.2.3  gpfsnode03, 192.168.10.103   hananode03, 192.168.20.103

Table 2: Three node example configuration

#  External network, eth4     GPFS network, bond0          SAP HANA network, bond1
4  hanatest4.domain 10.1.2.4  gpfsnode04, 192.168.10.104   hananode04, 192.168.20.104

Table 3: Additional fourth node example

1. [HA] (on the new server) Add routes to config

On the new node, the static routes configuration file /etc/sysconfig/network/routes needs to be adjusted. Given that the server is connected to the internal SAP HANA network via bond1, which is the default, you must add a line

<node external ip> 0.0.0.0 255.255.255.255 bond1

for each other node in the GPFS cluster, where <node external ip> is an assigned IP address of the other node. Using the example above the following lines need to be added to the file:

10.1.2.1 0.0.0.0 255.255.255.255 bond1
10.1.2.2 0.0.0.0 255.255.255.255 bond1
10.1.2.3 0.0.0.0 255.255.255.255 bond1

After changing the file activate the routes with the command

# /etc/sysconfig/network/scripts/ifup-route bond1

2. [HA] (on all other servers) Add host route to config

If the new server is a replacement you can skip this step. Nevertheless you should verify the settings.

On all nodes add an entry

[IP address] 0.0.0.0 255.255.255.255 bond1

to /etc/sysconfig/network/routes if this entry is missing. (Replace "[IP address]" by the actual IP address.) Using the example above the entry to add is:

10.1.2.4 0.0.0.0 255.255.255.255 bond1

Activate the route with the command

# /etc/sysconfig/network/scripts/ifup-route bond1

on each server node.


4.3.3.2 Installing SAP HANA on the new node

1. If not already installed, install the SAP hostagent

# cd /var/tmp/install/saphana/DATA_UNITS/SAP_HOST_AGENT_LINUX_X64
# rpm -ihv saphostagent.rpm

As recommended by the RPM installation, a password for sapadm may be set.

2. Deactivate automatic startup through sapinit at startup.

Running SAP's startup script during system boot must be deactivated as it will be executed by a GPFS startup script after cluster start. Execute:

# chkconfig sapinit off

3. Copy the file /var/mmfs/etc/mmfsup from another node to /var/mmfs/etc.

# scp -p <hananode>:/var/mmfs/etc/mmfsup /var/mmfs/etc

4. Install the HDB client software

# cd /var/tmp/install/saphana/DATA_UNITS/HDB_CLIENT_LINUX_X86_64
# ./hdbinst --batch --hostname=<hananodeXX> --sapmnt=/sapmnt/hdbclient

5. Add server to the cluster

Make sure the cluster is running as the new node needs to communicate with the other nodes.

Navigate to directory /sapmnt/<SID>/global/hdb/install/bin/ and run the command

# ./hdbaddhost -H <hananodeXX>

-[1.4] The parameter <hananodeXX> is the name of the node within the internal SAP HANA network. If you omit this parameter, SAP HANA might use the external network for internal communication, which is not supported.

[1.5]+ Since appliance version 1.5 SAP HANA will be installed on an externally reachable host name/IP address with internal traffic being routed through the internal SAP HANA network. <hananodeXX> is the chosen external host name or IP address. Please see Section 4.3.3.1: Network configuration on page 34.

Answer the questions shown by hdbaddhost and make sure you choose the right role (see 4.3.3). SAP HANA will be started automatically by hdbaddhost.

6. (on any node) Check and/or modify the "Autostart" setting in all profiles for the newly installed HANA system

# cd /sapmnt/<SID>/profile
# sed -i 's/^Autostart = 0/Autostart = 1/g' *_HDB*


4.4 Removing a node

Note: This section is for servers that are still working. If you want to remove a node that is no longer accessible, refer to section 4.1: After server node failure actions on page 26.

Removing an active node from a SAP HANA cluster is a two-step process. First you have to shrink the HANA cluster by removing the node, then you can remove the server from the GPFS cluster. When done in this order it is possible to remove multiple nodes at the same time while maintaining GPFS HA capability during the process.

4.4.1 Reorganizing HANA cluster

Please refer to SAP's official documentation in the SAP HANA Administration Guide (obtainable from http://help.sap.com/hana_appliance), section 11.7.2 "Redistributing Tables Before Removing a Host". After all tables have been moved away from the host, which is indicated by the removal status "Reorg finished" or in the Landscape/Configuration tab in HANA Studio, you can uninstall the node(s).

4.4.2 Removing HANA Software

It is recommended to remove only one host at a time, so first execute hdbremovehost and let it finish on one node before moving to the next.

1. Log in to the node as root and navigate to the HANA system directory on the shared file system:

# cd /sapmnt/<SID>/global/hdb/install/bin

2. Remove the host from the cluster

# ./hdbremovehost

4.4.3 Remove host(s) from GPFS cluster

After you have uninstalled HANA on all nodes to be removed from the cluster, you can cleanly remove these nodes from the GPFS cluster. You can remove multiple hosts at the same time.

1. Unmount GPFS file system

Unmount the shared GPFS file system on the nodes that are going to be removed. Either execute

# mmumount all

on each of these nodes or use the clustered version on one node

# mmumount all -N <node list>

and replace <node list> with a comma-separated list of the nodes to be removed (use the gpfsnodeXX host names).

2. Suspend disks

The disks of the affected nodes need to be suspended before you can safely remove them. Obtain the list of disks with

# mmlsdisk sapmntdata


and suspend these disks:

# mmchdisk sapmntdata suspend -d "<nsdlist>"

with <nsdlist> as a semicolon-separated list of the NSDs on these nodes. Use the quotes around the list or the shell will interpret the semicolons as command separators.

A properly formatted list of NSDs can be obtained with the following command:

# mmlsdisk sapmntdata | grep -E "node03|node04" | awk '{print $1}' | tr '\n' ';'

The grep string is a pipe (|) separated list of nodeXX names (omit the gpfs of gpfsnodeXX as it is not part of the NSD name). You can also feed the generated list directly into mmchdisk:

# mmlsdisk sapmntdata | grep -E "node03|node04" | awk '{print $1}' | tr '\n' ';' | xargs mmchdisk sapmntdata suspend -d

3. Restripe GPFS file system

After suspending the disks, restripe the file system

# mmrestripefs sapmntdata -r

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

4. Delete disks and NSDs

# mmdeldisk sapmntdata "<nsdlist>"

with <nsdlist> as a semicolon-separated list of the NSDs on these nodes. Use the quotes around the list or the shell will interpret the semicolons as command separators.

A properly formatted list of NSDs can be obtained with the following command:

# mmlsdisk sapmntdata | grep -E "node03|node04" | awk '{print $1}' | tr '\n' ';'

The grep string is a pipe (|) separated list of nodeXX names (omit the gpfs of gpfsnodeXX as it is not part of the NSD name).

Then delete the NSDs of these nodes; you can reuse the list from the last command:

# mmdelnsd "<nsdlist>"

5. Stop GPFS

Either execute

# mmshutdown

on every node to be removed or run

# mmshutdown -N <node list>

with <node list> being a comma separated list of the nodes to be removed.

6. Change license

Optionally you may move the license (type "FPO" or "Server") to other node(s):

# mmchlicense server -N <node list>
# mmchlicense fpo -N <node list>


Please take care of your actual licensing. Newer clusters are sold with server licenses for three nodes and the remaining nodes get the cheaper but restricted FPO licenses. If you split a cluster into two clusters you will need two times three server licenses, but the former cluster had only three such licenses, which it will keep using. You have to buy three additional licenses for the new cluster or the new cluster will be unlicensed.

7. Expel nodes from GPFS cluster

# mmdelnode -N "node list"

with "node list" a comma-separated list of the nodes to be removed.

8. Remove SSH keys

As a security measure you may want to wipe the SSH keys on the removed nodes

# rm ~/.ssh/*

When deleting the SSH keys, all password-less logins to the server will be removed.

9. Cleanup

You may want to clean up:

• The /etc/hosts file on the removed and remaining nodes

• The node list in /var/mmfs/config/nodes.list if existing on the cluster nodes

• Wipe the /var/mmfs directory on the removed nodes

• Delete the /var/mmfs/config/disk.list.data.gpfsnodeXX files for the removed nodes on the master node (which was used for the cluster installation)

4.5 Reinstalling a SAP HANA cluster

It may happen that you need to install the SAP HANA cluster again (e.g. after a failed SAP HANA upgrade).

Attention: Before reinstalling the SAP HANA cluster you have to back up the database, since you will delete ALL DATA of the SAP HANA instance during the reinstallation.

4.5.1 Delete SAP HANA

Before reinstalling SAP HANA with the same SID please ensure that the system which needs to be reinstalled is completely deleted. The best way to clean the systems is to uninstall each node. On each node go to the directory /sapmnt/data/<SID>/global/hdb/install/bin and run

# ./hdbremovehost

The uninstallation may fail for many reasons; therefore you can force the uninstallation with the parameter --force.
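For example, to force the removal on a node where the normal uninstallation fails:

# ./hdbremovehost --force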

If there are nodes registered in SAP HANA which are not available anymore, you can force the final deletion of SAP HANA by deleting /sapmnt/data/<SID>/profile/<SID>_* in the GPFS file system.

You should not delete these files before uninstalling SAP HANA on all servers still active, otherwise you will have to clean up the remaining systems manually.

Before continuing with the next step (Section 4.5.2: Installing first SAP HANA node on page 40) ensure that all directories /sapmnt/data/<SID>, /sapmnt/log/<SID>, /sapmnt/<SID>, /usr/sap/<SID>, the <SID>adm users and all <SID>-related entries in /usr/sap/sapservices have been deleted on all nodes. Restart all running SAP processes (SAP hostagent, saposcol etc.) as they may have the old SID in memory.

4.5.2 Installing first SAP HANA node

Go to the installation directory (/var/tmp/install/saphana/DATA_UNITS/HANA_IM_LINUX__X86_64/) and manually edit the installation template file setuphana.slmodel before starting the automated install.

The following parameters (with default values identified) need to be modified:

<StringParameter name="dataPath" value="${DATAPATH}"/>
<StringParameter name="logPath" value="${LOGPATH}"/>
<StringParameter name="sapmntPath" value="/hanamnt"/>
<StringParameter name="instanceNumber" value="${INSTANCENUMBER}"/>
<StringParameter name="sid" value="${SID}"/>
<StringParameter name="hdbHost" value="${HDBHOST}"/>

Copy the file setuphana.slmodel to setuphana.slmodel.ibm and update the values in the listing above according to the values in the following table.

Parameter        Value
dataPath         /sapmnt/data/<SID>
logPath          /sapmnt/log/<SID>
sapmntPath       /sapmnt
instanceNumber   <provided by customer>
sid              <provided by customer>
hdbHost          -[1.4] hananodenn
                 [1.5]+ <external hostname>, see 4.3.3.1

Table 4: SAP HANA Unified Installer Template Variables

1. Before the installation you have to create and change the permissions of the two directories dataPath and logPath or the installation will fail:

# mkdir /sapmnt/data/<SID>
# chmod 777 /sapmnt/data/<SID>
# mkdir /sapmnt/log/<SID>
# chmod 777 /sapmnt/log/<SID>

Make sure these directories are empty and do not contain data from prior installations.

2. Install SAP HANA

In general, we recommend creating a temporary installation subdirectory in the /sapmnt/data directory, and then executing the unified installer as follows:

# mktemp -d /sapmnt/data/install.nnnnnnnnnn
# cd /var/tmp/install/saphana/DATA_UNITS/HANA_IM_LINUX__X86_64/
# setup.sh /sapmnt/data/install.nnnnnnnnnn setuphana.slmodel.ibm

3. Deactivate automatic startup through sapinit at startup

This will be done by a GPFS startup script at cluster start.

# chkconfig sapinit off


4. Create the file /var/mmfs/etc/mmfsup. This file will be called by GPFS whenever the GPFS cluster is completely up and running with the GPFS file systems mounted.

# cat <<CREATE_MMFSUP >/var/mmfs/etc/mmfsup
#!/bin/ksh
# we assume that the GPFS cluster is up and running, but that does not mean that the file systems are already mounted
GPFS_PATH=/usr/lpp/mmfs/bin
MAX_RETRIES=5
TIMEOUT=30
NEED2WAIT=0

while [[ \$MAX_RETRIES > 0 ]]
do
  while read line
  do
    GPFS_DEVICE=\$(echo \$line|awk '{ print \$1 }')
    if \$GPFS_PATH/mmlsmount \$GPFS_DEVICE|grep "is not mounted" >/dev/null 2>&1
    then
      logger -t HANA "file system \$GPFS_DEVICE not yet mounted. Waiting \$TIMEOUT seconds."
      NEED2WAIT=1
    fi
  done < <(grep gpfs /etc/fstab)

  if [[ \$NEED2WAIT == 1 ]]
  then
    NEED2WAIT=0
    sleep \$TIMEOUT
  else
    break
  fi
  ((MAX_RETRIES--))
done

if [[ \$MAX_RETRIES != 0 ]]
then
  service sapinit start
else
  logger -t HANA "GPFS file system(s) missing. Could not start HANA"
fi

return 0
CREATE_MMFSUP

5. Change the permissions on that file.

# chmod 754 /var/mmfs/etc/mmfsup

6. Check and/or modify the "Autostart" setting in the profile

# cd /sapmnt/<SID>/profile
# sed -i 's/^Autostart = 0/Autostart = 1/g' <SID>_HDB<InstNr>_hananode<XX>


4.5.3 Install other SAP HANA nodes

For each other node in the cluster, repeat these steps:

1. If not already installed, install the SAP hostagent

# cd /var/tmp/install/saphana/DATA_UNITS/SAP_HOST_AGENT_LINUX_X64
# rpm -ihv saphostagent.rpm

As recommended by the RPM installation, a password for sapadm may be set.

2. Deactivate automatic startup through sapinit at startup.

This will be done by a GPFS startup script at cluster start.

# chkconfig sapinit off

3. Copy the file /var/mmfs/etc/mmfsup from another node to /var/mmfs/etc.

# scp -p <hananode>:/var/mmfs/etc/mmfsup /var/mmfs/etc

4. Install the HDB client software

# cd /var/tmp/install/saphana/DATA_UNITS/HDB_CLIENT_LINUX_X86_64
# ./hdbinst --batch --hostname=<hananodeXX> --sapmnt=/sapmnt/hdbclient

5. Add server to the cluster

Make sure the cluster is running as the new node needs to communicate with the other nodes.

Navigate to directory /sapmnt/<SID>/global/hdb/install/bin/ and run the command

# ./hdbaddhost -H <hananodeXX>

The parameter <hananodeXX> is the name of the node within the internal SAP HANA network. If you omit this parameter, SAP HANA might use the external network for internal communication, which is not supported. See 4.3.3.1: Network configuration on page 34 for an explanation.

Answer the questions and make sure you choose the right role (see above). SAP HANA will be started automatically during the installation.

6. Check and/or modify the "Autostart" setting in the profile

# cd /sapmnt/<SID>/profile
# sed -i 's/^Autostart = 0/Autostart = 1/g' \
    <SID>_HDB<InstNr>_hananode<XX>


5 DR Cluster Operations

5.1 Notice

This section applies only to multi-site Disaster-Resilience (DR) clusters. For maintenance of normal HA-enabled clusters please refer to Section 4: Cluster Operations on page 26.

In the DR solution a new GPFS 3.5 feature called "Topology Vectors" is used; please see Appendix C: Topology Vectors (GPFS 3.5 failure groups) on page 102 for an explanation. In a DR solution, please use these topology vectors instead of the failure groups even when following instructions in the general sections of this guide.

5.2 Terminology

The terms site A, primary site, and active site are used interchangeably in this document. Similarly, backup site, site B, and passive site all mean the same as well. When developing a solution for the customer, you will start with two sites, and one of these sites will run a productive HANA system. This site is called site A, primary site or active site. The other site will then of course be the backup site, standby site or simply site B.

After a failover the naming of these two sites may be swapped, depending on whether the customer wants to switch back as soon as possible or keep using the former backup site as the primary site.

Site C will always be called site C or the quorum/tiebreaker site.

SAP also uses the terms Disaster Recovery (DR) and Disaster Tolerant (DT) interchangeably.

5.3 Common operations deviating from the SAP HANA Operations Guide

5.3.1 System shutdown

There are no known deviations from the usual shutdown procedure in a non-DR clustered environment.

5.3.2 System startup

In a DR scenario automated procedures are not desired in most cases, because the decision whether a disaster is currently happening or the situation is normal should be made by the operational staff. Hence, in this setup a couple of automatically starting services have been intentionally disabled.

The GPFS daemon itself is still configured to start automatically upon system boot. This means a server tries to rejoin the GPFS cluster after it is rebooted/powered-on.

The result of the ’join cluster’ process can be checked with

# mmgetstate -aLs

It is still necessary to check that all NSDs of a node are operational.

Missing disks can be identified with

# mmlsdisk sapmntdata -e

They are restarted with

# mmchdisk sapmntdata start -d "disk1;disk2;...;diskN"


or

# mmchdisk sapmntdata start -a

which includes all disks (no influence on already started disks).

The manual steps after a restart of a cluster node are

1. Mount the GPFS file system

# mmmount sapmntdata

Check with

# mmlsmount sapmntdata -L

that the file system is mounted on all nodes.

2. Start the SAP Hostagent on the currently active site

# /etc/init.d/sapinit start

Check with

# /etc/init.d/sapinit status

that the service is running

3. Start SAP HANA

# su - <SID>adm -c "HDB start"

Check with the HDB Admintool or SAP HANA Studio that all necessary processes are running.
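In addition to the graphical tools, a quick check is possible from the command line with sapcontrol (a sketch; replace <Instancenumber> with the instance number of your database, e.g. 00):

# /usr/sap/hostctrl/exe/sapcontrol -nr <Instancenumber> -function GetProcessList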

5.4 Planned failover/failback procedures

There might be situations when a planned failover to the backup site is required, for example in case of a planned power outage on the primary site. Chapters 5.5: Site failover procedures with tiebreaker node on page 47 and 5.6: Site failover procedures without tiebreaker node or loss of site A & tiebreaker node on page 51 deal with procedures in case of a disaster, so in this chapter we describe the steps to fail over in a controlled and planned manner.

5.4.1 Failover to secondary site

1. Make sure that the state of the cluster is healthy, that means all nodes active and all disks up

1 # mmgetstate -aLs

All nodes should be "active"

1 # mmlsdisk sapmntdata -e

This should come up with "All disks up and ready"

The following command will list the current state of the SAP HANA database. Replace "<Instancenumber>" with the corresponding number of your database, e.g. 01


1 # /usr/sap/hostctrl/exe/sapcontrol -nr <Instancenumber> -function GetSystemInstanceList

2. Stop all SAP HANA instances on both sites (productive and development/testing if applicable). To stop the SAP HANA database on all nodes of an instance, the command

1 # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function StopSystem HDB

can be used.

3. Stop the SAP hostagent on all nodes on both sites

This can be done with

1 # /etc/init.d/sapinit stop

on every node in the cluster. If there is a file with all nodes available, mmdsh can be used to issue the command in parallel.

1 # mmdsh -F /var/mmfs/config/nodes.list /etc/init.d/sapinit stop
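A minimal sketch of how such a nodes file could be created and verified; the host names gpfsnode01 to gpfsnode03 are placeholders and must be replaced by the GPFS node names of your cluster:

# create the nodes list, one GPFS node name per line (placeholder names)
cat > /var/mmfs/config/nodes.list <<'EOF'
gpfsnode01
gpfsnode02
gpfsnode03
EOF
# quick connectivity check: run a harmless command on all listed nodes
mmdsh -F /var/mmfs/config/nodes.list date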

4. If your environment includes a tiebreaker node in an independent site, stop GPFS (via mmshutdown) on the site which is scheduled to go down.

If your environment does not include a dedicated tiebreaker node, make sure that the majority of the quorum nodes will be on the secondary site.

Verify with

1 # mmlscluster

If the majority of quorum nodes is currently on your primary site, this must be changed. Example: if there are 3 quorum nodes on the primary site and two on the backup site, we suggest that you add two more quorum nodes on the backup site. This eases the process. Note that it is necessary to change the GPFS license type on these nodes as well; the license for the additional quorum nodes must be "server" instead of "fpo".

Adjust with

1 # mmchlicense server --accept -N <nodes>
2 # mmchnode --quorum -N <nodes>

As long as GPFS is active, you need to change the quorum nodes one at a time; it is not possible to use a list of nodes.
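Because the quorum nodes have to be changed one at a time, a simple loop can be used. This is only a sketch; gpfsnode10 and gpfsnode11 are placeholder names for the additional quorum nodes on the backup site:

# change the license and quorum role node by node (placeholder node names)
for node in gpfsnode10 gpfsnode11; do
    mmchlicense server --accept -N "$node"
    mmchnode --quorum -N "$node"
done
# verify the new quorum distribution
mmlscluster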

5. Unmount the file system

1 # mmumount sapmntdata -a

6. Move the primary configuration server to the backup site

1 # mmchcluster -p <Server on backup site>

7. Exclude all GPFS NSDs on the primary site. This is necessary to make sure that the file system descriptor quorum will be satisfied when the primary site is shut down. The following example assumes that a list of all disks exists in the file disks.prod. This file should be created in advance (a sketch for creating it follows below). Alternatively all disks can be specified on the command line.

1 # mmfsctl sapmntdata exclude -F /var/mmfs/config/disks.prod
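A possible way to prepare the disks.prod file in advance is sketched below. It assumes that the NSDs of the primary site can be recognized by their names (here the placeholder pattern node0[1-4]); adapt the filter to your actual naming scheme and review the result before using it:

# collect the NSD names of the primary-site disks (hypothetical name pattern)
mmlsnsd | awk '/node0[1-4]/ {print $2}' | sort -u > /var/mmfs/config/disks.prod
# review the list before using it with mmfsctl
cat /var/mmfs/config/disks.prod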


8. Shut down GPFS on the primary site. The following example assumes that there is a file "nodes.prod" which contains all GPFS nodes of the primary site.

1 # mmshutdown -N /var/mmfs/config/nodes.prod

9. Mount the GPFS file system on the backup site. The following example assumes that there is a file "nodes.dr" which contains all GPFS nodes of the backup site.

1 # mmmount sapmntdata -N /var/mmfs/config/nodes.dr

The primary site is now ready to be taken offline

10. Start SAP hostagent on all nodes on the backup site. This can be done e.g. with mmdsh and a nodes file

1 # mmdsh -F /var/mmfs/config/nodes.dr "/etc/init.d/sapinit start"

11. Start SAP HANA on backup site

1 # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function StartSystem HDB

Attention: It is strongly recommended not to make any configuration changes to the GPFS cluster as long as the primary site is not available!

5.4.2 Failback to primary site

1. Stop all running SAP HANA instances on the active site.

2. Stop the SAP hostagent on all nodes of the active site

3. Start all servers and GPFS daemons on the site which was taken offline

1 # mmstartup -N <nodeX>,<nodeY>,...

4. Verify that all GPFS nodes are up and active again

1 # mmgetstate -a

5. Synchronize the GPFS cluster configuration to make sure that all nodes have the latest configuration

1 # mmchcluster -p LATEST

6. Move the primary configuration server back to the primary site

1 # mmchcluster -p <node on primary site>

7. Unmount the GPFS file system

1 # mmumount sapmntdata -a

8. Re-integrate the excluded NSDs in the filesystem

1 # mmfsctl sapmntdata include -F /var/mmfs/config/disks.prod

9. Mount the GPFS file system on the primary site and on the primary site only!

1 # mmmount sapmntdata -N <nodeX>,<nodeY>,..


10. Remove the additional quorum nodes and change the license back to fpo

1 # mmchnode --nonquorum -N <nodeZ>
2 <repeat for every added quorum node>
3 # mmchlicense fpo --accept -N <nodeX>,<nodeY>

Warning: Make sure that you don't lose the node quorum! Always check with "mmlscluster" or "mmgetstate -aLs" what the current quorum distribution is.

11. Start SAP hostagent on all nodes of the primary site

12. Start SAP HANA

13. Check that everything is up and running again. The same commands as listed in 5.4.1: Failover to secondary site on page 44 can be used.

5.5 Site failover procedures with tiebreaker node

This procedure should be used in case the active site fails completely.

The goal is to restart the productive SAP HANA instance in the disaster site B. Any non-productive instances running on the disaster environment must be stopped now.

The HA capabilities of the new active site will be restored where necessary.

5.5.1 Stop non-productive systems

If non-productive HANA systems are running on the current backup site, these systems must be stopped before the productive HANA can be switched over to this site. It is recommended to unmount the additional expansion box file system

1 # mmumount sapmntext -a

5.5.2 Check current cluster configuration

Use "mmlscluster" to determine the configured primary configuration server and which nodes are quorumnodes. This information is needed in the next steps.

5.5.3 Change Configuration servers

Move any missing configuration server (primary or secondary) to the surviving site, e.g.

1 # mmchcluster -p gpfsnode07

to move the primary server to gpfsnode07. Choosing the first node in the other failure group on the same site is recommended.

5.5.4 Relax Cluster Node Quorum

Set any missing node with the quorum attribute to non-quorum, e.g.

1 # mmchnode --nonquorum -N gpfsnode01,gpfsnode02


5.5.5 Delete disks of failed sites

Delete all failed disks from the global file system. Get the list of failed disks and save this list for later use:

1 # mmlsdisk sapmntdata -e | tee /var/mmfs/config/removed_disks.list

This list contains the failure groups for each NSD, which is needed for the failback procedure. Verify that only disks from the failing sites are listed. To get a list ready for mmdeldisk use

1 # mmlsdisk sapmntdata -e |grep down | awk '{print $1}' | tr '\n' ';'

Delete the disks from the file system
1 # mmdeldisk sapmntdata 'disk1;disk2;..;diskn' -a -c

Ignore any "no space left on device messages" and any messages complaining about the number of failuregroups going down to 2. This is a GPFS code bug displaying wrong information only and will be fixedin a later release.

5.5.6 Restoring HA capabilities on failover Site

Though the system is now running on the disaster site, the HA capabilities of the solution are still available and should be re-established.

For HA we need two replicas. Execute
1 # mmchfs sapmntdata -r 2 -m 2

to set the new replication factor 2 and then run
1 # mmrestripefs sapmntdata -R

This will set the new replication level for all existing files and restore any missing replicas for this level. The execution may take a while. After this finishes, standard HA failure resilience is achieved.

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

5.5.7 Mount global shared file system

The GPFS global shared file system is not mounted on the passive failover site. (Stop and unmount any mounted file system used by non-productive SAP HANA systems.) Mount the file system on every node either by issuing

1 # mmmount sapmntdata

on every node or
1 # mmmount sapmntdata -N gpfsnode0x,gpfsnode0y,..,gpfsnode0z

on one node.

Note: mmmount sapmntdata -a should not be used here as it will try to mount the GPFS file system on the failed nodes.


5.5.8 Start SAP HANA on failover site

Following the above procedure SAP HANA should be able to start now. Start the SAP hostagent and SAP HANA on the new active site.

5.5.9 (Optional) Delete nodes from failed site

If the failing site is not recoverable, execute the following commands to clean up the cluster.

Before you can delete the nodes, you have to delete their NSDs:

1 # mmlsnsd

To get a list usable in the mmdelnsd command, use

1 # mmlsnsd |grep '(free disk)' | awk '{print $3}' | tr '\n' ';'
2 # mmdelnsd 'nsd1;nsd2;...;nsdn'

Then you can delete the failed nodes, e.g.

1 # mmdelnode -N <node1>,<node2>,..

5.5.10 Restripe file system

Repair any broken replication

1 # mmrestripefs sapmntdata -r

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

5.5.11 Failback with tiebreaker node

Current status: Site B is up and running, file system is mounted. Site A is operational, but the servers are currently powered off.

A failback to the former production site requires a downtime of the running instance.

1. Update configuration

1 # mmchcluster -p LATEST

This ensures that all active nodes have the latest configuration.

2. Migrate the primary configuration server back to site A.

This can be done even though the servers on site A are still powered off.

1 # mmchcluster -p gpfsnode01

3. Restore node quorum

1 # mmchnode --quorum -N "former Quorum nodes in site A and C"

4. Start site A again by powering on the servers, or start GPFS if the servers are up but GPFS is down


1 # mmstartup -N <recovered node list>

Result: GPFS cluster is up and running again, file system not yet ready.

5. Stop non-productive systems

If non-productive HANA systems are running on the current backup site, these systems must be stopped before the productive HANA can be switched over to this site. It is recommended to unmount the additional expansion box file system

1 # mmumount sapmntext -a

6. As the failed site may come up with the old configuration, the configuration should now be synchronized throughout the cluster.

1 # mmchcluster -p LATEST

Verify on the formerly failed nodes, e.g. with mmlscluster, that the "failover" configuration is shown, with the configuration server on the backup site and so on.

7. Add disks back to file system

1 # mmlsnsd | grep '(free disk)'

will list all the NSDs which do not currently belong to the file system. The command requires the failure group, so a simple "mmadddisk sapmntdata "diskX" -v no" is not sufficient. Instead it should look like this:

1 # mmadddisk sapmntdata "data01node1:::dataAndMetadata:1,0,1::system" -v no

The failure group needs to be modified so that it matches the original information.

The disk configuration which was used during installation can be found in the directory /var/mmfs/config. Check for files with the name "disk.list.data.gpfsnodeXY" and use the information found inside (see the sketch below).
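As a sketch, the original failure group (topology vector) of a disk can be looked up in the installation-time descriptor files and then reused for the mmadddisk call; the file and disk names below are examples only:

# look up the original descriptor line for the disk to be re-added (example names)
grep data01node1 /var/mmfs/config/disk.list.data.gpfsnode01
# re-add the disk with exactly the failure group (topology vector) found there, e.g.:
mmadddisk sapmntdata "data01node1:::dataAndMetadata:1,0,1::system" -v no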

8. Start the missing disks

1 # mmchdisk sapmntdata start -a

9. Restore the replication level to 3

1 # mmchfs sapmntdata -r 3 -m 3
2 # mmrestripefs sapmntdata -R

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

Execute the following steps if you want to move SAP HANA back to the recovered site.

10. Stop SAP HANA

Stop SAP HANA and SAP Host Agent on backup site and unmount file system by calling

1 # mmumount sapmntdata

on each node on the backup site.

11. Mount file system on recovered site

Execute


1 # mmmount sapmntdata

on each node on the recovered site.

12. Start SAP Host Agent and SAP HANA on failback site.

5.6 Site failover procedures without tiebreaker node or loss of site A & tiebreaker node

If GPFS loses its quorum, the file system and the GPFS cluster will become unavailable.

All processes relying on the GPFS file system will fail. Manual intervention is necessary to restore operation. No data will be lost.

In this example it is assumed that site A was the active site in which a disaster occurred and site B is the surviving backup.

5.6.1 Bring the file system on surviving site online

1. Stop non-productive systems

If non-productive HANA systems are running on the current backup site, these systems must be stopped before the productive HANA can be switched over to this site. It is recommended to unmount the additional expansion box file system

1 # mmumount sapmntext -a

2. Shutdown GPFS on nodes on surviving site

This is necessary; otherwise the "arbitrating" status of the nodes cannot be resolved.
1 # mmshutdown -N "gpfsnode05;...."

3. Set the configuration servers to the surviving site (e.g. in another failure group)

Run
1 # mmlscluster

to check which nodes are configuration servers. If you need to move the primary server use
1 # mmchcluster -p <newnode>

and for the secondary configuration server use
1 # mmchcluster -s <newnode>

Now both configuration servers should reside on the surviving site. This can be checked with the command mmlscluster.

4. Relax the node quorum
1 # mmchnode --nonquorum -N <siteA>[,<siteC>]

There should be no more quorum nodes on site A (and site C when a tiebreaker node is used). This can be checked with mmlscluster.

5. Relax the file system descriptor quorum. Exclude all disks from the failing nodes. There should be a list with all disks of the primary site in /var/mmfs/config/disks.prod. If this is not the case, the list of disks can be obtained with "mmlsnsd".


1 # mmfsctl sapmntdata exclude -d "<disks>"
2 - or -
3 # mmfsctl sapmntdata exclude -F /var/mmfs/config/disks.prod

<disks> is a list of NSDs separated by a ";". Exclude only lost disks.

6. Start GPFS on surviving site

1 # mmstartup -N <nodes of siteB>

Use mmgetstate -aLs to check that the nodes on site B are active again. Because the nodes on site A are down, the command may take some time to complete.

5.6.2 Delete disks of failed sites

In case the loss of the primary site is permanent or it will take a longer time until it is up again, it is recommended to delete the disks of the failing site. In case there is no physical damage on the primary site and it will be up again in the near future, it is sufficient to just exclude them as done in the step before. Get the list of failed disks and save this list for later use:

1 # mmlsdisk sapmntdata -e | tee /var/mmfs/config/removed_disks.list

This list contains the failure groups for each NSD, which is needed for the failback procedure. Verify that only disks from the failing sites are listed. To get a list ready for mmdeldisk use:

1 # mmlsdisk sapmntdata -e |grep down | awk '{print $1}' | tr '\n' ';'

Delete the disks from the file system:

1 # mmdeldisk sapmntdata 'disk1;disk2;..;diskn' -a -c

Ignore any "no space left on device messages" and any messages complaining about the number of failuregroups going down to 2. This is a GPFS code bug displaying wrong information only and will be fixedin a later release.

5.6.3 Restoring HA capabilities on failover Site

Though the system is now running on the disaster site, the HA capabilities of the solution are still available and should be re-established.

For HA we need two replicas. Execute

1 # mmchfs sapmntdata -r 2 -m 2

to set the new replication factor 2 and then run

1 # mmrestripefs sapmntdata -R

This will set the new replication level for all existing files and restore any missing replicas for this level. The execution may take a while. After this finishes, standard HA failure resilience is achieved.

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!


5.6.4 Mount the file system

1 # mmmount sapmntdata -a

The mounting of the file system might take more than 5 minutes to complete.

The process can be monitored to some extent by looking at the logfile /var/adm/ras/mmfs.log.latest.
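A simple way to follow the mount progress, sketched here, is to watch the GPFS log on one node and to check the mount state from a second terminal:

# follow the GPFS log while the file system is mounting
tail -f /var/adm/ras/mmfs.log.latest
# in a second terminal: on which nodes is sapmntdata already mounted?
mmlsmount sapmntdata -L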

5.6.5 Start SAP (Host Agent & SAP HANA)

As soon as the file system is mounted again, the SAP components can be started again:

1 # /etc/init.d/sapinit start
2 # /usr/sap/hostctrl/exe/sapcontrol -nr <Instance number> -function StartSystem HDB

5.6.6 Failback to site A

Follow the procedure as described in the "GPFS Advanced Administration Guide: Failback with temporary loss and configuration changes". The procedure only differs slightly if the optional tiebreaker node is used; if different handling is necessary, it will be noted.

Current status: Site B is up and running, file system is mounted.

1. Bring site A back online. If the servers were powered on, the following is done during system startup. In other cases it might be necessary to start the GPFS subsystem manually.

1 # mmstartup -N <recovered node list>

Result: GPFS cluster is up and running again, file system not yet ready.

2. Update configuration

It is best to make sure that all active nodes in the current cluster have a recent copy of the cluster configuration.

1 # mmchcluster -p LATEST

3. Migrate the primary configuration server back to site A

1 # mmchcluster -p gpfsnode01

Check with mmlscluster that the configuration servers are distributed over site A and B again.

4. Restore node quorum

In a setup with a tiebreaker node, there must be an equal number of quorum nodes on site A and B. Additionally the tiebreaker node in site C must be a quorum node, too.

In a non-tiebreaker node setup, the majority of the quorum nodes must reside on site A. Otherwise a failure of the backup site may affect the production site.

1 # mmchnode --quorum -N <designated quorum nodes in site A>

Check with mmgetstate -aLs that all nodes are active again, and that the node quorum is distributed as described.

5. Stop non-productive systems

If non-productive HANA systems are running on the current backup site, these systems must be stopped before the productive HANA can be switched over to this site. Stop them via your preferred way. It is recommended to unmount the additional expansion box file system:


1 # mmumount sapmntext -a

6. Add disks back to the file system. If disks were deleted as described earlier, they must be added back. If they were only excluded, this step can be skipped.

Execute

1 # mmlsnsd | grep '(free disk)'

to get a list of all NSDs not currently belonging to the GPFS sapmntdata file system. The command requires the failure group, so a simple "mmadddisk sapmntdata "diskX" -v no" is not sufficient. Instead it should look like this:

1 # mmadddisk sapmntdata "data01node1:::dataAndMetadata:1,0,1::system" -v no

The failure group needs to be modified so that it matches the original information.

The disk configuration which was used during installation can be found in the directory /var/mmfs/config. Check for files with the name "disk.list.data.gpfsnodeXY" and use the information found inside. You can also use this file for re-adding the disks:

1 # mmadddisk sapmntdata -F /var/mmfs/config/disk.list.fs.gpfsnodeXX -v no

You can also use the information from /var/mmfs/config/removed_disks.list created in 5.6.2: Delete disks of failed sites on page 52.

7. Restore file system descriptor quorum

All formerly excluded disks must be added back to the file system. "mmlsdisk sapmntdata -L" shows the excluded disks ("excl" in the remarks field). Include these disks again:

1 # mmfsctl sapmntdata include -d "<disks>"
2 - or if there is a list with all the excluded disks available, e.g.:
3 # mmfsctl sapmntdata include -F /var/mmfs/config/disks.prod

8. Check for missing disks

1 # mmlsdisk sapmntdata -e

If there are missing disks start them with:

1 # mmchdisk sapmntdata start -a

9. Restore the replication level to 3

1 # mmchfs sapmntdata -r 3 -m 3
2 # mmrestripefs sapmntdata -R

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

5.7 Node failure

Please see Chapter 4.1: After server node failure actions on page 26. When replacing the node use the correct topology information as described in Appendix C: Topology Vectors (GPFS 3.5 failure groups) on page 102.


5.8 Disk failure

Please see Chapter 6.3: Replacing failed hard drives on page 60. When replacing the disk use the correct topology information as described in Appendix C: Topology Vectors (GPFS 3.5 failure groups) on page 102.

5.9 Network Failure

5.9.1 Inter-Site link failure

In case both sites lose connection to each other, the primary site stays operational while on the secondary site all nodes will go into the state "arbitrating", making the shared file system unavailable on this site.

After the connectivity between the two sites has been restored, the following steps must be executed to update the backup site and to replicate data changes:

1. Verify that all nodes are up
1 # mmgetstate -a

Start any nodes that are down
1 # mmstartup -a

2. Update the GPFS cluster configuration
1 # mmchcluster -p LATEST

3. Start all disks
1 # mmchdisk sapmntdata start -a

4. Verify that all disks have started
1 # mmlsdisk sapmntdata -e

There should be no disks listed as missing.

5. Restripe the file system

A restripe is necessary to update any changed data on the backup site.
1 # mmrestripefs sapmntdata -r

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

6. Move the cluster manager to a node on the backup site
1 # mmchmgr -c <nodeXY>

5.9.2 Single switch failure

Failure of a single switch on a site is covered by the HA capability on each site. In case the inter-site link is not redundant or not connected redundantly to the GPFS-internal switches on both sites, an inter-site link failure may happen. Refer to the previous section.


5.10 Expansion Box (non-productive Systems on Backup Site) Operations

Attention: Running multiple systems or instances (Multi-SID/Multi-Instance) is not allowed. Improper configuration of the non-productive HANA instances can stop GPFS on the affected nodes, causing DR to fail and causing a shutdown of the running productive systems on the active site. Do not install multiple instances or systems on any primary nodes.

Running multiple systems on the expansion units on the backup site is possible, but only one instance may run on any node. Installation of more instances is allowed, but no more than one may be started on any node.

5.10.1 Description

In order to run additional non-productive HANA systems, the backup site nodes of a DR cluster can be extended with additional IBM M5025 SAS controllers and IBM EXP2524 Storage Expansion Units containing 9 or 17 HDDs each. These additional drives will be configured into an additional GPFS file system "sapmntext" which will be mounted on the secondary site. Additional HANA instances can be installed on this file system.

Note that this additional file system has only a simplified configuration, e.g. no quotas and no storage pools. There are no performance guarantees for this file system.

If you plan to be able to switch the active and backup sites at will, including switching the non-productive systems over to the new passive site, you need to prepare both sites beforehand by installing the M5025 SAS controller, expansion boxes and HDDs in each node on both sites.

Do:

• Mount the file system on a separate mount point (/sapmntext)

• Mount the file system on all nodes on the inactive site

• Treat the expansion unit file system like a regular HA file system

Do not:

• Mount the file system in /sapmnt or on the active site.

• Use disks/NSDs in the external storage expansion unit in the global DR file system (/sapmnt)

• Use internal disks or IBM High IOPS drive/NSDs in the external file system

• Apply the DR file system settings to the expansion file system and vice versa

• Use disks from both sites in the non-productive file system (inter-site spanning file system)

5.10.2 Site Failure Operation

During normal operations, your non-productive SAP HANA instances run on the DR site. If disaster strikes, your production SAP HANA instances will fail over to those DR-site nodes.

Before starting the HANA instances on the backup site, you have to shut down the non-productive HANA instances:

1. Shutdown all HANA instances on the backup site

1 # service sapinit stop


2. Unmount the expansion unit file system

1 # mmumount sapmntext -a

When the failed site is recovered or reinstalled, you can either move your productive SAP HANA instance back to this site or decide to migrate your non-productive instances to this site. The procedure for the latter is described in the next chapter 5.10.3: Site to Site Migration on page 57. In any case the non-productive SAP HANA instances will be unavailable until either productive or non-productive instances are moved to the other site.

5.10.3 Site to Site Migration

The migration of non-productive HANA instances between sites requires that both sites have been equipped with the expansion storage units. There are a few ways to migrate the data, but we recommend the following solution. It is convenient and does not require the sapmntext file system to be mounted.

Note that performing the migration this way will put load on the DR inter-site link, which impacts performance of the productive SAP HANA instance during the migration. So this should be done during maintenance windows or periods of low utilization.

A physical move of the expansion units or of the HDDs in these units between sites is not supported.

The general approach is to move data between the corresponding nodes on both sites by adding the expansion unit's disks of the target node to the file system and then removing the disks of the source node. This will force the data of the source node to be transferred to the target node. Do this for every node and the data should be stored in the same distribution and layout as it was on the source site.

The easiest way to determine each source and target node couple is to look on both sites for the nodes with the same hananodeXX host name and use the gpfsnodeXX names assigned to them.

1. For each source/target node couple in the DR cluster do

2. Create NSDs in the target node

Get the device names of the expansion unit's HDDs

1 # lsscsi |grep M5025

Create file /var/mmfs/disk.list.fs.ext.gpfsnodeXX and add a line similar to

1 /dev/sdc:gpfsnode01::dataAndMetadata:1001:ext01node01:system

for each device. Replace /dev/sdc with the device name and gpfsnode01 with the correct GPFS host name. 1001 is the failure group; use 10XX for site A and 20XX for site B and replace XX with the node number from the GPFS host name. In ext01node01 the first 01 is the number of each drive, starting with 1, and node01 is the GPFS node number again. A sketch of a complete descriptor file follows after this step.

The NSDs will be created with the command

1 # mmcrnsd -F /var/mmfs/disk.list.fs.ext.gpfsnodeXX

If the HDDs were used previously in a GPFS cluster, GPFS will refuse to reinitialize them; add -v no to the command to force GPFS to use them.
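As referenced above, a sketch of a complete descriptor file for a node on site A with two expansion unit devices could look like this; the device names, node names and drive numbers are examples only and must be adapted to your environment:

# identify the expansion unit devices first
lsscsi | grep M5025
# example content of /var/mmfs/disk.list.fs.ext.gpfsnode01 (site A, node 01, two drives)
cat > /var/mmfs/disk.list.fs.ext.gpfsnode01 <<'EOF'
/dev/sdc:gpfsnode01::dataAndMetadata:1001:ext01node01:system
/dev/sdd:gpfsnode01::dataAndMetadata:1001:ext02node01:system
EOF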

3. Add disks to expansion file system

After creating the NSDs add these to the expansion file system. Execute on the target node.


1 # mmadddisk sapmntext -F /var/mmfs/disk.list.fs.ext.gpfsnodeXX
2 The following disks of sapmntext will be formatted on node gpfsnode04:
3 ext01node02: size 292968750 KB
4 Extending Allocation Map
5 Checking Allocation Map for storage pool system
6 Completed adding disks to file system sapmntext.
7 mmadddisk: Propagating the cluster configuration data to all
8 affected nodes. This is an asynchronous process.

4. Delete disks from source node

Get the NSD names of used disks:

1 # mmlsnsd |grep sapmntext |grep gpfsnodeXX

The NSD name is in the middle column. Delete those disks:

1 # mmdeldisk sapmntext "disk1;disk2;...;diskn"

During the delete, the data is moved to the target node's disks. This can be a lengthy operation. Output may look like:

1 Deleting disks ...
2 Scanning file system metadata, phase 1 ...
3 Scan completed successfully.
4 Scanning file system metadata, phase 2 ...
5 Scan completed successfully.
6 Scanning file system metadata, phase 3 ...
7 Scan completed successfully.
8 Scanning file system metadata, phase 4 ...
9 Scan completed successfully.

10 Scanning user file metadata ...
11 8.22 % complete on Thu Feb 21 15:05:20 2013 ( 503808 inodes 1757 MB)
12 26.94 % complete on Thu Feb 21 15:05:42 2013 ( 503808 inodes 5757 MB)
13 40.97 % complete on Thu Feb 21 15:06:04 2013 ( 503808 inodes 8757 MB)
14 55.01 % complete on Thu Feb 21 15:06:25 2013 ( 503808 inodes 11757 MB)
15 69.05 % complete on Thu Feb 21 15:06:46 2013 ( 503808 inodes 14757 MB)
16 100.00 % complete on Thu Feb 21 15:06:50 2013
17 Scan completed successfully.
18 Checking Allocation Map for storage pool system
19 tsdeldisk completed.
20 mmdeldisk: Propagating the cluster configuration data to all
21 affected nodes. This is an asynchronous process.

While data gets migrated, the disk status is shown as 'being emptied'.
1 # mmlsdisk sapmntext -d ext01node06
2 disk         driver   sector  failure  holds     holds
3 name         type     size    group    metadata  data   status          availability  pool
4 ------------ -------- ------  -------  --------  -----  --------------  ------------  --------
5 ext01node06  nsd      512     6        Yes       Yes    being emptied   up            system
6 Attention: Due to an earlier configuration change the file system
7 may contain data that is at risk of being lost.

5. Repeat steps 2-4 for all other node couples.

Afterwards the NSDs are shown as free disks and may be deleted. The data has been moved. Start HANA:
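A sketch for cleaning up the now-free NSDs of the source nodes, reusing the commands shown in 5.5.9:

# list the free NSDs and build a ';'-separated list
mmlsnsd | grep '(free disk)' | awk '{print $3}' | tr '\n' ';'
# delete them (replace the placeholder list with the output above)
mmdelnsd 'nsd1;nsd2;...;nsdn'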

1. Mount file system

1 # mmmount sapmntext /sapmntext -N gpfsnode01,gpfsnode02...

2. Start HANA

On each node execute

1 # service sapinit start


6 Hard Drive Operations

6.1 MegaCli

MegaCLI is a publicly available command line tool to configure IBM ServeRAID controllers. It is necessary for removing failed drives and adding new drives before they can be added into the GPFS file system.

The MegaCLI tool can be obtained for free from http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5082327. The MegaCLI tool installs into the path /opt/MegaRAID/MegaCli.

You either need to add this path to the PATH variable or execute all commands from that directory. We assume in the following commands that you have installed the MegaCLI tool.
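For example, the path can be added for the current shell session and the tool checked with a harmless read-only command:

# make the MegaCLI binary available in this shell session
export PATH=$PATH:/opt/MegaRAID/MegaCli
# read-only sanity check that the tool is found and the first controller responds
MegaCli64 -CfgDsply -a0 | head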

Note: The following MegaCLI commands will work only on the first IBM ServeRAID controller in the system. To work on a different controller, please change the -a0 parameter to the particular controller number, e.g. -a1 for the second controller. Alternatively you can use -aAll to work on all controllers at the same time.

6.2 Collecting Information

6.2.1 Read out controller configuration

To read the complete configuration of the controller, execute the following command.

1 # /opt/MegaRAID/MegaCli/MegaCli64 -CfgDsply -a0

Output example:

1 ===================================================================
2 Adapter: 0
3 Product Name: ServeRAID M5015 SAS/SATA Controller
4 Memory: 512MB
5 BBU: Present
6 Serial No: SV12508886
7 ===================================================================
8 Number of DISK GROUPS: 7

10 DISK GROUPS: 0
11 Number of Spans: 1
....

A SAP HANA Scale-out configuration has 7 disk groups:

1 # /opt/MegaRAID/MegaCli # ./MegaCli64 -CfgDsply -a0 | grep "DISK GROUPS:"
2 Number of DISK GROUPS: 7
3 DISK GROUPS: 0
4 DISK GROUPS: 1
5 DISK GROUPS: 2
6 DISK GROUPS: 3
7 DISK GROUPS: 4
8 DISK GROUPS: 5
9 DISK GROUPS: 6

The first disk group consists of 2 hard drives which are configured as a RAID1. This RAID holds the operating system's root partition and the swap partition. The rest of the disk groups are single hard drive RAID0 configurations which are used by GPFS.


To display the configuration for a single disk group, select the requested disk group and list the following 50 lines:

1 # /opt/MegaRAID/MegaCli/MegaCli64 -CfgDsply -a0 | grep "DISK GROUPS: 6" -A2 50

6.3 Replacing failed hard drives

6.3.1 Remove failed disk from GPFS

This section only applies to disks which are configured as RAID-0 devices. Failing disks in a RAID-1, RAID-5 etc. configuration are not seen as single devices by the operating system, so there is no need to remove them from GPFS. In this case, continue with section Check physical drive status.

1. As soon as a failing hard drive is detected, it should be stopped to avoid any further attempts to access this disk.

1 # mmchdisk sapmntdata stop -d "<hdiskXY>"

2. Delete the disk from the file system

1 # mmdeldisk sapmntdata "<hdiskXY>"

3. Delete the NSD

1 # mmdelnsd "<hdiskXY>"

6.3.2 Check physical drive status

The first step when replacing a hard drive is to find out which drive it is from a controller perspective.

1 # /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll | grep "Firmware state" -B 15

The output lists details for every physical disk, e.g. disk 8 (the slot-number uses zero-based indexing):

1 Enclosure Device ID: 252
2 Slot Number: 7
3 Drive's position: DiskGroup: 6, Span: 0, Arm: 0
4 Enclosure position: 0
5 Device Id: 15
6 WWN: 50000393680AC79D
7 Sequence Number: 2
8 Media Error Count: 0
9 Other Error Count: 0

10 Predictive Failure Count: 0
11 Last Predictive Failure Event Seq Number: 0
12 PD Type: SAS
13 Raw Size: 558.911 GB [0x45dd2fb0 Sectors]
14 Non Coerced Size: 558.411 GB [0x45cd2fb0 Sectors]
15 Coerced Size: 557.861 GB [0x45bb9000 Sectors]
16 Firmware state: Online, Spun Up

The last line shows the status of the disks. In case a disk has failed it will show:

1 Firmware state: Unconfigured(bad)


The first line shows the enclosure device ID on the adapter; the default is 252. The second line indicates the disk number. Numbering starts with 0, so disk 7 is the last disk (topmost disk in the server).
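A compact way to spot the failed drive, sketched below, is to list only the slot numbers and firmware states:

# print slot number and firmware state for every physical drive
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll | grep -E "Slot Number|Firmware state"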

6.3.3 Determine next steps

After identifying the failed drive it can be physically removed from the server. When adding a new drive, the following steps depend on its status. The drive itself might have a configuration which needs to be cleared first. The controller cache might have a cache region locked for the failed drive. In this case the lock has to be broken. The necessary steps for these tasks are described now.

6.3.4 Remove old config

In case a defective hard drive is being replaced by an unused new hard drive, there is no existing configuration on the hard drive. No further steps are necessary to prepare the hard drive for being added to a Volume Group. In case a hard drive which was used in a different RAID before is used for replacement, old configuration data must be cleared from the hard drive before it can be added to a Volume Group. The first step is to change the firmware state to 'Unconfigured-Good':

1 # /opt/MegaRAID/MegaCli/MegaCli64 -PDMakeGood -PhysDrv[252:5] -a0
2 Adapter: 0: EnclId-252 SlotId-5 state changed to Unconfigured-Good.
3 Exit Code: 0x00

In the above command the physical drive is represented by the Enclosure device ID and the slot number: -PhysDrv[<Enclosure device ID>:<slot number>]. Use the slot number of the failed disk that had been determined before.

The second step is to clear any existing foreign configuration on the disks:

1 # /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Clear -a0
2 Foreign configuration 0 is cleared on controller 0.
3 Exit Code: 0x00

6.3.5 Clear controller cache

When hard drives fail, it can happen that the controller still holds a lock for the cache region that had been assigned to the failed hard drive. In order to add a new hard drive, it is necessary to release this cache region.

1 # /opt/MegaRAID/MegaCli/MegaCli64 -DiscardPreservedCache -L5 -a0
2 Adapter #0
3 Virtual Drive(Target ID 05): Preserved Cache Data Cleared.
4 Exit Code: 0x00

6.3.6 Add new disk and configure raid

After inserting a new hard drive, it needs to be configured.

1 # /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -R0[252:5] WB NORA Direct NoCachedBadBBU -a0
2 Adapter 0: Created VD 4
3 Adapter 0: Configured the Adapter!!
4 Exit Code: 0x00


In the above command the physical drive is represented by the Enclosure device ID and the slot number: -R0[<Enclosure device ID>:<slot number>]. R0 defines the RAID level. Use the slot number of the new disk.

6.3.7 Rescan SCSI bus to detect new drives

After writing the new RAID configuration, check if the hard drive has been discovered by the operating system:

1 # fdisk -l

In case there is no device for the new hard drive, the SCSI bus must be rescanned.
1 # rescan-scsi-bus.sh

In case your appliance lacks this command, use the following commands as a workaround:
1 # sync
2 # for hba in /sys/class/scsi_host/host?/scan
3 > do
4 > echo "- - -" > $hba
5 > done

Note: All appliances before appliance version 1.6.x lack this rescan-scsi-bus.sh script. You can either install the standard SLES package sg3_utils or use the given workaround.

6.3.8 Add disk to GPFS file system

Please continue with step 6.5: Add new disk to GPFS on page 67.

6.4 Replacing failed IBM High IOPS drives

This section does not apply to x3690 based models (T-Shirt sizes "XS", "S" & "SSD").

6.4.1 General Information on IBM High IOPS drives (Fusion-io)

The x3950 based models are using IBM High IOPS drive (Fusion-io ioDrive) add-in cards for improved performance. These PCI Express cards are based on solid-state flash memory and can be used like an ordinary hard disk after driver installation. In "S+" and "M" models there is one of these cards installed, while there are two of them in an "L"-sized server (one in the base M model and one in the Scalability Kit). Pre-Autumn 2012 models feature a so-called "duo card" which provides two drives with each half of the total capacity (e.g. 2x 320GB = 640GB total in an M model), while post-Autumn 2012 IBM High IOPS cards are so-called "mono cards" which have the whole capacity in one drive.

In Autumn 2012 the old duo cards went out of production, so replacement cards may be the new mono model. These cards may be used interchangeably and even mixed configurations in an L model are possible. When switching between card generations, special care has to be taken of the correct firmware and driver versions; this is even more important when running with mixed cards in one system.

IBM High IOPS drives can be identified in the OS by the device naming scheme /dev/fio*; the first drive is /dev/fioa. Depending on model, model generation and card generation you may have 1 to 4 drives; XXXL models have 6 cards and drives.
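A quick sketch to see which High IOPS drives and cards are present in a system:

# list the High IOPS block devices
ls -l /dev/fio?
# show the cards, their firmware levels and the attached drives
fio-status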


Please read the section about driver and card versions before replacing a card.

6.4.2 Failed High IOPS drive in a Single Node Server

In a single node installation, metadata is written non-redundantly onto the High IOPS drives for improved performance. As a consequence, a failed High IOPS drive is a disaster requiring recreation of the file system and a restore of the user's data from a backup. A complete reinstallation is recommended over manual recreation of the GPFS file system.

6.4.3 Removing the failed High IOPS drive

These steps can be done before or after the physical removal of the failed card.

Preferably these are done before the removal, especially if only one of the two drives provided by the PCIe card failed and the other one is still operational.

1. Find the drives

First find the failed drive(s). Use the command

1 # mmlsnsd -m

to get a list of all NSDs. The High IOPS drives are named /dev/fio[abcd]. If unsure which drives failed, check which NSDs are marked as down in the file system

1 # mmlsdisk sapmntdata -e

A current NSD to device mapping can be obtained with

1 # mmlsnsd -M

2. Check mono or duo card

Remember that on a duo card there are two drives on the card, so if you need to replace one of these cards, also remove the second NSD in the next step. Use

1 # fio-status

to get information about the card types and linked drives.

3. Suspend disks

1 # mmchdisk sapmntdata suspend -d "nsd1;nsd2"

and replace nsd1 and nsd2 with the failed NSD names. Omit the ;nsd2 if there is only one drive on the affected card.

4. Migrate Data off these disks

1 # mmrestripefs sapmntdata -r

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

5. Remove disks from the GPFS file system

1 # mmdeldisk sapmntdata "nsd1;nsd2" -m


Formatting with reserved capacity increases steady-state write performance and is therefore required for SAP HANA performance and stability reasons. For details see http://kb.fusionio.com/KB/a51/tuning-techniques-for-writes.aspx. Replace nsd1 and nsd2 with the failed NSD names. Omit the ;nsd2 if there is only one drive on the affected card.

6. Delete the NSDs

1 # mmdelnsd "nsd1;nsd2"

and replace nsd1 and nsd2 with the failed NSD names. Omit the ;nsd2 if there is only one drive onthe affected card.

7. The server can now be brought offline. Follow this sequence:

(a) stop SAP HANA, e.g. using the command service sapinit stop

(b) stop GPFS on this node with mmshutdown

(c) shutdown the system

6.4.4 Installing a replacement card

1. Check fio-status and identify drives

1 # fio-status

On a "mono card" you should have one drive, on a "duo card" two drives should be reported. Ifno drives are reported or messages regarding wrong driver or wrong firmware are shown, please seethe following sections about driver and/or firmware update.

2. Format the drive. Format the new drive(s):

1 # fio-detach /dev/fctX
2 # fio-format /dev/fctX -f -s <format size>
3 # fio-attach /dev/fctX

Replace /dev/fctX with the correct Fusion-io control device and remember that you have to perform these three steps also for the second drive if you installed a duo card. The format size is 87% for appliance versions prior to [1.6]. Starting with [1.6]+ the format size is 95%. Pass the percentage value with the percentage sign, e.g.:

1 # fio-format /dev/fct0 -f -s 95%

3. Create a disk descriptor file

Create a disk descriptor file for the new drives, e.g. /var/mmfs/config/disk.list.fs.fio, and add the respective lines for the new drive(s). You can either get these lines from an old disk descriptor file in /var/mmfs/config/ (please use the respective lines starting with the hash mark (#); remove the hash and the following space and delete all other lines), or look into Appendix B: GPFS Disk Descriptor Files on page 89 for the correct lines. If the lines differ, take the one from the existing disk descriptor file.

4. Create NSD. Generate the NSD:

1 # mmcrnsd -F /var/mmfs/config/disk.list.fs.fio

/var/mmfs/config/disk.list.fs.fio is the path of the disk descriptor file; replace it with the actual file you created if you chose a different file.

5. Add NSDs to GPFS file system


1 # mmadddisk sapmntdata -F /var/mmfs/config/disk.list.fs.fio

/var/mmfs/config/disk.list.fs.fio is the path of the disk descriptor file. Replace it with the actual file you created if you chose a different file.

6. Repair replication

1 # mmrestripefs sapmntdata -R

Afterwards all data is fully replicated again.

WarningCurrently the FPO feature used in the appliance is not compatible with file systemrebalancing. Do not use the -b parameter!

6.4.5 Driver & Firmware Upgrade

6.4.5.1 Multiple High IOPS adapters installed in a single system

When multiple High IOPS adapters are installed in the same server, all devices must operate with the same version of the device driver. High IOPS adapters require matching firmware, drivers and utilities. This is a very important consideration when adding a new second generation High IOPS adapter in a server where legacy adapters are deployed. When upgrading adapters operating with a previous generation of software, you must back up the data on the adapter before upgrading to prevent data loss.

After upgrading the ioMemory VSL (drivers) to the latest version, the legacy adapters will not logically attach to the system until the firmware is also updated. Detailed instructions for upgrading the software are provided in the external IBM High IOPS user guide [13], section Appendix F (Linux) - Upgrading Devices.

6.4.5.2 Upgrading the High IOPS software

Upgrading legacy adapter software to a newer version offers a number of significant changes and improvements; however, there are some important considerations. When performing an upgrade from 1.2.x to 3.x, for example, you must perform a staged upgrade (upgrade to the 2.x software and firmware before upgrading to 3.x). The device driver name has also changed from fio-driver (version 1.2.x) to iomemory-vsl (2.x and above).
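Before planning a staged upgrade it helps to check which driver generation is currently installed; a minimal sketch:

# old 1.2.x installations use the fio-driver package, 2.x and above use iomemory-vsl
rpm -qa | grep -E "fio-driver|iomemory-vsl"
# fio-status additionally reports the firmware level of the installed cards
fio-status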

The upgrade process from 2.x to 3.x will require the adapter to be formatted. Formatting will remove all existing data from the card and the data must be restored after the update completes. You can do this with the replacement instructions above. The firmware upgrade process as of version 3.x updates and modifies important hardware settings that are not compatible with 1.2.x or 2.x versions of software. Once updated, the card cannot be back-leveled to any previous versions of the software. Please see the "change history" documentation for a complete list of new features, enhancements, and fixes.

6.4.5.3 Replacing a failed legacy High IOPS card and "mandatory" update requirements

As the supply of legacy adapters diminishes from inventory, it becomes more likely that warranty replacement cards will transition to the newer versions of the High IOPS adapters. Replacement High IOPS cards may require firmware updates to support the new or existing cards in the server. Any situation

[13] Obtainable from http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5091557


where mixing the IBM High IOPS cards occurs, the minimum version of software supported by the latest generation of hardware prevails. A mandatory upgrade of software is required to support the latest generation of hardware with backward compatibility to legacy cards in the server.

In order to update your IBM High IOPS adapter's firmware, proceed with the following instructions:

1. Get the latest information at the IBM High IOPS software matrix [14]. Log on to the server as the System Administrator.

2. Download the latest supported driver zip file to a temporary directory on the server, e.g. /var/tmp/install. If you have a newer card than is supported by SAP HANA, please open an SAP OSS customer message immediately and ask for assistance. Build the kernel driver for the required driver version. The driver version depends on the firmware version and revision of the installed IBM High IOPS card.

Firmware version     Firmware revision     Driver version
<currently unknown>  43246                 1.2.7
5.0.6                101583                2.2.3
6.0.x                107004 (or higher)    3.1.1
7.1.x                109322 (or higher)    3.2.3

Table 5: IBM High IOPS Firmware / Driver dependencies

1 # cd /var/tmp/install
2 # unzip -d highiop_ssd-<version> -j ibm_dd_highiop_ssd-<x.y.z>_sles11_x86-64.zip
3 # cd highiop_ssd-<version>
4 # rpmbuild --rebuild iomemory-vsl-<driver version>.src.rpm
5 # rpm -ivh /usr/src/packages/RPMS/x86_64/iomemory-vsl-<kernel version>-0.2-default-<driver version>.x86_64.rpm
6 # rpm -ivh fio-firmware-highiops-<firmware version>.noarch.rpm
7 # rpm -ivh fio-util-<driver version>.x86_64.rpm

3. Install the IBM High IOPS SSD PCIe Adapter driver:
1 # modprobe iomemory-vsl

4. Check the status of the IBM High IOPS SSD PCIe drives:
1 # fio-status

5. If the firmware is still marked as "outdated", update to the latest version. Make sure that, if you have more than one IBM High IOPS card installed, there is no mix of firmware revisions. The IBM High IOPS driver does not support a mix of firmware levels. For driver versions starting with 2.2.3:

1 # fio-update-iodrive /usr/share/fio/firmware/highiops_<fw#>.fff

Do not downgrade existing firmware levels to older versions unless explicitly directed by IBM support!

Depending on the previous version and updated version of the IBM High IOPS cards, you may have to format these drives. The fio-update-iodrive command will tell you if you need to format the drives; in general a firmware update to and from firmware version 5.0.6 needs a reformat.

To format a drive use these three commands:
1 # fio-detach /dev/fctX
2 # fio-format /dev/fctX -f -s <format percentage>
3 # fio-attach /dev/fctX

[14] https://www.ibm.com/support/entry/myportal/docdisplay?lndocid=migr-5083174


and replace /dev/fctX with the correct device. The format percentage is 95% for appliance versions [1.6] and later and 87% for the previous versions.

If all drives need to be formatted, run these commands for each drive:

1 # fio-detach /dev/fctX
2 # fio-format /dev/fctX -f -s <format percentage>
3 # fio-attach /dev/fctX

Formatting with reserved capacity increases steady-state write performance and is therefore required for SAP HANA performance and stability reasons. [15]

6.5 Add new disk to GPFS

6.5.1 Create disk descriptor file

After the new disk has been discovered by Linux it needs to be added to the GPFS file system.

1 # /var/mmfs/config # cat disk.list.data.fs.repair
2 /dev/sdh:gpfsnode<x>::dataOnly:100<y>:nsddata<z>gpfs<x>:storagepool

Refer to the table GPFS Disk Descriptor Files in the Appendix for the correct values depending on the server model.

6.5.2 Create new disk in GPFS

Use the mmcrnsd command to create a new disk:

1 # /var/mmfs/config # mmcrnsd -F disk.list.data.fs.repair -v no

6.5.3 Add new disk to file system

After creating the new disk it has to be added to the file system

1 # /var/mmfs/config # mmadddisk sapmntdata -F disk.list.data.fs.repair -v no

The process of adding the new disk can take some time depending on the used capacity of the file system and the status of the replication.
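After mmadddisk has finished, a short check, sketched here, confirms that the new disk is usable:

# should report "All disks up and ready" once the add has completed
mmlsdisk sapmntdata -e
# verify the NSD-to-device mapping on this node
mmlsnsd -m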

[15] http://kb.fusionio.com/KB/a51/tuning-techniques-for-writes.aspx


7 Software Updates/Upgrades

7.1 Warning

Please be careful with updates of the software stack. Update the software and driver components only with a good reason, either because you are affected by a bug or have a security concern, and only after IBM or SAP support advised you to upgrade or after requesting approval from support via the SAP OSS ticket system on the queue BC-OP-LNX-IBM. Be defensive with updates, as updates may affect the proper operation of your SAP HANA appliance and the IBM System x SAP HANA development team does not test every released patch or update.

Before performing a rolling update (non-disruptive, one node at a time) in a cluster environment, make sure that your cluster is in good health and all server nodes and storage devices are running.

Warning: If the Linux kernel is updated, it is mandatory to update the Fusion-io driver kernel module and the GPFS portability layer kernel module. Otherwise the system will not work anymore!

7.2 Linux Kernel Update

7.2.1 Kernel Update Methods

There are multiple methods to update a SLES for SAP installation. Possible update sources include kernel RPMs copied onto the target server, a corporate-internal SLES update server/repository, or Novell's update server via the internet (requires registration of the installation). Possible methods include command line based tools like rpm -Uvh or CLI/X11 based GUI tools like SuSE's YaST2.

Please refer to Novell's official SLES documentation. A good starting point is the chapter "Installing or Removing Software" (Chapter 9 in SP1 & SP2 guides) in the SLES 11 Deployment Guides obtainable from https://www.suse.com/documentation/sles11/.

If you decide to update from RPM files, you need to update at least the following files:

• kernel-default-<kernel version>.x86_64.rpm

• kernel-default-base-<kernel version>.x86_64.rpm

• kernel-default-devel-<kernel version>.x86_64.rpm

• kernel-source-<kernel version>.x86_64.rpm

• kernel-syms-<kernel version>.x86_64.rpm

• kernel-trace-devel-<kernel version>.x86_64.rpm

• kernel-xen-devel-<kernel version>.x86_64.rpm

Updating using YaST is recommended over updating from files.
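If you nevertheless update from RPM files, a minimal sketch, assuming the downloaded kernel RPMs are in the current directory, could look like this:

# install/upgrade all required kernel packages in one transaction
rpm -Uvh kernel-default-*.rpm kernel-default-base-*.rpm kernel-default-devel-*.rpm \
    kernel-source-*.rpm kernel-syms-*.rpm kernel-trace-devel-*.rpm kernel-xen-devel-*.rpm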

7.2.2 Updating IBM Workload Optimized System x3950 X5

The System x3950 X5 contains IBM High IOPS SSD PCIe adapter cards which require an update of the driver to match the newest Linux kernel. Likewise, a rebuild of the GPFS portability layer is also necessary.


Step  Title                                        ✓
1     Stop SAP HANA
2     Stop GPFS
3     Unload the IBM High IOPS driver
4     Update Kernel Packages
5     Rebuild the new High IOPS driver
6     Load the High IOPS Driver
7     Build new GPFS portability layer
8     Update cluster and file system information
9     Restart GPFS, mount GPFS file systems
10    Check Status of GPFS, High IOPS
11    Start SAP HANA

Table 6: Update IBM High IOPS Driver Checklist

1. Stop SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal [16] or SAP Service Marketplace [17].

2. Unmount GPFS file systems, stop GPFS

1 # mmumount all -a
2 # mmshutdown -a

3. Ensure High IOPS driver is unloaded

1 # modprobe -r iomemory-vsl

4. Update Kernel Packages

Please update the kernel now using your preferred method.

5. Rebuild the High IOPS driver

Find out which version (e.g. 2.2.3.66-1.0) of the High IOPS driver software is currently installed:

1 # rpm -qa | grep iomemory-vsl | cut -d"-" -f6-

Then execute with this information (replace <ioversion> by the version you looked up):

1 # cd /var/tmp/install/highiop_ssd/sles11/
2 # rpmbuild --rebuild iomemory-vsl-<ioversion>.src.rpm

6. Install the High IOPS driver

Find out the kernel version:

1 uname -r

and install the updated driver package with this information:

# rpm -Uvh /usr/src/packages/RPMS/x86_64/iomemory-vsl-<kernel version>-<ioversion>.x86_64.rpm

7. Load the High IOPS driver

1 # modprobe iomemory-vsl
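Steps 5 to 7 can also be combined into one short shell sequence. This is only a sketch; it assumes the paths and naming conventions used above and that exactly one High IOPS driver version is installed:

IOVERSION=$(rpm -qa | grep iomemory-vsl | cut -d"-" -f6-)
cd /var/tmp/install/highiop_ssd/sles11/
rpmbuild --rebuild iomemory-vsl-${IOVERSION}.src.rpm
rpm -Uvh /usr/src/packages/RPMS/x86_64/iomemory-vsl-$(uname -r)-${IOVERSION}.x86_64.rpm
modprobe iomemory-vsl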



8. Ensure High IOPS devices exist. (Should return at least one High IOPS device.)

1 # ls /dev/fio?

9. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

10. Update cluster and file system information to current GPFS version

Do not perform this step until all nodes are updated to the same GPFS version level!

# mmchconfig release=LATEST
# mmstartup -a
# mmchfs <file system> -V full
# mmmount all -a

11. Check Status of GPFS and IBM High IOPS cards

# mmgetstate -a
# mmlsmount all -L
# fio-status

12. Start SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.

7.2.3 Updating IBM Workload Optimized System x3690 X5

Note: The System x3690 X5 does not contain any IBM High IOPS SSD PCIe adapter cards and therefore requires only a rebuild of the GPFS portability layer after a kernel update.

Step  Title                                           ✓
  1   Stop SAP HANA
  2   Unmount GPFS file systems, stop GPFS
  3   Update kernel packages
  4   Build new GPFS portability layer
  5   Restart GPFS, update & mount file system
  6   Check status of GPFS
  7   Start SAP HANA

Table 7: Upgrade GPFS Portability Layer Checklist

1. Stop SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.

2. Unmount GPFS file systems, stop GPFS

# mmumount all -a
# mmshutdown -a



3. Update Kernel Packages

Now update the kernel using your preferred method.

4. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

5. Restart GPFS, update cluster and file system information to current GPFS version

# mmchconfig release=LATEST
# mmstartup -a
# mmchfs sapmntdata -V full
# mmmount all -a

6. Check Status of GPFS

# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel

7. Start SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.

7.3 Updating High IOPS Drivers

Warning: An update of the IBM High IOPS driver may require upgrading the firmware of the IBM High IOPS card as well. Upgrading the firmware may require a reformat of the card which will destroy all stored data. Do not try to format the IBM High IOPS card in a running system unless advised by IBM support. See Section 6.4.5: Driver & Firmware Upgrade on page 65.

Note: Updating the IBM High IOPS driver requires a rebuild of the driver software. The same applies if the Linux kernel was upgraded.

Step  Title                                           ✓
  1   Stop SAP HANA
  2   Stop GPFS
  3   Unload the High IOPS driver
  4   Rebuild & install the new High IOPS driver
  5   Restart GPFS, mount GPFS file systems
  6   Check status of GPFS, High IOPS
  7   Start SAP HANA

Table 8: Upgrade IBM High IOPS Driver Checklist

1. Stop SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.



2. Unmount GPFS file systems, stop GPFS

# mmumount all -a
# mmshutdown -a

3. Ensure High IOPS driver is unloaded

1 # modprobe -r iomemory-vsl

4. Rebuild the new High IOPS driver

Execute (replace <newioversion> by the version of the new driver’s RPM):

# cd /var/tmp/install/highiop_ssd/sles11/
# rpmbuild --rebuild iomemory-vsl-<newioversion>.src.rpm

5. Install the High IOPS driver

Find out the kernel version:

1 uname -r

and install the new driver package with this information:

# rpm -Uvh /usr/src/packages/RPMS/x86_64/iomemory-vsl-<kernel version>-<newioversion>.x86_64.rpm

6. Load the High IOPS driver

1 # modprobe iomemory-vsl

7. Ensure High IOPS devices exist. (Should return at least one High IOPS device.)

1 # ls /dev/fio?

8. Start GPFS

# mmstartup -a
# mmmount sapmntdata /sapmnt -a

9. Check Status of GPFS

# mmgetstate -a
# mmlsmount all -L

10. Start SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.

7.4 Updating GPFS

7.4.1 Disruptive Cluster Update

Note: Upgrading GPFS requires a rebuild of the portability layer. The same applies if the Linux kernel was upgraded.


Step  Title                                           ✓
  1   Stop SAP HANA
  2   Unmount GPFS file systems, stop GPFS
  3   Upgrade to new GPFS version
  4   Build new GPFS portability layer
  5   Update cluster and file system information
  6   Restart GPFS, mount GPFS file systems
  7   Check status of GPFS, High IOPS
  8   Start SAP HANA

Table 9: Upgrade GPFS Portability Layer Checklist

1. Stop SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.

2. Unmount GPFS file systems, stop GPFS

# mmumount all -a
# mmshutdown -a

3. Upgrade to new GPFS version. This step may be skipped if only the portability layer needs to be re-compiled due to a Linux kernel update. (Replace <newgpfsversion> with the GPFS version number of the update.)

# rpm -Uvh gpfs.base-<newgpfsversion>.x86_64.update.rpm
# rpm -Uvh gpfs.docs-<newgpfsversion>.noarch.rpm
# rpm -Uvh gpfs.gpl-<newgpfsversion>.gpl.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-<newgpfsversion>.noarch.rpm

4. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

5. Update cluster and file system information to current GPFS version

# mmchconfig release=LATEST
# mmstartup -a
# mmchfs sapmntdata -V full
# mmmount all -a

6. Check Status of GPFS

# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel

7. Start SAP HANA as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.



7.4.2 Full Cluster Rolling Update

Warning: Contrary to the description in chapter 8 "Migration, coexistence and compatibility" of the official GPFS guide "Concepts, Planning, and Installation Guide", a rolling upgrade from GPFS 3.4 to GPFS 3.5 is not allowed for IBM SAP HANA appliance servers. The upgrade from GPFS 3.4 to 3.5 must be performed on all nodes at the same time, necessitating a complete cluster downtime. The procedure is described in the "GPFS: Concepts, Planning, and Installation Guide", chapter 8, section "Migrating to GPFS 3.5 from GPFS 3.3, GPFS 3.2, GPFS 3.1, or GPFS 2.3". A rolling upgrade within a GPFS release branch (e.g. 3.4.0-12 to 3.4.0-20) is possible and supported.

This update procedure is only necessary for updates that either require a server restart (such as Linux kernel updates) or require a restart of the GPFS server software on the affected nodes. Instead of shutting down the whole cluster and then applying the updates, you can perform a rolling update, which does not disturb the operation of the whole cluster. The idea is to update only one server at a time and, after that server is back online in the cluster, to proceed with the next node in the same way.

For updating the SAP HANA software in a SAP HANA cluster, please refer to the SAP HANA Technical Operations Manual. This can be done independently of other updates.

7.4.2.1 Rolling GPFS Upgrade per node procedure

To minimize downtime, please distribute the GPFS update package (GPFS-3.X.0.xx-x86_64-Linux.tar.gz) to all nodes and extract the tarball before starting. Check that the cluster is in good health (see Section 2: System Health Check).
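A possible way to distribute and extract the package is sketched below. The node names and the use of passwordless root SSH between the cluster nodes are assumptions; adapt them to your environment:

PKG=GPFS-3.X.0.xx-x86_64-Linux.tar.gz
for node in gpfsnode02 gpfsnode03 gpfsnode04; do
    scp /var/tmp/install/${PKG} ${node}:/var/tmp/install/
    ssh ${node} "cd /var/tmp/install && tar xzf ${PKG}"
done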

1. Check GPFS cluster health

Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are running with the command

# mmgetstate -a

and check that all nodes are active, then verify that all disks are active:

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all other disks are up. Warning: If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail.

2. Shutdown SAP HANA

Shut down SAP HANA and the sapstartsrv daemon via

1 # service sapinit stop

Verify that SAP HANA, sapstartsrv and any other process accessing /sapmnt are not running anymore:

1 # lsof /sapmnt

No processes should be found. If any processes are found, please retry stopping SAP HANA and any other process accessing /sapmnt.


3. Unmount the GPFS file system

Unmount locally the shared file system

1 # mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. If that happens, use

1 # lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.); close them and retry. Other nodes within the cluster can keep the shared file system mounted.

4. Shutdown GPFS

1 # mmshutdown

GPFS should unload its kernel modules during its shutdown, so check the output of this command.

5. Update GPFS Software

Change to the directory where you extracted the GPFS update package GPFS-3.X.0.xx-x86_64-Linux.tar.gz, where X and xx denote the desired target GPFS version. Execute the following commands

# rpm -Uvh gpfs.base-3.X.0-xx.x86_64.update.rpm
# rpm -Uvh gpfs.docs-3.X.0-xx.noarch.rpm
# rpm -Uvh gpfs.gpl-3.X.0-xx.gpl.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-3.X.0-xx.noarch.rpm

Afterwards the GPFS Linux kernel module must be recompiled:

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

6. Restart GPFS

1 # mmstartup

Verify that the node started up correctly

1 # mmgetstate

During the startup phase the node is shown in the state arbitrating; this changes to active when GPFS has completed startup.
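If you want to script this step, the following sketch waits until the local GPFS daemon reports the active state (the 10-second poll interval is an arbitrary choice):

while ! mmgetstate | grep -q " active"; do
    sleep 10
done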

7. Mount file systems

Mount the file system after starting GPFS:

1 # mmmount sapmntdata /sapmnt

8. (on the target node) Start SAP HANA

1 # service sapinit start

9. (on any node) Verify GPFS disks

Verify all GPFS disks are active again:


1 # mmlsdisk sapmntdata -e

If any disks are shown as down, restart them with the command

1 # mmchdisk sapmntdata start -a

and check the disk status again.

10. (on any node) GPFS Restripe

Start a restripe so that all data is replicated properly again

# mmrestripefs sapmntdata -r

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

11. Continue with the next node

After all nodes are updated you can update the GPFS cluster configuration and the GPFS "on disk format" (the data structures written to disk) to the newer version. Not all updates require these steps, but it is safe to perform them in any case. This update is non-disruptive and can be performed while the cluster is active.

1. Update the cluster configuration with the newest settings

1 # mmchconfig release=LATEST

2. Update the file system’s on disk format to activate new functionality

1 # mmchfs sapmntdata -V full

Notice that a successful upgrade of the GPFS on disk format to a newer version will make a downgrade to previous GPFS versions impossible. You can verify the minimum needed GPFS version with the command

1 # mmlsfs sapmntdata -V

7.4.2.2 General per node update procedure

This is the generic version for any kind of update which requires a system restart. Only steps 5 and 6 differ from the previous instructions.

1. (on the target node) Check GPFS cluster health

Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are running with the command

# mmgetstate -a

and check that all nodes are active, then verify that all disks are active

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all other disks are up. Warning: If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail.


2. (on the target node) Shutdown SAP HANA

Shut down SAP HANA and the sapstartsrv daemon via

# service sapinit stop

Verify that SAP HANA and sapstartsrv are not running anymore:

# ps ax | grep sapstart
# ps ax | grep hdb

No processes should be found. If any processes are found, please retry stopping SAP HANA.

3. (on the target node) Unmount the GPFS file system

Locally unmount the shared file system

# mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. If that happens, use

# lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.); close them and retry. Other nodes within the cluster can keep the shared file system mounted.

4. Shutdown GPFS

# mmshutdown

5. Perform Upgrades

Now perform the necessary updates.

6. Restart the system

Restart the server if necessary. GPFS & SAP HANA should start automatically during reboot. If you rebooted the server, skip step 7.

7. Restart GPFS

If you did not restart the whole server in step 6, start GPFS

# mmstartup

8. Mount the file system if not already mounted.

You may mount the file system after starting GPFS

# mmmount sapmntdata /sapmnt

9. Start SAP HANA

# service sapinit start

10. (on any node) Verify GPFS disks

Verify all GPFS disks are active again

# mmlsdisk sapmntdata -e

If any disks are down, restart them with the command

# mmchdisk sapmntdata start -a


and check the disk status again.

11. (on any node) GPFS Restripe

Start a restripe so that all data is replicated properly again

# mmrestripefs sapmntdata -r

Warning: Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter!

12. Continue with the next node

7.5 SLES for SAP 11 SP1 Upgrade to SLES for SAP 11 SP2

7.5.1 Prerequisites

You are running an IBM Systems Solution for SAP HANA appliance system and want to upgrade the SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) 11 Service Pack 1 (SP1) operating system to SLES for SAP 11 Service Pack 2 (SP2). If your appliance was installed with an installer image older than 1.4.28-1, the SP2 upgrade is not possible and you need to reinstall the system in order to upgrade to SLES for SAP 11 SP2. You can either install SP1 with a newer "Non OS content for IBM Systems solution for SAP HANA appliance additional software stack" DVD or alternatively install SP2 directly with the Non OS content DVD version 1.5.53-5 or later (FRU part number 46W8234).

You have already upgraded your system to GPFS 3.5. If your system is still using GPFS 3.4, you need to upgrade GPFS before upgrading SLES, see Section 7.4: Updating GPFS on page 72.

For the upgrade the following DVDs or images are needed:

• SUSE Linux Enterprise Server for SAP Applications 11 Service Pack 1

• SUSE Linux Enterprise Server for SAP Applications 11 Service Pack 2

• "Non OS content for IBM Systems solution for SAP HANA appliance additional software stack" (SAP HANA FRU Pkg version 1.5.53-5 or later, FRU part number 46W8234)

Other ways of providing the images to the server (e.g. locally, FTP, SFTP, etc.) are possible but not explained as part of this guide.

7.5.2 Upgrade Notes

For the upgrade a maintenance downtime is needed with at least one reboot of the servers. If you have installed software that was not part of the initial installation from IBM, please make sure that this software is compatible with SLES for SAP 11 SP2 and Linux kernel 3.0.

Note: Testing in a non-productive environment before upgrading productive systems is highly recommended. As always, backing up the system before performing changes is also highly recommended.


7.5.3 IBM High IOPS Card Notes (x3950 only)

Note: The following paragraphs apply only to x3950-based models; x3690-based models are not affected.

In the SP1 to SP2 upgrade scenario, multiple IBM High IOPS driver versions are possible, while a fresh installation will always use driver version 3.2.3. In this scenario, in addition to driver 3.2.3, driver version 2.3.10 is supported in order to allow the upgrade from driver version 2.2.1 without a firmware upgrade, i.e. without a reformat of the IBM High IOPS card. A HANA cluster with mixed IBM High IOPS driver versions is also supported, but each node can only use one driver. The choice of the best driver version also depends on the currently used driver version. You can check the currently used version with the command fio-status. Please refer to the following table for possible upgrade paths:

SP1 → SP2 upgrade to:   2.2.1   2.3.10   3.1.1   3.2.3
Current 2.2.1             ✗       ✓        ✗       ✗
Current 2.3.10            ✗       ✓        ✗       ✗
Current 3.1.1             ✗       ✗        ✗       ✓ (FW upgrade)
Current 3.2.3             ✗       ✗        ✗       ✓

Table 10: IBM High IOPS Non-destructive Upgrade Paths

If you are using driver version 3.1.1 you need to upgrade to driver 3.2.3. Upgrading to 3.2.3 requires updating the firmware from version 6.x.x, used by 3.1.1, to version 7.x.x, required by driver 3.2.3. After the firmware upgrade the server can no longer be installed with any SLES for SAP 11 SP1 based appliance image due to driver incompatibility. Downgrading the firmware to a lower firmware version is not supported.

Please note that the IBM High IOPS driver 2.3.10 was never part of an appliance version and needs to be downloaded separately for free from http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5083174.

7.5.4 Rolling Upgrade

In a cluster environment a rolling upgrade (one node at a time) is possible as long as you are running an HA environment with GPFS 3.5 and at least one standby node. A rolling upgrade from GPFS 3.4 to GPFS 3.5 is not supported, and a rolling upgrade from GPFS 3.4 to 3.5 during the SP2 upgrade is also not supported. See Section 7.4: Updating GPFS for information on the GPFS upgrade. When a rolling upgrade is not possible, you can still perform a non-rolling upgrade, taking all nodes down for maintenance.

7.5.5 Upgrade Overview

The following tested and recommended upgrade steps require one reboot. The tasks are mostly the same for cluster and single node systems; if there is an operational difference between these two types, it will be noted. Figure 7: SLES OS Upgrade Flow Chart on page 80 shows the flow of the upgrade.

7.5.6 Upgrade Steps

7.5.6.1 Shutting down services

When doing a rolling upgrade or the upgrade of a single node, do this only on the server being updated.


Figure 7: SLES OS Upgrade Flow Chart


When updating all nodes in a cluster, run these commands on all nodes: perform step 1 on all nodes, then step 2 on all nodes, and then step 3 on all nodes.

1. Shutdown HANA

Shut down HANA and all other SAP software running in the whole cluster or on the single node cleanly. Log in as root on each node and execute

1 # service sapinit stop

Older versions of the appliance may not have this script, so please stop HANA and other SAP software manually. Make sure no process has files open on /sapmnt; you can test that with the command:

1 # lsof /sapmnt

2. Unmount the GPFS file system

Unmount the GPFS file system /sapmnt by issuing

1 # mmumount sapmntdata

3. Shutdown GPFS

1 # mmshutdown -a

to shut down the GPFS software on all cluster nodes.

7.5.6.2 Upgrade of IBM High IOPS Drivers

Note: This section applies only to x3950-based models; skip it for x3690 models. Also skip it in case you can keep the current driver (see the notes in Section 7.5.3).

Note: For L-sized or larger server models, please take care of updating the IBM High IOPS drivers and firmware before performing the SLES upgrade. Due to a bug in the High IOPS software, updating the firmware afterwards is more complicated. If you did not keep this sequence, you must compile and load the IBM High IOPS drivers (any version) before you are able to update the firmware or to use fio-status.

Note: Make sure GPFS is stopped (mmgetstate).

1. Unload IBM High IOPS kernel module

1 # rmmod iomemory-vsl

2. Uninstall old files

Remove the utils and firmware packages:

1 # rpm -e fio-util fio-firmware-highiops

Find the iomemory rpm

1 # rpm -qa | grep iomemory

This will find a package named like iomemory-vsl-3.0.13-0.27-default-3.1.1.172-1.0. Remove it


1 # rpm -e <found package>

3. Install new packages

Copy the new driver package to the server (/var/tmp/install recommended) and unzip the archive, e.g.

# unzip ibm_dd_highiop_ssd-3.1.1_sles11_x86-64.zip

This should extract a directory containing files and subdirectories. Change to the extracted directory and install the packages:

1 # rpm -ivh Firmware/fio-firmware-highiops*rpm Utilities/fio-util*rpm

4. Upgrade firmware

When choosing to upgrade from driver version 3.1.1 to 3.2.3, new firmware must be installed to the IBM High IOPS drives. Do this upgrade now:

1 # fio-update-iodrive /usr/share/fio/firmware/highiops_*.fff

A reboot is not needed at this point; the system will be rebooted after the SP2 upgrade.

7.5.6.3 Upgrading SLES for SAP OS

1. Install the "Online Migration Tool"

Mount both the SLES for SAP SP1 DVD and the SLES for SAP SP2 DVD via the IMM or by loading the physical DVD into the integrated DVD drive. To use the IMM, log in to the IMM web interface in your local browser, navigate to "Remote Control" and select "Start Remote Control in Multi-User-Mode". In the "Video Viewer", select Tools->Launch Virtual Media. Then "Add Image", choose the SLES for SAP 11 SP1 media and repeat that with the SLES for SAP 11 SP2 DVD. When both images appear in the list, select the "Map" checkbox next to both images and then click on "Mount Selected". The images are now available to the server.

2. Install the yast2-wagon package:

# zypper install yast2-wagon

3. Start the "Online Migration Tool"

# yast2 wagon

The X11 version is recommended; either start the tool in a KVM session via the IMM or use a local X11 server and remote X11.

The YaST module will guide you through the upgrade. In the dialog "Update method" you may choose "Customer Center" if you have registered your SLES installation and have access to the SLES update site; otherwise choose "Custom URL" for installing from either a physical SLES for SAP 11 SP2 DVD media or from a SLES for SAP 11 SP2 DVD image, which is recommended and described in the following steps.

4. Create SLES for SAP 11 SP2 Software Repository

In the YaST "Online Migration" tool, click on "Add", then choose "DVD". After adding the SLES for SAP 11 SP2 DVD as a new repository, click on "OK".


5. Start SP2 Upgrade

On the next screen an overview of the upgrade is shown. Review the options or simply click on "Next" and then click on "Start Update".

The update will take a while.

The packages lsi-megaraid_sas-kmp-default-00.00.04.XXX.x86_64 and sap-media-changer can be removed during the upgrade. They are not required with SLES for SAP 11 SP2.

6. Finish the upgrade by confirming the remaining wizard screens. After closing the "Online Migration" assistant, reboot the machine by issuing the command:

# reboot

At this point the operating system has been upgraded to SLES for SAP 11 SP2. GPFS and HANA are not operational in this state until the Linux kernel modules have been recompiled for the new Linux kernel.

It is recommended to install SLES for SAP 11 SP2 updates like Linux kernel updates now.

7.5.6.4 Recompile Linux Kernel Modules

GPFS and the IBM High IOPS cards need self-compiled (so-called "out-of-tree") Linux kernel modules to operate properly. If you have an x3690-based model you do not need to recompile the IBM High IOPS driver; skip steps 1 to 3.

1. Recompile IBM High IOPS kernel module

Navigate to the previously unzipped IBM High IOPS driver directory and rebuild the kernel module package

# rpmbuild --rebuild Software\ Source/iomemory-vsl-*.src.rpm

In the last few output lines there are lines starting with "Wrote:"; the first of them contains the path to the generated kernel module package, e.g. /usr/src/packages/RPMS/x86_64/iomemory-vsl-3.0.13-0.27-default-3.1.1.172-1.0.x86_64.rpm.

2. Install Kernel module

Install the generated package from the last step:

# rpm -ivh <path to rpm>

e.g.

# rpm -ivh /usr/src/packages/RPMS/x86_64/iomemory-vsl-3.0.13-0.27-default-3.1.1.172-1.0.x86_64.rpm

3. Load Kernel module

# modprobe iomemory-vsl

Verify that the driver is loaded and the drives are attached with the command

# fio-status

4. Compile GPFS kernel module

Execute the following commands

# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages


7.5.6.5 Adapting Configuration

For SLES for SAP 11 SP2 some configuration settings changed.

1. Change /etc/sysctl.conf

Add the following lines to /etc/sysctl.conf if not already set:

net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 262144 8388608
net.ipv4.tcp_wmem = 4096 262144 8388608
net.core.netdev_max_backlog=2500

Listing 1: /etc/sysctl.conf changes for SLES for SAP 11 SP2

2. Change /etc/rc.d/boot.local

Replace the file contents below the starting comments with the following lines:

QUEUESIZE=256000
for i in /sys/block/sd* ; do
    if [ -d $i ]; then
        echo $QUEUESIZE > $i/queue/nr_requests
    fi
done
# disable CPU power saving, harms HANA performance
bios_vendor=$(/usr/sbin/dmidecode -s bios-vendor)
# Phoenix Technologies LTD means we are running in a VM and governors are not available
if [ $? -eq 0 -a ! -z "${bios_vendor}" -a "${bios_vendor}" != "Phoenix Technologies LTD" ]; then
    /sbin/modprobe acpi_cpufreq
    for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    do
        echo performance > $i
    done
fi
# SLES 11 SP2 needs huge pages disabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled

Listing 2: /etc/rc.d/boot.local changes for SLES for SAP 11 SP2

3. Change GPFS settings

Run the following command on any node

1 # mmchconfig maxMBpS=2048,readReplicaPolicy=local

4. Activate changes

To activate the changes without reboot, execute the following lines

# sysctl -p
# /etc/rc.d/boot.local

7.5.6.6 Start HANA

Start GPFS and HANA by either rebooting the machine (recommended) or starting the daemons manually:

1. Start GPFS

1 # mmstartup

Verify that all disks are up and the file system is mounted:

1 # mmlsdisk sapmntdata -e


2. Start HANA

1 # service sapinit start

7.6 SAP HANA

Warning: Make sure that the packages listed in Appendix A.2.5: FAQ #5: Missing RPMs on page 87 are installed on your appliance. An upgrade may fail without them.

Please refer to the official SAP HANA documentation for further steps.

8 TSM Backups for HANA

IBM offers the IBM Tivoli Storage Manager (TSM) for SAP ERP which also supports SAP HANA. Please refer to http://www-03.ibm.com/software/products/us/en/tivostormanaforenteresoplan/ and contact your IBM sales representative for more information.


Appendices

A Support Script Troubleshooting

A.1 Check Script Usage

See section 2.1: Overall System status on page 5.

A.2 FAQ

A.2.1 FAQ #1: SAP HANA Memory Limits

Problem: If left unconfigured, each installed and running HANA instance may use up to 90% of the system's memory. If multiple unconfigured or misconfigured HANA systems are running on the same machine(s), "Out of Memory" situations may occur. In this case the so-called "OOM killer" of Linux is triggered, which will terminate running processes at random and in most cases will kill SAP HANA or GPFS first, leading to service interruption. An unconfigured HANA system is a system lacking a global_allocation_limit setting in the HANA system's global.ini file. Misconfigured SAP HANA systems are multiple systems running at the same time with a combined memory limit over 90% of the physically installed memory.

Solution: Please configure the global allocation limit for all systems running at the same time. This can be done by setting the global_allocation_limit parameter in the systems' global.ini configuration files. Please calculate the combined memory allocation for HANA so that at least 25GB are free for other programs. Please use only the physically installed memory for your calculation.

More information on the parameter global_allocation_limit can be found in the "HANA Administration Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described there.
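As an illustration only, the limit is set in the HANA system's global.ini. The section name and the fact that the value is given in MB should be verified against the HANA Administration Guide for your revision, and the value below (450 GB for a node with 512 GB RAM) is just an assumed example:

[memorymanager]
global_allocation_limit = 460800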

A.2.2 FAQ #2: GPFS parameter readReplicaPolicy

Problem: Older cluster installations do not have the GPFS parameter "readReplicaPolicy" set to "local", which may improve performance in certain cases. Newer cluster installations have this value set, and single nodes are not affected by this parameter at all. It is recommended to configure this value.

Solution: Execute the following command on any cluster node at any time:

# mmchconfig readReplicaPolicy=local

This can be done during normal operation; the change becomes effective immediately for the whole GPFS cluster and is persistent over reboots.

A.2.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines

Problem: For a general description of the SAP HANA memory limit see Appendix A.2.1: FAQ #1: SAP HANA Memory Limits on page 86. XS sized servers have only 128GB RAM installed, of which even a single SAP HANA system will use up to 90%, equalling 115GB, if no lower memory limit is configured. This leaves too little memory for other processes, which may trigger Out-Of-Memory situations causing crashes.


Solution: Please configure the global allocation limit for the installed SAP HANA system to 100GB or less. If multiple systems are running at the same time, please calculate the combined memory allocation for HANA so that at least 25GB are free for other programs. Please use only the physically installed memory for your calculation.

More information on the parameter global_allocation_limit can be found in the "HANA Administration Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described there.

A.2.4 FAQ #4: Overlapping NSDs

Problem: Under some rare conditions single node SSD or XS/S gen 2 models may be installed with overlapping NSDs. Overlapping means that the whole drive (e.g. /dev/sdb) as well as a partition on the same device (e.g. /dev/sdb2) may be configured as NSDs in GPFS. As GPFS is writing data on both NSDs, each NSD will overwrite and corrupt data on the other NSD. At some point the whole-device NSD will overwrite the partition table, the partition NSD is lost, and GPFS will fail. This is the most common situation where the problem will be noticed.

Consider any data stored in /sapmnt to be corrupted even if the file system check finds no errors.

Solution: The only solution is to reinstall the appliance from scratch. To prevent installing with the same error again, the single node installation must be completed in phase 2 of the guided installation. Do not deselect "Single Node Installation".
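A possible way to check whether a system is affected is to list the device behind every NSD and look for a whole disk and one of its partitions appearing at the same time (e.g. /dev/sdb and /dev/sdb2):

# mmlsnsd -m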

A.2.5 FAQ #5: Missing RPMs

Problem: An upgrade of SAP HANA or another software component of the appliance, or the installation of HANA Lifecycle Manager (HLM), fails because of missing dependencies.

Solution: Ensure that the packages listed below are installed on your appliance. Missing packages can be installed from the SLES for SAP DVD shipped with your appliance or from a SUSE update server.

• libuuid

• libssh2-1 – This package is required for HANA Lifecycle Manager (HLM) and SAP HANA SPS06 (and later). It was not part of the standard installation until non-OS component DVD 1.5.53-5.

A.2.6 FAQ #6: CPU Governor set to ondemand

Problem: Linux uses a power-saving technology called "CPU governors" to control CPU throttling and power consumption. By default Linux uses the governor "ondemand", which dynamically throttles CPUs up and down depending on CPU load. SAP advises to use the governor "performance", as the ondemand governor will impact HANA performance due to too slow CPU upscaling.

Since appliance version 1.5.53-5 (or simply SLES4SAP 11 SP2 based appliances) we changed the CPU governor to performance. When following the upgrade instructions in Section 7.5: SLES for SAP 11 SP1 Upgrade to SLES for SAP 11 SP2 on page 78 you will also change the governor setting. If you are still running SLES4SAP 11 SP1 based appliances, you may also change this setting to trade power saving for performance. This performance boost was not quantified by the development team.

Solution: On all nodes append the following lines to the file /etc/rc.d/boot.local:

bios_vendor=$(/usr/sbin/dmidecode -s bios-vendor)
# Phoenix Technologies LTD means we are running in a VM and governors are not available
if [ $? -eq 0 -a ! -z "${bios_vendor}" -a "${bios_vendor}" != "Phoenix Technologies LTD" ]; then
    /sbin/modprobe acpi_cpufreq
    for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    do
        echo performance > $i
    done
fi

The setting will take effect on the next reboot. You can also safely change the governor settings immediately by executing the same lines in a shell. Copy & paste all the lines at once, or type them one by one.


B GPFS Disk Descriptor Files

GPFS 3.5 introduced a new disk descriptor format called stanzas. The old disk descriptor format is deprecated since GPFS 3.5. The appliance introduced the new format with appliance version 1.5.53-5; before that version the old disk descriptor format is used. Systems upgraded from SLES4SAP 11 SP1 to SLES4SAP 11 SP2 may also use the old format. Both formats may be used interchangeably, but it is recommended to use the new format.

B.1 Old Disk Descriptor Format

The following tables show the current NSD definitions as they would be generated on a node named gpfsnode01. When using these tables, take care to adapt the hostnames and the failure group number (1001) of each line, and choose the right table for single node and cluster installations.

Content of /var/mmfs/config/disk.list.data.gpfsnode01 per model:

7147-H1X, 7147-H2X:
  /dev/sda3:gpfsnode01::dataOnly:1001:data01node01:hddpool
  /dev/sdb:gpfsnode01::dataAndMetadata:1001:data02node01:system
  /dev/sdc:gpfsnode01::dataAndMetadata:1001:data03node01:system

7147-H3X, 7147-HAX, 7147-HBX:
  /dev/sda2:gpfsnode01::dataAndMetadata:1001:data01node01:system
  /dev/sdb2:gpfsnode01::dataAndMetadata:1001:data02node01:system

7143-H1X, 7143-HAX:
  /dev/sda3:gpfsnode01::dataOnly:1001:data01node01:hddpool
  /dev/fioa:gpfsnode01::dataAndMetadata:1001:data02node01:system
  /dev/fiob:gpfsnode01::dataAndMetadata:1001:data03node01:system

7143-H2X, 7143-HBX:
  /dev/sda3:gpfsnode01::dataOnly:1001:data01node01:hddpool
  /dev/fioa:gpfsnode01::dataAndMetadata:1001:data02node01:system
  /dev/fiob:gpfsnode01::dataAndMetadata:1001:data03node01:system

(7143-H2X + 7143-H3X), (7143-HBX + 7143-HCX):
  /dev/sda3:gpfsnode01::dataOnly:1001:data01node01:hddpool
  /dev/sdb:gpfsnode01::dataOnly:1001:data02node01:hddpool
  /dev/fioa:gpfsnode01::dataAndMetadata:1001:data02node01:system
  /dev/fiob:gpfsnode01::dataAndMetadata:1001:data03node01:system
  /dev/fioc:gpfsnode01::dataAndMetadata:1001:data04node01:system
  /dev/fiod:gpfsnode01::dataAndMetadata:1001:data05node01:system

Table 11: GPFS Single Node Disk Descriptor Template for SAP HANA Data

Please note that the number of Fusion-io drives may be different in your server. See Section 6.4.1: General Information on IBM High IOPS drives (Fusion-io) on page 62 for an explanation. Omit lines with non-existing /dev/fio* devices.


Content of /var/mmfs/config/disk.list.data.gpfsnodeXX per model:

7147-H3X, 7147-HBX:
  /dev/sdb:gpfsnodeXX::dataAndMetadata:10XX:data01nodeXX:system
  /dev/sdc:gpfsnodeXX::dataAndMetadata:10XX:data02nodeXX:system
  /dev/sdd:gpfsnodeXX::dataAndMetadata:10XX:data03nodeXX:system
  /dev/sde:gpfsnodeXX::dataAndMetadata:10XX:data04nodeXX:system
  /dev/sdf:gpfsnodeXX::dataAndMetadata:10XX:data05nodeXX:system
  /dev/sdg:gpfsnodeXX::dataAndMetadata:10XX:data06nodeXX:system
  /dev/sdh:gpfsnodeXX::dataAndMetadata:10XX:data07nodeXX:system
  /dev/sdi:gpfsnodeXX::dataAndMetadata:10XX:data08nodeXX:system
  /dev/sdj:gpfsnodeXX::dataAndMetadata:10XX:data09nodeXX:system
  /dev/sdk:gpfsnodeXX::dataAndMetadata:10XX:data10nodeXX:system
  /dev/sdl:gpfsnodeXX::dataAndMetadata:10XX:data11nodeXX:system
  /dev/sdm:gpfsnodeXX::dataAndMetadata:10XX:data12nodeXX:system
  /dev/sdn:gpfsnodeXX::dataAndMetadata:10XX:data13nodeXX:system
  /dev/sdo:gpfsnodeXX::dataAndMetadata:10XX:data14nodeXX:system

7143-H2X, 7143-HBX:
  /dev/sdb:gpfsnodeXX::dataOnly:10XX:data01nodeXX:hddpool
  /dev/sdc:gpfsnodeXX::dataOnly:10XX:data02nodeXX:hddpool
  /dev/sdd:gpfsnodeXX::dataOnly:10XX:data03nodeXX:hddpool
  /dev/sde:gpfsnodeXX::dataOnly:10XX:data04nodeXX:hddpool
  /dev/sdf:gpfsnodeXX::dataOnly:10XX:data05nodeXX:hddpool
  /dev/sdg:gpfsnodeXX::dataOnly:10XX:data06nodeXX:hddpool
  /dev/fioa:gpfsnodeXX::dataAndMetadata:10XX:data06nodeXX:system
  /dev/fiob:gpfsnodeXX::dataAndMetadata:10XX:data07nodeXX:system

(7143-H2X + 7143-H3X), (7143-HBX + 7143-HCX):
  /dev/sdb:gpfsnodeXX::dataOnly:10XX:data01nodeXX:hddpool
  /dev/sdc:gpfsnodeXX::dataOnly:10XX:data02nodeXX:hddpool
  /dev/sdd:gpfsnodeXX::dataOnly:10XX:data03nodeXX:hddpool
  /dev/sde:gpfsnodeXX::dataOnly:10XX:data04nodeXX:hddpool
  /dev/sdf:gpfsnodeXX::dataOnly:10XX:data05nodeXX:hddpool
  /dev/sdg:gpfsnodeXX::dataOnly:10XX:data06nodeXX:hddpool
  /dev/sdh:gpfsnodeXX::dataOnly:10XX:data07nodeXX:hddpool
  /dev/sdi:gpfsnodeXX::dataOnly:10XX:data08nodeXX:hddpool
  /dev/sdj:gpfsnodeXX::dataOnly:10XX:data09nodeXX:hddpool
  /dev/sdk:gpfsnodeXX::dataOnly:10XX:data10nodeXX:hddpool
  /dev/sdl:gpfsnodeXX::dataOnly:10XX:data11nodeXX:hddpool
  /dev/sdm:gpfsnodeXX::dataOnly:10XX:data12nodeXX:hddpool
  /dev/sdn:gpfsnodeXX::dataOnly:10XX:data13nodeXX:hddpool
  /dev/sdo:gpfsnodeXX::dataOnly:10XX:data14nodeXX:hddpool
  /dev/fioa:gpfsnodeXX::dataAndMetadata:10XX:data15nodeXX:system
  /dev/fiob:gpfsnodeXX::dataAndMetadata:10XX:data16nodeXX:system
  /dev/fioc:gpfsnodeXX::dataAndMetadata:10XX:data17nodeXX:system
  /dev/fiod:gpfsnodeXX::dataAndMetadata:10XX:data18nodeXX:system

Table 12: GPFS Cluster Node Disk Descriptor Template for GPFS file system

Please note that the number of Fusion-io drives may be different in your server. See Section 6.4.1: General Information on IBM High IOPS drives (Fusion-io) on page 62 for an explanation. Omit lines with non-existing /dev/fio* devices.


B.2 New Disk Descriptor Format (Stanzas)

The following listings show the current NSD definitions as they would be generated on a node named gpfsnode01. When using these templates, take care to adapt the hostnames and the failure group number (1001) of each line, and choose the right one for single node and cluster installations.

B.2.1 GPFS Single Node Stanza File Template for SAP HANA Data

• 7147-H1X, 7147-H2X. Content /var/mmfs/config/disk.list.data.gpfsnode01:

%nsd: device=/dev/sda3
  nsd=data01node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=system
%nsd: device=/dev/sdb
  nsd=data02node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=hddpool
%nsd: device=/dev/sdc
  nsd=data03node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=hddpool
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1
%pool:
  pool=hddpool
  blockSize=1M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

• 7147-H3X, 7147-HAX, 7147-HBX. Content /var/mmfs/config/disk.list.data.gpfsnode01:

%nsd: device=/dev/sda2
  nsd=data01node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/sdb2
  nsd=data02node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

• 7143-H1X, 7143-HAX. Content /var/mmfs/config/disk.list.data.gpfsnode01:

%nsd: device=/dev/fioa
  nsd=MDdata01node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fiob
  nsd=MDdata02node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/sda3
  nsd=data05node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1
%pool:
  pool=hddpool
  blockSize=1M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

• 7143-H2X, 7143-HBX. Content /var/mmfs/config/disk.list.data.gpfsnode01:

%nsd: device=/dev/fioa
  nsd=MDdata01node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fiob
  nsd=MDdata02node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/sda3
  nsd=data05node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1
%pool:
  pool=hddpool
  blockSize=1M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

• (7143-H2X + 7143-H3X), (7143-HBX + 7143-HCX). Content /var/mmfs/config/disk.list.data.gpfsnode01:

%nsd: device=/dev/fioa
  nsd=MDdata01node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fiob
  nsd=MDdata02node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fioc
  nsd=MDdata03node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fiod
  nsd=MDdata04node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/sda3
  nsd=data05node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%nsd: device=/dev/sdb
  nsd=data06node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%nsd: device=/dev/sdc
  nsd=data07node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1
%pool:
  pool=hddpool
  blockSize=1M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

• 7143-HDX. Content /var/mmfs/config/disk.list.data.gpfsnode01:

%nsd: device=/dev/fioa
  nsd=MDdata01node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fiob
  nsd=MDdata02node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fioc
  nsd=MDdata03node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fiod
  nsd=MDdata04node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fioe
  nsd=MDdata05node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/fiof
  nsd=MDdata06node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1001
  pool=system
%nsd: device=/dev/sda3
  nsd=data07node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%nsd: device=/dev/sdb
  nsd=data08node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%nsd: device=/dev/sdc
  nsd=data09node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data09node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%nsd: device=/dev/sdf
  nsd=data10node01
  servers=gpfsnode01
  usage=dataOnly
  failureGroup=1001
  pool=hddpool
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1
%pool:
  pool=hddpool
  blockSize=1M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

B.2.2 GPFS Cluster Node Stanza Template for GPFS file system

• 7147-H3X, 7147-HBX. Content /var/mmfs/config/disk.list.data.gpfsnodeXX:

%nsd: device=/dev/sdb
  nsd=data01nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdc
  nsd=data02nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdd
  nsd=data03nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sde
  nsd=data04nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdf
  nsd=data05nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdg
  nsd=data06nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdh
  nsd=data07nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdi
  nsd=data08nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdj
  nsd=data09nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdk
  nsd=data10nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdl
  nsd=data11nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdm
  nsd=data12nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdn
  nsd=data13nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdo
  nsd=data14nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

• 7143-H2X, 7143-HBX. Content /var/mmfs/config/disk.list.data.gpfsnodeXX:

%nsd: device=/dev/fioa
  nsd=MDdata01nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/fiob
  nsd=MDdata02nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdb
  nsd=data03nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdc
  nsd=data04nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data05nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sde
  nsd=data06nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdf
  nsd=data07nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdg
  nsd=data08nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1
%pool:
  pool=hddpool
  blockSize=1M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

• (7143-H2X + 7143-H3X), (7143-HBX + 7143-HCX). Content /var/mmfs/config/disk.list.data.gpfsnodeXX:

%nsd: device=/dev/fioa
  nsd=MDdata01nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/fiob
  nsd=MDdata02nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/fioc
  nsd=MDdata03nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/fiod
  nsd=MDdata04nodenn
  servers=gpfsnodenn
  usage=dataAndMetadata
  failureGroup=10nn
  pool=system
%nsd: device=/dev/sdb
  nsd=data05nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdc
  nsd=data06nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data07nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sde
  nsd=data08nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdf
  nsd=data09nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdg
  nsd=data10nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdh
  nsd=data11nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdi
  nsd=data12nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdj
  nsd=data13nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdk
  nsd=data14nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdl
  nsd=data15nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdm
  nsd=data16nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdn
  nsd=data17nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%nsd: device=/dev/sdo
  nsd=data18nodenn
  servers=gpfsnodenn
  usage=dataOnly
  failureGroup=10nn
  pool=hddpool
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1
%pool:
  pool=hddpool
  blockSize=1M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

Please note that the number of Fusion-io drives may be different in your server. See Section 6.4.1: General Information on IBM High IOPS drives (Fusion-io) on page 62 for an explanation. Omit lines with non-existing /dev/fio* devices.


C Topology Vectors (GPFS 3.5 failure groups)

This is currently valid only for DR-enabled clusters; for standard HA-enabled clusters use the plain single-number failure groups as described in the instructions above.

With GPFS 3.5 TL2 (the base version for DR) a new failure group (FG) format called "topology vectors" was introduced, which is used for the DR solution. A more detailed description of topology vectors can be found in the GPFS 3.5 Advanced Administration Guide, chapter "GPFS File Placement Optimizer".

In short, the topology vector is a replacement for the old FGs, storing more information on the infrastructure of the cluster. Topology vectors are used for NSDs, but as the same topology vector is used for all disks of a server node, it will be explained in the context of a server node.

In a standard DR cluster setup all nodes are grouped evenly into four FGs (five when using the tiebreaker node), with two FGs on every site.

A topology vector consists of three numbers separated by commas. The first of the three numbers defines the site: Site A is 1, Site B is 2 and the optional tiebreaker is 3. The second number is 0 (zero) for the first FG on that site, and 1 for the second FG. The third number enumerates the nodes of the group, counting in each FG from 1.

In a standard eight node DR-cluster (4 nodes per site) we would have these topology vectors:

Site     Failure Group                       Topology Vector   Node
Site A   Failure group 1 (1,0,x)             1,0,1             gpfsnode01 / hananode01
                                             1,0,2             gpfsnode02 / hananode02
         Failure group 2 (2,0,x)             2,0,1             gpfsnode03 / hananode03
                                             2,0,2             gpfsnode04 / hananode04
Site B   Failure group 3 (1,1,x)             1,1,1             gpfsnode05 / hananode01
                                             1,1,2             gpfsnode06 / hananode02
         Failure group 4 (2,1,x)             2,1,1             gpfsnode07 / hananode03
                                             2,1,2             gpfsnode08 / hananode04
Site C   Failure group 5 (tie-breaker) (3,0,x)   3,0,1         gpfsnode99

Table 13: Topology vectors in an 8-node DR cluster
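As an illustrative sketch only (device and NSD names are placeholders following the naming scheme of the stanza listing in the previous appendix), an NSD stanza for a data disk of gpfsnode05 would then carry the topology vector from Table 13 in its failureGroup field instead of a plain single number:

%nsd: device=/dev/sdc
  nsd=data06node05
  servers=gpfsnode05
  usage=dataOnly
  failureGroup=1,1,1
  pool=hddpool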


D Quotas

D.1 New Quota Calculation

[1.6]+ This chapter is only valid for appliance version 1.6 and later. For older appliances please refer to D.2: Pre-calculated Quotas on page 104.

The quota calculation for this and the following appliance releases is more complex than the quota calculations in previous releases. A utility script is provided to make the calculation easier.

In general the quota calculation follows the SAP recommendations for HANA 1.4 and later. In some situations the calculated quota may be bigger than the available storage capacity; in this case the quotas are reduced to the storage size.

For HANA single nodes and clusters, there is a quota for HANA log files and a quota for HANA data files. In DR-enabled clusters a quota should be set only for HANA's log files.

The formulas for the quota calculation are:

quota for logs = (# active Nodes) x (RAM per node in GB) x 1 x (Replication factor)
quota for data = (# active Nodes) x (RAM per node in GB) x 3 x (Replication factor)

The number of active nodes needs explanation. For single nodes, this number is of course 1. For clusters it is the count of all cluster nodes which are not dedicated standby nodes. A dedicated standby node is a node which has no HANA instance running with a configured role of master/slave. Two examples:

• In an eight-node cluster, only one HANA database is installed. The first six nodes are installed as worker nodes, the last two are installed as standbys. So this cluster clearly has two dedicated standby nodes.

• Another eight-node cluster has a HANA system ABC installed with the first seven nodes as workers and the last node as a standby node. A second HANA system QA1 is installed with a worker node on the last (eighth) node and a standby node on node seven. This cluster has no dedicated standby node, as the eighth node is not "standby only"; it is actually active for the QA1 system.

For DR the log quota is also calculated based on the number of active nodes; since only one HANA cluster is allowed on the DR file system, this is simply the count of the worker nodes.

The replication factor should be 1 for single nodes, 2 for clusters and 3 for DR-enabled clusters.
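As a worked example with assumed values only: a four-node HA cluster with 512 GB RAM per node and one dedicated standby node has three active nodes and a replication factor of 2, so the formula yields:

quota for logs = 3 x 512 x 1 x 2 = 3072 GB
quota for data = 3 x 512 x 3 x 2 = 9216 GB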

Manual calculation is not recommended. Please use the new saphana-quota-calculator.sh.

D.1.1 Quota Calculation Script

[1.6]+ Starting with release 1.6 a new script is available to ease the quota calculation. The standard installation uses this script to calculate the quotas during installation, and the administrator can also call this script to recalculate the quotas after a topology change, e.g. installation of additional HANA instances, changing node roles, or shrinking or growing the cluster.

Most values are read from the system or guessed. For a cluster the standard assumption is one dedicated standby node. For a DR solution no reliable guess about the nodes can be made, and the manual override must be used.

The basic call is

# saphana-quota-calculator.sh


As a result it will print the calculated quotas and the commands to set them. After reviewing these you can add the -a parameter to the call, which will automatically set the quotas as calculated.

If you are running a cluster and the number of dedicated standby nodes is not one, use the parameter -s <# standby> to set a specific number of standby hosts; 0 is also a valid value.

In the case of a DR-enabled cluster, the guess for the active worker nodes will always be wrong. Please also use the parameter -w <# workers> to set the number of nodes running HANA as active workers. The number of workers and standbys should equal the number of nodes on a site.

Additional parameters are -r to get a more detailed report on the quota calculation and -c to verify the currently set quotas (a deviation of 10% is tolerated, which may be too inaccurate for larger clusters).
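For example, on a cluster with two dedicated standby nodes the following calls could be used (the standby count of 2 and the combination of parameters are assumed examples):

# saphana-quota-calculator.sh -s 2 -r
# saphana-quota-calculator.sh -s 2 -a

The first call only reports the detailed calculation; the second call additionally sets the calculated quotas.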

D.2 Pre-calculated Quotas

-[1.5] This chapter is only valid for appliance versions up to and including 1.5.x. For later appliance versions please see D.1: New Quota Calculation on page 103.

[DR] In DR-enabled clusters, only the quota for logs must be set. Use only the number of active (worker) nodes on the primary site for the calculation; do not count standby nodes. Follow the instructions below.

Refer to the following tables for the correct quotas for single node and cluster servers. Please note that the values for the cluster include the HA replication cost and are set for the whole cluster, so you need to set them only on one node. The calculated values are in gigabytes.

The quotas for SAP HANA single node servers are calculated with these formulas:

quota for logs = (# Nodes) x (RAM per node in GB) x 1
quota for data = (# Nodes) x (RAM per node in GB) x 3

Model        Quota for Data   Quota for Logs
XS           384              128
S, SSD, S+   768              256
M            1536             512
L            3072             1024
XL           6144             2048
XXL          12288            4096
XXXL         18432            6144

Table 14: Calculated quotas for Single Node Servers

The quotas for SAP HANA clusters are calculated with these formulas:

quota for logs = (# Nodes) x (RAM per node in GB) x 2
quota for data = (# Nodes) x (RAM per node in GB) x 6
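As a cross-check (assuming 512 GB RAM per node for the M model, consistent with Table 14), a four-node M cluster gets:

quota for logs = 4 x 512 x 2 = 4096 GB
quota for data = 4 x 512 x 6 = 12288 GB

which matches the corresponding row in Table 15 below.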


Model                     # Nodes   Quota Data   Quota Log
SSD, S (7143-HBX only)    2         3072         1024
                          3         4608         1536
                          4         6144         2048
                          5         7680         2560
                          6         9216         3072
                          7         10752        3584
                          8         12288        4096
                          9         13824        4608
                          10        15360        5120
                          11        16896        5632
                          12        18432        6144
                          13        19968        6656
                          14        21504        7168
                          15        23040        7680
                          16        24576        8192
M                         2         6144         2048
                          3         9216         3072
                          4         12288        4096
                          5         15360        5120
                          6         18432        6144
                          7         21504        7168
                          8         24576        8192
                          9         27648        9216
                          10        30720        10240
                          11        33792        11264
                          12        36864        12288
                          13        39936        13312
                          14        43008        14336
                          15        46080        15360
                          16        49152        16384
L                         2         12288        4096
                          3         18432        6144
                          4         24576        8192
                          5         30720        10240
                          6         36864        12288
                          7         43008        14336
                          8         49152        16384
                          9         55296        18432
                          10        61440        20480
                          11        67584        22528
                          12        73728        24576
                          13        79872        26624
                          14        86016        28672
                          15        92160        30720
                          16        98304        32768

Table 15: Calculated quotas for HA-clusters


E IBM Machine Type Model Code (MTM) to SAP HANA T-Shirt Size Mapping

The following table shows the SAP HANA T-Shirt Size to Machine Type Model (MTM) code mapping. The last X in the MTM is a placeholder for the region code the server was sold in, for example, a U for the USA.

SAP HANA T-Shirt Size   IBM MTM (World)        IBM MTM (Generation 1 only, Canada only)
XS                      7147-H1X, 7147-HAX     7147-H7X
S                       7147-H2X, 7147-HBX     7147-H8X
SSD                     7147-H3X               7147-H9X
S+                      7143-H1X, 7143-HAX     7143-H4X
M                       7143-H2X, 7143-HBX     7143-H5X
L-Option for M          7143-H3X, 7143-HCX     7143-H6X
XL/XXL/XXXL             7143-HDX

Table 16: SAP HANA T-Shirt Size to IBM MTM Mapping


F Public References

F.1 IBM External References

IBM System x®

• IBM Standalone Solutions Configuration Tool

• MIGR-5087035 – IBM Systems Solution for SAP HANA appliance Quick Start Guide

• MIGR-5085479 – IBM System x3850 X5, System x3950 X5 Installation and user's guide (p/n 81y1249)

• MIGR-5085206 – IBM System x3690 X5 Installation and User’s Guide (p/n 00d1097)

• MIGR-5087114 – MegaCLI (Command Line) Utility for Storage Management for Linux – IBM BladeCenter and System x

• TOOLS-ASU – IBM Advanced Settings Utility (ASU)

• TOOLS-DSA – IBM Dynamic System Analysis (DSA)

• MIGR-5090923 – IBM SSD Wear Gauge CLI utility

IBM General Parallel File System™ (GPFS)

• IBM General Parallel File System Documentation

• GPFS FAQ (with supported OS levels)

• GPFS Service on IBM Fix Central (IBM ID required) for GPFS 3.5.0 and GPFS 3.4.0

• GPFS Books

– IBM Cluster products information center: GPFS

– IBM developerWorks Article: Install and configure General Parallel File System (GPFS) on xSeries

• GPFS Support in IBM Support Portal (IBM ID required)

IBM High IOPS Driver from Fix Central

• MIGR-5083174 – IBM High IOPS software matrix

• IBM High IOPS Driver Search on IBM Support (IBM ID required)

• Current IBM High IOPS Supported Firmware for SLES 11 (IBM ID required)

F.2 IBM Corrective Service Tips (IBM ID required)

• RETAIN tip: H203288 – High IOPS device not detected by operating system

• RETAIN tip: H204308 – "EXT QPI LINK" events and possible operating system hang

• RETAIN tip: H206022 – "Native configuration is no longer supported by the current controller and firmware" message at POST

• RETAIN tip: H204966 – NMI error occurs after commands run in Linux operating system - IBM System x

• RETAIN tip: H207295 – IBM System x3850 X5 SAP HANA configurations can reset in rare cases


F.3 SAP Service Marketplace (SAP Service Marketplace ID required)

• SAP Service Marketplace (SAP HANA 1.0 Installation, Configuration, Upgrade, System Administration, Security)

• SAP HANA Ramp-up Knowledge Transfer Learning Maps → SAP High Performance Analytical Appliance → SAP HANA 1.0 → Technology Consultants

• SAP HANA Software Download on SAP Service Marketplace → Software Download → Entry by Component

F.4 SAP Help Portal

• SAP Help Portal (SAP HANA 1.0 Installation, Configuration, Upgrade, System Administration, Security)

• SAP HANA Master Guide

F.5 SAP Notes (SAP Service Marketplace ID required)

SAP Notes with general information about the IBM Systems Solution for SAP HANA appliance

• SAP Note 1650046 – IBM SAP HANA Appliance Operations Guide

• SAP Note 1661146 – IBM Check Tool for SAP HANA appliances

• SAP Note 1880960 – IBM Systems Solution for SAP HANA PTF List

• SAP Note 1730996 – Unrecommended external software and software versions

• SAP Note 1730929 – Using external tools in an SAP HANA appliance

• SAP Note 1803039 – Statistics server CHECK_HOSTS_CPU intern. error when restart

• SAP Note 1898103 – IBM Health Checker for SAP HANA appliances

SAP Notes regarding GPFS

• SAP Note 1787005 – HANA installation on GPFS with timestamps in the future

• SAP Note 1846872 – "No space left on device" error reported from HANA

• SAP Note 1641148 – HANA server hang caused by GPFS issue

SAP Notes regarding HANA

• SAP Note 1681092 – Multiple SAP HANA databases on one appliance

• SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery

• SAP Note 1819928 – SAP HANA appliance: Revision 50 of SAP HANA database

• SAP Note 1780950 – Connection problems due to host name resolution

• SAP Note 1829651 – Time zone settings in HANA scale out landscapes

• SAP Note 1523337 – SAP HANA Database 1.0: Central Note

• SAP Note 1514967 – SAP HANA 1.0: Central Note

• SAP Note 1514966 – Sizing SAP High-Performance Analytic Appliance 1.0

• SAP Note 1513496 – SAP HANA 1.0: Release Restrictions


• SAP Note 1743225 – Potential failure of connections with scale out nodes

SAP Notes regarding SLES (SUSE Linux Enterprise Server), SLES for SAP, and Linux

• SAP Note 618104 – Linux SAP System Information Tool

• SAP Note 1824819 – Optimal settings for SLES 11 SP2 and SLES for SAP 11 SP2

F.6 Novell SUSE Linux Enterprise Server References

Currently Supported

• SUSE Linux Enterprise Server 11 SP2 Release Notes

• SUSE Linux Enterprise Server for SAP Applications 11 SP2 Media

Supported until September 2013

• SUSE Linux Enterprise Server 11 SP1 Release Notes

• SUSE Linux Enterprise Server for SAP Applications 11 SP1 Media



G Copyrights and Trademarks

© IBM Corporation 1994-2013. All rights reserved. References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

The following terms are registered trademarks of International Business Machines Corporation in the United States and/or other countries: AIX, AIX/L, AIX/L(logo), DB2, e(logo)server, IBM, IBM(logo), pSeries, System/390, z/OS, zSeries.

The following terms are trademarks of International Business Machines Corporation in the United States and/or other countries: Advanced Micro-Partitioning, AIX/L(logo), AIX 5L, DB2 Universal Database, eServer, i5/OS, IBM Virtualization Engine, Micro-Partitioning, iSeries, POWER, POWER4, POWER4+, POWER5, POWER5+, POWER6, GPFS.

A full list of U.S. trademarks owned by IBM may be found at: http://www.ibm.com/legal/copytrade.shtml.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

SAP, the SAP Logo, mySAP, R/3, HANA are trademarks or registered trademarks of SAP AG in Germany and many other countries.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

Other company, product or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide home pages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.


